Having worked deeply in the AI domain for a year now at Flipkart, and through my own personal experiments, I have formulated a theory that I would love to build on in the coming year.
The theory revolves around the evolution and value shift of technology infrastructure, and how companies should approach leveraging AI for long-term success. Here are its key tenets:
When I look at the AI wave, I see history repeating itself.
Electricity: In the early days, the most valuable companies were those building power grids and generators, like GE. But today, the biggest value lies with those who use electricity to build experiences - from factories to smartphones to, now, AI model companies. While the refrigerator was the tech innovation, Coca-Cola went on to become the cold-beverage giant.
Cloud Computing: AWS, Google Cloud, and Azure were foundational to the internet economy. But the real user value sits with companies like Netflix, which leveraged the cloud to deliver something tangible and delightful to people.
The Internet: The early web was all about protocols and infrastructure. The long-term value went to Amazon, Uber, and other companies that built user-first applications on top of it.
AI will follow the same curve. Companies like OpenAI are today’s “cloud providers” - building the base models. But the long-term value will accrue to those who create deeply useful, human-centric applications built on this foundation.
We’re in the LLM infrastructure phase right now. The big breakthroughs - GPTs, Claude, Gemini - are the “AWS” equivalents of this generation. But the next decade belongs to those who can turn these models into real utility.
I believe the winners will be companies that:
Build for real, high-impact consumer needs.
Go beyond novelty and solve persistent pain points.
Integrate AI seamlessly into how people live and work.
Today’s AI tools are great at surface-level interactions - chat, summarization, Q&A - but they struggle with constraint optimization and multi-source synthesis.
The next leap will come from systems that can reason like humans do - gathering inputs from different sources (friends, research papers, podcasts, expert blogs), weighing them, and arriving at contextual conclusions.
This requires modular AI architectures: multiple specialized models (for user personas, domains, or intents) working together and aggregating their insights into actionable outcomes.
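To make this concrete, here is a rough sketch (in Python, with entirely placeholder names) of what such a modular setup could look like: a question fans out to several specialized "experts", and an aggregator weighs their answers into one conclusion.

```python
# A minimal sketch of the modular idea: several specialized "experts"
# (each could be a fine-tuned model or a prompt over a base LLM) answer
# the same question from their own angle, and an aggregator synthesizes
# them into one contextual conclusion. All names here are placeholders.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ExpertOpinion:
    source: str        # e.g. "research-papers", "podcasts", "friends"
    answer: str
    confidence: float

def run_experts(question: str, experts: dict[str, Callable[[str], ExpertOpinion]]) -> list[ExpertOpinion]:
    # Fan the question out to every specialized model.
    return [ask(question) for ask in experts.values()]

def aggregate(opinions: list[ExpertOpinion]) -> str:
    # Naive synthesis: order answers by each expert's confidence.
    # A real system would use another model to reconcile conflicts.
    ranked = sorted(opinions, key=lambda o: o.confidence, reverse=True)
    return "\n".join(f"[{o.source}, {o.confidence:.2f}] {o.answer}" for o in ranked)
```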
One principle I deeply believe in: users should always control what they're consuming, and consume based on who they trust. AI solutions should give users control over which sources to trust, which models to use, and how their data is processed.
AI systems shouldn’t be black boxes. People should be able to choose:
Whose wisdom or frameworks they want to rely on - from Naval Ravikant to the Bhagavad Gita.
How much weight to give each perspective.
What kind of reasoning process the AI uses.
True personalization is not just about context - it’s about control.
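A small sketch of what "control, not just context" could mean in practice: a user-declared trust profile that the system has to respect when ranking advice. The field names and weights below are purely illustrative, not a real product configuration.

```python
# The user declares which sources they trust and how much weight each
# gets; the system ranks advice by those weights instead of its own.
from dataclasses import dataclass, field

@dataclass
class TrustProfile:
    # source name -> weight the user assigns to it (0.0 - 1.0)
    source_weights: dict[str, float] = field(default_factory=dict)
    reasoning_style: str = "first-principles"   # or "precedent-based", etc.
    preferred_model: str = "any"                # the user picks the underlying model

profile = TrustProfile(
    source_weights={"Bhagavad Gita": 0.6, "Naval Ravikant": 0.3, "random blogs": 0.1},
)

def weigh_advice(advice_by_source: dict[str, str], profile: TrustProfile) -> list[tuple[float, str, str]]:
    # Rank advice by the user's own trust weights; unknown sources get 0.
    return sorted(
        ((profile.source_weights.get(src, 0.0), src, text) for src, text in advice_by_source.items()),
        reverse=True,
    )
```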
AI should not live in isolated apps - it should live in context. I see AI becoming a co-pilot for everyday thinking and action:
Journaling that automatically captures reflections and patterns.
Meeting assistants that not only summarize but contextually connect decisions across projects.
Tools that help you track, reason, and act across your professional and personal life - not as a task manager, but as a thinking partner.
One of my strongest beliefs is that AI can help democratize wisdom.
I believe in creating systems where AI acts as a mediator that can provide wisdom from curated sources, enabling people to make better decisions.
- For example, if someone is facing a personal challenge, they should be able to access wisdom from sources they resonate with, like the Bhagavad Gita or a modern thinker, in a contextual and relevant manner.
- This will also bring solutions where people can convert their knowledge into model weights, the same way books once preserved knowledge - and as subsequently happened with audio (discs), video, and eventually cloud-based solutions.
Sample use cases -
Being able to draw on the religious teachings relevant to a given community.
Business knowledge and decision-making powered by people like Warren Buffett. Steve Jobs once asked how we might preserve great minds like Aristotle so that we could later ask them our questions.
A step beyond this: you don't even need to ask the system explicitly - it should integrate seamlessly into your workflows, with the relevant voices offering advice and aiding decision-making whenever needed. You would also have a host of AI-enabled workers operating on principles derived from these personalities.
Imagine:
Being able to channel Warren Buffett’s mental models while making investment decisions.
Drawing from community-specific sources like religious texts when facing moral or personal dilemmas.
Working alongside AI “assistants” trained on the principles of your favorite thinkers - seamlessly integrated into your workflows.
AI becomes not just a tool but a bridge between past wisdom and present action.
Most people’s knowledge today is trapped in conversations, experiences, or memories. The world lacks incentives for individuals to share structured data.
I see a massive opportunity here - enabling people to document their experiences (travel, career, education) in structured formats and then get value back in the form of personalized insights.
This requires:
Trust-based systems that respect privacy.
Clear awareness of how shared data is used.
Tangible benefits for the contributors.
If done right, AI can create a positive-sum ecosystem where wisdom compounds.
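As a thought experiment, a structured experience record might look something like this - the contributor states what happened and, just as importantly, what the data may be used for. The schema below is an illustrative assumption, not a spec.

```python
# A sketch of a "structured experience" record: the contributor's own
# account plus explicit consent scopes. Field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class ExperienceRecord:
    domain: str                 # "travel", "career", "education", ...
    summary: str                # what happened, in the contributor's words
    lessons: list[str]          # the distilled takeaways
    consent_scopes: list[str] = field(default_factory=lambda: ["aggregate-insights"])
    attribution: bool = False   # does the contributor want credit when reused?

record = ExperienceRecord(
    domain="career",
    summary="Switched from services to product management after four years.",
    lessons=["Portfolio projects mattered more than certifications."],
    consent_scopes=["aggregate-insights", "personalized-recommendations"],
)
```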
The future of AI assistants won’t just be reactive. They’ll understand context, track goals, and orchestrate actions across domains.
Think of an assistant that:
Knows your ongoing projects and pulls in relevant insights automatically.
Records, organizes, and summarizes your conversations.
Acts on your behalf - scheduling, drafting, executing - while staying aligned with your preferences and values.
This assistant will be powered by the democratized wisdom of others and close the loop from recommendation to execution.
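Here is a toy sketch of that loop: an assistant that plans a step toward a goal, checks it against boundaries the user has set, executes, and repeats. plan_next_step stands in for a model call; everything here is a placeholder, not a working agent.

```python
# From reactive Q&A to an assistant that tracks a goal and closes the
# loop from recommendation to execution, within user-set boundaries.
from dataclasses import dataclass

@dataclass
class Step:
    action: str        # e.g. "draft-email", "schedule-review", "done"
    rationale: str

def plan_next_step(goal: str, history: list[str]) -> Step:
    # Placeholder for an LLM call that reasons over the goal and history.
    return Step("done", "goal already satisfied") if history else \
           Step("draft-email", "first step toward the goal")

def within_user_preferences(step: Step, blocked_actions: set[str]) -> bool:
    # The assistant acts on your behalf only inside boundaries you set.
    return step.action not in blocked_actions

def run_agent(goal: str, blocked_actions: set[str], max_steps: int = 5) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        step = plan_next_step(goal, history)
        if step.action == "done" or not within_user_preferences(step, blocked_actions):
            break
        history.append(f"executed {step.action}: {step.rationale}")
    return history
```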
Diminishing value of Experience -
Experience gives two things: prior data around certain aspects, and the ability to pre-empt the impact size of a decision, and hence take better decisions.
But as models get better with more data, AI becomes capable of guiding better decision-making. Only the person who actually takes the decision and executes on it remains valuable, making middle managers irrelevant.
Example - designers bring knowledge of how users typically read and consume information in apps. That knowledge will no longer hold value in the long term, as LLMs able to share context with vision models will be able to give that general feedback. But first-principles problem solvers who also have design understanding will still stand the test of AI.
PM Jobs :
Writing evaluations will become a key part of the PM job.
PMs will need to be able to execute simpler tech pilots themselves, with prototyping and tooling becoming a key application lever.
PM middle managers - who are mainly responsible for disseminating relevant information to IC PMs - will become less important.
The next phase of this evolution will focus on three pillars:
Capturing high-quality data - finding ways to incentivize and structure human knowledge.
Custom solutions and model architecture – building domain-specific systems that can reason across modalities and context.
Agentic Behavior – moving from reactive chatbots to autonomous, multi-step agents capable of reasoning and acting.
My theory is about building highly contextual, user-controlled AI applications that integrate deeply into decision-making and everyday activities, enabling people to make informed choices based on trusted, structured data sources. These solutions will hold the greatest long-term value as AI technology matures.