AI in 2025
💻🤖🧠
“Because innovation like the former can’t exist without the likes of the latter…”
Meme by KK the memelord (Me).
Okay, so now that we have our priorities clear…
1. Welcome to 2025
- Large Language Models (LLMs) are the dominant type of model, powering most of today’s AI.
- They may not be the best approach, or even a path, to Artificial General Intelligence (AGI) or Artificial Super Intelligence (ASI) - the points at which machine intelligence equals or surpasses human intelligence.
- AGI could arrive around 2030-2035, says Nobel laureate and DeepMind CEO Demis Hassabis.
- Google is outperforming all other models on most tasks with its Gemini 2.5 Pro (language) and Veo 3 (video generation) models. The former king of AI is back, it seems.
- Google’s offerings are also nuts: generous free API access for developers.
- Other types of models are catching up, like JEPA (Joint Embedding Predictive Architecture, from Meta; latest: V-JEPA), which has a more grounded understanding of the physical world and is meant for robots and other applications aiming to achieve Advanced Machine Intelligence (AMI). AMI could be one of the pathways to AGI.
2a. Nature of LLMs
- 📈Are stochastic (probabilistic) simulations of people
- Have emergent behaviour, or exhibit Emergence
- 🧠😕Have anterograde amnesia: they don’t know anything beyond their training cutoff
- 🧠🤯ADHD (Attention-Deficit Hyperactivity Disorder)-like behaviour and hallucination: they easily get distracted within their vast amount of knowledge.
- 🧠🤓Jagged intelligence: smart most of the time, but dumb at times. Ask how many ‘r’s are in ‘strawberry’, and they often get it wrong. This is mainly because computers don’t think the way we do - their thinking is upside down: they can do rational/complex math more easily than intuitive, simple tasks like identifying a fruit in a basket. For a two-year-old human, identifying a fruit is easier than doing complex math. (More on this in How We The Humans Build, and see Moravec’s paradox below.)
- 🏃♂️❌Don’t continuously learn or think autonomously
- 😴❌Don’t sleep to consolidate knowledge, insight, and expertise into their weights (that happens only while the model is being created); humans do that every night.
- 🤔❌Lack true reasoning and understanding: being just stochastic simulations of people, they have no genuine understanding.
- 🔠❌🔢✅They see tokens, not letters or words: LLMs think in tokens. Text is split into tokens (often sub-word chunks rather than whole words) for the model to process, and the model’s output tokens are converted back into text for you to read - see the quick demo below.
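A minimal demo of this, using the open-source tiktoken tokenizer; the exact token split is tokenizer-specific, so treat the output as illustrative:

```python
# What an LLM actually "sees": integer token IDs, not letters.
# Requires: pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by several OpenAI models

word = "strawberry"
token_ids = enc.encode(word)
chunks = [enc.decode([t]) for t in token_ids]

print(token_ids)  # a few integer IDs - not 10 letters
print(chunks)     # sub-word chunks (exact split depends on the tokenizer)
# The model only ever receives the integer IDs, which is part of why
# "count the r's in strawberry" is a surprisingly hard question for it.
```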
2b. Some more things about LLMs
- 🧍♀️🤔🖥️🔤Unlike humans: LLMs are just probabilistic next-token prediction systems (a token is a numerical representation of a piece of text). Useful as they are, they process language and predict the next chunk of words statistically - see the sampling sketch after this list. A) That is not how humans think: humans think in mental representations of objects and events, not in language. B) It means an LLM can only say things derived from the data it was trained on, with some degree of creativity.
- ❓They cannot make new discoveries or inventions: because they don’t ask questions or act on their own - they lack curiosity and true agency.
- LLMs can’t be AGI: as Yann LeCun says, LLMs are quite limited and lack true reasoning and human-like thinking capabilities.
- 🤔💭❌Reasoning involves searching through a solution space. LLMs have no such mechanism (see the toy search sketch after this list).
- 🔤✔️📷🎥⁉️Language is already compressed, unlike images and video: generating text in an LLM is just predicting the next most probable dictionary words. Generating images or video, though, requires knowledge of the physical world - a mental model - and the endless possibilities in 3D space are mathematically intractable.
- 🥅👎Have no goals of their own
- Four critical capabilities AI needs for true understanding (according to Yann LeCun):
  - Understanding of the physical world
  - Persistent memory
  - The ability to reason
  - The ability to plan
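To make “probabilistic next-token prediction” concrete, here is a toy sketch; the vocabulary and logits are invented for illustration (a real model scores on the order of 100k tokens at every step):

```python
# Toy next-token prediction: the model assigns a score (logit) to every
# token in its vocabulary, and the next token is *sampled* from the
# softmax of those scores. Vocabulary and logits are made up here.
import numpy as np

vocab = ["Paris", "London", "Rome", "pizza"]
logits = np.array([3.2, 1.1, 0.9, -2.0])  # hypothetical scores after "The capital of France is"

def sample_next_token(logits, temperature=1.0):
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    idx = np.random.choice(len(probs), p=probs)
    return idx, probs

idx, probs = sample_next_token(logits)
print(dict(zip(vocab, probs.round(3))))  # e.g. {'Paris': 0.814, 'London': 0.1, ...}
print("next token:", vocab[idx])         # usually 'Paris', but not always - that's the stochastic part
```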
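And for contrast, here is what “searching through a solution space” looks like: a toy breadth-first search that explores states and tracks paths - a mechanism a plain LLM forward pass does not have. The puzzle (reach 22 from 2 using +3 and *2) is invented for illustration:

```python
# Toy solution-space search: breadth-first search over states.
# Reasoning-as-search explores alternatives; next-token prediction does not.
from collections import deque

def bfs(start, goal, ops):
    """Return a shortest sequence of operation names turning start into goal."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        value, path = queue.popleft()
        if value == goal:
            return path
        for name, fn in ops:
            nxt = fn(value)
            if nxt not in seen and nxt <= 10 * goal:  # crude bound to keep the space finite
                seen.add(nxt)
                queue.append((nxt, path + [name]))
    return None

ops = [("+3", lambda x: x + 3), ("*2", lambda x: x * 2)]
print(bfs(2, 22, ops))  # ['+3', '+3', '+3', '*2']: 2 -> 5 -> 8 -> 11 -> 22
```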
2c. What’s next for LLMs
- With LLMs alone, it’s simply not possible to reach AGI; they lack too many of the needed mechanisms. A new architecture is definitely needed to come close to AGI.
- But maybe we don’t need AGI after all - maybe we’ll get closer to utopia with just incremental improvements to LLMs or other short-of-AGI architectures.
- Just as the airplane isn’t exactly like a bird, and the submarine isn’t exactly like a fish, maybe AI need not be exactly like (or even similar to) humans…
- How similar will the earliest (or best) AGI be to the human brain? There’s certainly a degree of likeness between planes and birds that is above zero (on a scale of, let’s say, 0 to 10).
- Maybe the LLMs are the way to go… or maybe not…
- Maybe there’s something better than LLMs, which works like humans… or maybe unlike humans…
- Moravec’s paradox: going back to How We The Humans Build, Moravec’s paradox is the observation that tasks that are easy for humans, such as perception and mobility, are difficult for machines and AI to replicate, while tasks humans find challenging - high-level reasoning, complex calculations, logical analysis - are relatively easy for machines.
3. How to deal with AI
- AI needs to be kept on a leash, says Andrej Karpathy
- We need AI to augment✅ us, like Iron Man with his suit, instead of letting AI do it all the agentic❌ way.
4. A brief interlude on BioTech
- Biggest revolutions of this century (after the Internet):
- AI (Software + Hardware): The software wave is already here. The hardware/robotics wave is next - and it’s not far behind. This revolution is moving fast, largely because it doesn’t need heavy regulation to scale.
- BioTech: Possibly even more transformative and more ethically complex. Its pace may be slower due to strict regulations, as it directly involves the human body. But its implications run deeper: it will force humanity to rethink morality, identity, and what it means to be human. Neuralink, for instance, is already tackling things like blindness and aims to augment human capabilities in ways that could permanently reshape how we live. BioTech is being accelerated by AI, and while it’s not massive yet in 2025, brace yourself - the wave is coming.
5. Crisis of Meaning
- GDP will obviously grow because of increased productivity, so the economy may not suffer (unless there are other factors I’m not aware of).
- So even if people lose jobs, there may not be an Economic Depression.
- But there will be a huge crisis of meaning: merely having money and resources will not make humans happy - that’s not how we’ve evolved. We need meaning. There is a real possibility of an Existential Depression, or a Moral Disintegration.
- But the number of problems to solve will never be zero - there is cancer to cure and planets to colonize. So there will always be somewhere to look for meaning.
The future is here, and it’s definitely not boring.
Onward, to solve the next great problems. Onward, to the stars….
🌎🚀