Sunday, January 5, 2025

2025 AI: A ‘Killer App’ without AGI

This is the first of my 10 AI predictions for 2025. AGI and the Singularity have been hot topics in AI, sparking both fear and excitement. Sam Altman even recently predicted AGI would arrive in 2025, and Elon Musk in 2026. Bold claims like these are more about hype than reality. In 2025, there will be no AGI, but Large Language Models will find their “Killer App”.

What are AGI and the Singularity?

  • AGI (Artificial General Intelligence): This refers to an advanced AI that can think, learn, and solve problems across a wide range of tasks, just like a human.
  • The Singularity: This is the idea of AI surpassing human intelligence, improving itself endlessly, and causing massive, unpredictable changes in society.

My prediction: we won’t see any of this in 2025. Let’s dig into the technology to understand why we are not even close.

Sentence Completion is Not Intelligence or AGI

Generative AI, like OpenAI’s GPT models, can hold human-like conversations. That sounds amazing, but it is limited to spotting and repeating patterns. ChatGPT and similar systems are based on so-called Large Language Models, which work by predicting the most statistically probable next word, or token, based on their training data. One example:

  • Input: “Life is like a box of…”
  • Prediction: “chocolates” (thanks to Forrest Gump).

This isn’t real understanding—it’s just pattern matching. Generative AI doesn’t “consider” other options like “box of surprises”. It might seem intelligent because it can give polished responses, but it’s no more self-aware than a chess computer that doesn’t care if it loses a game.
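The prediction step above can be sketched in a few lines of Python. This is an illustrative toy only: the token probabilities below are made-up numbers, and real LLMs compute such distributions with neural networks over tens of thousands of tokens, not a lookup table.

```python
# Hypothetical probabilities a model might assign after the prompt
# "Life is like a box of ..." (invented values for illustration).
next_token_probs = {
    "chocolates": 0.62,  # dominant thanks to Forrest Gump quotes in the training data
    "surprises": 0.21,
    "crayons": 0.04,
    "rocks": 0.02,
}

def predict_next_token(probs: dict) -> str:
    """Greedy decoding: simply return the most probable next token."""
    return max(probs, key=probs.get)

print(predict_next_token(next_token_probs))  # prints "chocolates"
```

The model never “considers” whether “surprises” would be the more original choice; it just emits whichever token scores highest.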

OpenAI’s “o1”: isn’t that the first step toward AGI?

No, it’s not. Let’s look at what it is. OpenAI’s o1, released in 2024, does not answer a given question directly. Instead, it first creates a plan to determine the best way to answer, then critiques its response, improves it, and continues refining. This chained output is truly impressive.

Let’s critique the statement: ‘Life is like a box of chocolates.’

  • Cliché Factor: Overused.
  • Limited Scope: Focuses solely on unpredictability.
  • Cultural Bias: May not resonate universally.

Not bad… Based on this critique, the AI can now craft an improved sentence.

2025 will see many of these “chains”, but not AGI

I recently launched an eCornell online course that trains students to think about products using AI and data. To make this rather technical AI and product course accessible as a no-code course, I built in the same iterative process we see with o1:

  1. First, students formulate the product concept (plan).
  2. Next, the AI tool generates code autonomously (generation).
  3. During runtime, errors may arise (test).
  4. The AI tool then critiques its own output (critique) and iteratively refines it.
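The four steps above can be sketched as a simple loop. This is a minimal sketch of an o1-style plan/generate/test/critique cycle: `call_llm` is a hypothetical stand-in for any LLM API, and the prompts and loop structure are my assumptions, not OpenAI’s actual implementation.

```python
def call_llm(prompt: str) -> str:
    # Placeholder: in practice this would call a real model API.
    return f"[model output for: {prompt[:40]}...]"

def refine(task: str, max_cycles: int = 3) -> str:
    plan = call_llm(f"Make a plan to solve: {task}")                # 1. plan
    answer = call_llm(f"Follow this plan: {plan}")                  # 2. generate
    for _ in range(max_cycles):
        critique = call_llm(f"Critique this answer: {answer}")      # 3. test/critique
        answer = call_llm(f"Improve the answer using: {critique}")  # 4. refine
    return answer
```

Note that `max_cycles` is fixed in advance: each extra cycle buys a more polished answer at the cost of more time and compute, which is exactly the scaling limitation discussed below.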

The innovation lies in OpenAI’s ability to pass through this loop multiple times to improve the answer. But is this intelligence? No. It’s a fixed framework, not a dynamic one. Nor does the approach scale well: the more critique cycles you allow the model, the more facets of the problem it can address, but at the cost of increased time.

Don’t get me wrong: o1 is amazing, but it also highlights the fundamental technological challenge we face in the quest to achieve AGI.

The Barriers to Reaching AGI

  1. Humans can think fast and instinctively (System 1) or slowly and logically (System 2). AI only works through patterns, missing this balance.
  2. AI struggles with context and often misses important details humans pick up naturally.
  3. Current AI builds outputs on previous ones (called autoregressive models), so mistakes can snowball.

Many researchers believe that much of this might be fixable. When? 2030? 2050? Nobody knows, but it won’t be this year.

What Will Happen in 2025?

In 2025, we’ll see more narrow AI solutions integrated into chains similar to OpenAI’s “o1” approach. These systems will be designed to excel at specific tasks and, when combined, will enhance productivity and surpass human performance in many areas. This development will be exciting, but it’s important to emphasize that these advancements will not constitute AGI. We should focus on the real risks and opportunities of AI rather than getting sidetracked by debates about AGI and whether it will replace us. See here for a quick video with my view. In short: it won’t.

And What About Sam’s AGI Claim?

It’s mostly marketing. Big, bold claims grab attention, and the promise of AGI certainly does. Tomorrow, I’ll share my next prediction for 2025, which focuses on the biggest application of large language models. Sam’s prediction might seem over-the-top, but it’s the right strategy for what I see as the biggest application, the so-called “Killer Application”. Stay tuned.

Follow me here on Forbes or on LinkedIn for more of my 2025 AI predictions.
