The AI industry oscillates between AGI hype and scepticism. We cut through the noise with a sober analysis of what current systems can and cannot do, and what remains unsolved.
Artificial General Intelligence (AGI), loosely defined as a system that can perform any intellectual task a human can, remains the stated goal of several leading AI labs. Sam Altman has suggested AGI could arrive within a few years. Dario Amodei at Anthropic has spoken about "powerful AI" emerging by 2026-2027. Google DeepMind's Demis Hassabis has placed AGI within a decade. Yet many respected researchers argue these timelines are wildly optimistic. Cutting through the hype requires examining what current systems actually achieve and where fundamental gaps remain.
What Current Systems Do Well
Modern large language models are genuinely remarkable. They can write competent code, analyse complex documents, engage in nuanced reasoning across diverse domains, and even demonstrate rudimentary planning capabilities. On many standardised benchmarks, including graduate-level examinations and professional certifications, frontier models now match or exceed average human performance. Multimodal models extend these capabilities to images, audio, and video. The pace of improvement has been extraordinary, with capabilities that seemed years away arriving in months.
These systems are enormously valuable for business applications, which is why companies like QverLabs can build transformative products on top of them. But being useful and being generally intelligent are very different things.
Where Fundamental Gaps Remain
Current AI systems lack genuine understanding of the physical world, struggling with basic intuitive physics that toddlers master effortlessly. They cannot learn efficiently from small amounts of data the way humans do; each new capability requires enormous datasets. They have no persistent memory or identity across sessions. They cannot formulate truly novel hypotheses or design experiments to test them. And they struggle with long-horizon planning and with adapting to genuinely novel situations that differ substantially from their training data.
Perhaps most critically, current systems lack what researchers call "world models," internal representations of how the world works that enable prediction, planning, and counterfactual reasoning. Without these, AI systems are sophisticated pattern matchers rather than genuine reasoners, even when their pattern matching is impressively powerful.
A More Useful Frame
Rather than debating whether AGI will arrive in three years or thirty, a more productive framing focuses on specific capabilities and their timelines. AI systems that can reliably manage complex multi-step business workflows are here now. Systems that can conduct genuine scientific research autonomously are likely still years away. Systems that match the full breadth and flexibility of human cognition may be decades away, or may require fundamental architectural innovations we have not yet conceived.
For businesses, the practical implication is clear: there is enormous value to capture from AI as it exists today, without waiting for AGI. The organisations that benefit most are those that focus on concrete, high-value applications rather than speculating about theoretical future capabilities.