Human Attention And Desire Are The New AI Bottlenecks

ai, llms, attention, validation, innovation

People often claim that compute or data limits AI. I believe human attention and desire are now the real bottlenecks of AI development.

Before 2020, supervised learning was the dominant paradigm in AI. Deep neural networks, often with tens or hundreds of millions of parameters, were trained for specific tasks like image recognition or machine translation. Datasets were small by today’s standards, typically containing tens or hundreds of thousands of examples. The behavior of models trained on such datasets was fairly predictable, and we understood their limitations better. Progress was also visible to the naked eye. For example, an improvement in image recognition accuracy from 70% to 90% was obvious from inspecting just a handful of predictions made by the model before and after the upgrade.

Today, model ambition has increased dramatically. Foundation models no longer make simple mistakes; they can answer PhD-level questions, which makes finding their mistakes and limitations much harder (xAI releases Grok 4, claiming Ph.D.-level smarts across all fields). The number of tasks has grown exponentially. Multiply the sheer number of tasks by the expertise needed for each, and we would require an exponential number of human hours to truly validate these models. I don’t believe distilling this complexity into a single benchmark like ARC-AGI (link1, link2) adequately captures the challenge.
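To make the scale concrete, here is a back-of-envelope estimate of the validation effort. Every number below is my own illustrative assumption, not a measured figure:

```python
# Back-of-envelope estimate of human hours needed to validate
# foundation models. All numbers are illustrative assumptions.

num_domains = 50            # broad expert domains (law, medicine, ...)
tasks_per_domain = 200      # distinct task types worth probing per domain
hours_per_task = 2          # expert hours to probe one task meaningfully
releases_per_year = 20      # notable model/application releases to cover

hours_per_release = num_domains * tasks_per_domain * hours_per_task
hours_per_year = hours_per_release * releases_per_year

print(f"hours per release: {hours_per_release:,}")   # 20,000
print(f"hours per year:    {hours_per_year:,}")      # 400,000

# At roughly 2,000 working hours per expert-year, that is about
# 200 expert-years of attention annually -- before any new
# capabilities even ship.
print(f"expert-years/yr:   {hours_per_year / 2000:.0f}")  # 200
```

Even with these conservative assumptions, thorough validation already demands hundreds of expert-years per year, and each term in the product keeps growing.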

You need to be an expert investing significant mental effort to probe these foundation models. Even as an expert trying to probe them myself, I barely scratch the surface. With new model and application releases arriving every day, there is not enough time to give them the attention they deserve. If AI development stopped today (not from a safety angle, but as a thought exercise), I believe we would still have five to ten years of applying, probing, and simply inventing new ways of using these models.

Finally, human desire is a factor. Many people I know, both in and outside AI, give up easily when using AI, and I am sometimes guilty of this myself. They try a prompt, it does not work as expected, and they immediately conclude that AI is not great. This quick surrender is a problem, and I keep reminding myself to keep stretching these models to their limits.

This immediate capitulation to AI’s perceived shortcomings could also be psychological. People may subconsciously resist AI, fearing replacement, and look for excuses to dismiss it. But in this new world where AI is increasingly taking over tasks, we must fully embrace it rather than run away from it.

Acknowledgments

Thanks to Alejandro Cartagena and Keke for discussions over dinner that helped shape these ideas.