It’s critical to start small.
While large language models (LLMs) have a vast associative memory capacity that enables impressive predictions, they currently lack an understanding of the meaning of the information they store. In his book A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains, Max Bennett stresses the importance of this understanding:
“It is possible, perhaps inevitable, that continuing to scale up these language models by providing them with more data will make them even better at answering commonsense and theory-of-mind questions. But without incorporating an inner model of the external world or a model of other minds—without the breakthroughs of simulating and mentalizing—these LLMs will fail to capture something essential about human intelligence. And the more rapid the adoption of LLMs—the more decisions we offload to them—the more important these subtle differences will become.”