Researchers Build a Mind-Mimicking AI
A team of scientists has trained an AI model that they say can mimic the human mind. They built it on a large language model (LLM) and named it Centaur. The model was trained on a dataset called Psych-101, which compiles data from 160 psychology experiments and over 10 million decisions made by 60,000 people.
The researchers fine-tuned Llama, a model developed by Meta, on the choices people made in these studies. After training, the model could predict how people would respond in new situations. For example, in a task where people chose between virtual slot machines, Centaur's decisions looked more human than those of other models designed specifically for that task.
Some Scientists See Promise
Dr. Marcel Binz, one of the lead scientists, says Centaur could help researchers design and test new psychology experiments digitally before trying them on real people. He believes this tool could improve how we study human behavior.
Others agree that while Centaur may not fully explain the human mind, the Psych-101 dataset is a valuable resource for future research. Some say it can be used to test competing models and theories in psychology.
Others Remain Skeptical
Not all scientists are convinced. Dr. Blake Richards believes many experts will doubt the model. He says it doesn’t truly copy how humans think.
Dr. Jeffrey Bowers tested Centaur and found that it performs in ways no human can. It recalled 256 digits in a short-term memory test, far more than the seven or so digits most people can hold. He also found it reacts faster than humanly possible, raising doubts about how human-like it really is.
Dr. Federico Adolfi says Centaur is easy to break and calls the experiment data “a grain of sand in an ocean.”
Conclusion
Centaur has sparked a lively debate. Some scientists are hopeful about using it as a research tool; others say it doesn't act or think like a human at all. Still, most agree the project is interesting and may help future studies, even if it doesn't fully unlock the secrets of the human mind.