A team of scientists in Belgium may have solved one of the biggest challenges in the field of AI using a blockchain-based, decentralized training method. While the research is still in its earliest stages, its potential implications could range from revolutionizing outer space exploration to posing an existential threat to humanity.
In a simulated environment, the researchers developed a way to coordinate learning between individual, autonomous AI agents. The team used blockchain technology to facilitate and secure the agents’ communications, thus creating a decentralized “swarm” of learning models.
The individual training results for each agent in the swarm were then used to develop a larger AI model. Because the data was handled via blockchain, this bigger system benefited from the swarm’s collective intelligence without accessing any of the individual agents’ data.
AI swarms
Machine learning, the discipline underpinning modern artificial intelligence, comes in many forms. The typical chatbot, such as OpenAI’s ChatGPT or Anthropic’s Claude, is developed using multiple techniques: it is pre-trained with a paradigm called “self-supervised learning” and then fine-tuned with another referred to as “reinforcement learning from human feedback.”
One of the biggest challenges with this approach is that it typically requires the system’s training data to be consolidated in a centralized database. This makes it impractical for applications that require continuous autonomous learning or where data privacy is important.
The team built its blockchain experiments around a learning paradigm called “decentralized federated learning.” In doing so, they found that they could successfully coordinate the models while keeping the data decentralized.
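The paper’s code isn’t reproduced here, but the core idea behind federated averaging can be sketched in a few lines: each agent trains a local model on its own private data, and only the model parameters, never the raw data, are exchanged and averaged across peers. The toy linear model and the agent setup below are illustrative assumptions, not the researchers’ implementation.

```python
# Illustrative sketch of decentralized federated averaging (not the paper's code).
# Each agent fits a local model on private data; peers exchange only parameters.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])  # ground-truth weights the swarm tries to learn

def local_step(w, X, y, lr=0.1):
    """One gradient step on an agent's private data (simple linear regression)."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

# Each agent holds its own private dataset; raw data never leaves the agent.
agents = []
for _ in range(5):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    agents.append({"X": X, "y": y, "w": np.zeros(2)})

for _ in range(20):
    # 1) Local training on private data.
    for a in agents:
        a["w"] = local_step(a["w"], a["X"], a["y"])
    # 2) Peer exchange: only parameters are shared and averaged.
    avg_w = np.mean([a["w"] for a in agents], axis=0)
    for a in agents:
        a["w"] = avg_w

print("swarm estimate:", np.round(avg_w, 2), "target:", true_w)
```

In the researchers’ setup, the blockchain plays the role of the shared, tamper-resistant channel over which these parameter exchanges are recorded and verified.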
Swarm security
Most of the team’s research involved studying the swarm’s resiliency against various methods of attack. Because a blockchain is a shared, tamper-resistant ledger and the training network used in the experiment was itself decentralized, the team was able to demonstrate robustness against traditional hacking attacks.
However, they did find a definitive threshold for how many rogue robots the swarm could tolerate. The researchers devised scenarios featuring robots intentionally designed to harm the network, including agents with nefarious agendas, agents operating on outdated information, and robots coded with simple disruption instructions.
While the simple and outdated agents were relatively easy to defend against, the team found that smart agents with nefarious agendas could eventually perturb the swarm intelligence if enough were able to infiltrate it.
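The article doesn’t specify which defense the team used, but the existence of such a threshold follows from how robust aggregation generally works: a swarm can ignore a minority of poisoned updates, yet once rogue agents make up a large enough share, even robust statistics start tracking the attackers. The sketch below is a hypothetical illustration of that principle using a coordinate-wise median, not the paper’s method.

```python
# Illustrative sketch (not the paper's defence): median-based aggregation
# tolerates a minority of malicious updates but fails once rogue agents
# form a large enough share of the swarm.
import numpy as np

honest_update = np.array([1.0, 1.0])          # direction honest agents agree on
malicious_update = np.array([-10.0, -10.0])   # rogue agents push the opposite way

def aggregate(updates, robust=True):
    updates = np.array(updates)
    return np.median(updates, axis=0) if robust else np.mean(updates, axis=0)

for n_rogue in (1, 2, 3):
    updates = [honest_update] * (5 - n_rogue) + [malicious_update] * n_rogue
    print(f"{n_rogue} rogue of 5 -> mean {aggregate(updates, robust=False)}, "
          f"median {aggregate(updates)}")
```

With one or two rogue agents out of five, the median still follows the honest majority; with three, the poisoned updates take over, mirroring the kind of infiltration threshold the researchers observed.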
This research remains experimental and has only been conducted in simulation. But a time may come when robot swarms can be coordinated in a decentralized manner, allowing teams of AI agents from different companies or countries to work together to train a larger model without sacrificing data privacy.