About
In recent years, AI has become an increasingly powerful tool for information processing. It has reshaped society, whether through social media newsfeeds, resource allocation algorithms in government, or virtual assistants that are changing how people interact at work. Indeed, AI and other technologies have progressed to the point where our social institutions must be updated to keep pace.
The question we seek to address is: can we use AI itself to help us design and build better institutions, ones that enhance our capacity to collaborate, govern, and live together? We might even hope that using AI to improve our institutions could enable them to better govern our AI, in a virtuous loop. There is already plenty of work in this domain:
Online insight aggregation and deliberation platforms (such as Remesh, Pol.is, and Stanford’s moderation chatbot) could help surface topics of societal importance and enable productive discourse between people with differing beliefs and values, fostering greater social cohesion and, further down the line, better institutions.
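To make the idea of insight aggregation concrete, here is a minimal sketch, loosely in the style of Pol.is: participants vote agree, disagree, or pass on short statements, we cluster them into opinion groups, and we flag statements that attract agreement across groups. The vote matrix, cluster count, and agreement threshold are illustrative assumptions, not details of any platform named above.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Rows are participants, columns are statements; +1 agree, -1 disagree, 0 pass.
votes = np.array([
    [ 1,  1, -1,  1,  0],
    [ 1,  1, -1,  1, -1],
    [-1, -1,  1,  1,  0],
    [-1, -1,  1,  1,  1],
    [ 1, -1, -1,  1,  0],
])

# Project participants into 2D and cluster them into opinion groups.
coords = PCA(n_components=2).fit_transform(votes)
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)

# A statement with high average agreement in every group is a candidate
# point of common ground worth surfacing to participants and moderators.
for s in range(votes.shape[1]):
    per_group = [votes[groups == g, s].mean() for g in np.unique(groups)]
    if min(per_group) > 0.5:
        print(f"Statement {s} draws agreement across groups: {per_group}")
```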
AI-enabled tools (such as the AI Economist and DeepMind’s Democratic AI) could assist in designing institutions themselves, for example by searching for novel redistribution policies, or for new corporate structures that solve constrained optimisation problems more complicated than maximising profit.
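As a toy illustration of policy search as constrained optimisation, the sketch below sweeps a flat tax rate, assumes a simple (made-up) behavioural response in which agents work less as taxes rise, and scores each policy by a utilitarian welfare measure subject to a minimum-output constraint. The functional forms and numbers are assumptions for illustration, not anything taken from the AI Economist or Democratic AI.

```python
import numpy as np

productivity = np.array([1.0, 2.0, 4.0, 8.0])   # heterogeneous agents
min_total_output = 10.0                          # illustrative feasibility constraint

def evaluate(tax_rate):
    labour = 1.0 - 0.5 * tax_rate                # assumed behavioural response to taxation
    pre_tax = productivity * labour
    revenue = tax_rate * pre_tax.sum()
    # Redistribute revenue as an equal per-capita transfer.
    consumption = (1 - tax_rate) * pre_tax + revenue / len(productivity)
    welfare = np.log(consumption).sum()          # utilitarian welfare with log utility
    return welfare, pre_tax.sum()

best = None
for tax_rate in np.linspace(0.0, 0.9, 91):
    welfare, output = evaluate(tax_rate)
    if output >= min_total_output and (best is None or welfare > best[1]):
        best = (tax_rate, welfare)

print(f"Best feasible flat tax: {best[0]:.2f} (welfare {best[1]:.3f})")
```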
Better agent-based models, behavioural cloning from humans, and/or multi-agent learning algorithms (such as DeepABM, and agent-based models in economics more generally) could help us simulate how people respond to such policies.
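The sketch below shows what a minimal agent-based simulation of policy response can look like: each agent decides whether to comply with a policy based on a private threshold and on how many of its neighbours complied in the previous step. The network, thresholds, and incentive parameter are all assumptions made for illustration; systems like DeepABM replace such hand-written rules with far richer, often learned, behavioural models.

```python
import random

random.seed(0)
N, STEPS, INCENTIVE = 100, 20, 0.3

# Each agent has a private compliance threshold and five randomly chosen
# neighbours (a stand-in for a real social network).
thresholds = [random.random() for _ in range(N)]
neighbours = [random.sample(range(N), 5) for _ in range(N)]
complies = [False] * N

# At each step an agent complies if the policy incentive plus peer pressure
# exceeds its threshold.
for _ in range(STEPS):
    peer_share = [sum(complies[j] for j in neighbours[i]) / 5 for i in range(N)]
    complies = [INCENTIVE + 0.7 * peer_share[i] > thresholds[i] for i in range(N)]

print(f"Compliance after {STEPS} steps: {sum(complies) / N:.0%}")
```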
Finally, as AI becomes more embedded in our existing institutions, it will be critical to ensure that those AIs are more cooperatively intelligent, better aligned with human values, and built using processes that can capture a wide range of such values.
There are clearly high stakes here, regardless of where or whether such AI systems are deployed. Can we test such systems in lower-stakes settings? Where are the best opportunities for deploying them? What bottlenecks exist when attempting to do so, and how can we unblock them?
Answering such questions requires input from many different disciplines and perspectives, including multi-agent learning, game theory and mechanism design, economics, social choice, political science, philosophy, and complex systems, as well as from practitioners who can bring these ideas to life. If you’re interested, we want to hear from you!
The Collective Intelligence Project is a research and policy organisation looking to improve and leverage humanity’s collective intelligence to better govern and thus benefit from transformative technologies such as AI.
The Cooperative AI Foundation is a research foundation focused on understanding and improving the cooperative capabilities of AI systems, including transformative AI, for the benefit of all.