Embracing the Liquid Society
CHRIS SUMMERFIELD, TANTUM COLLINS
Institutional Problem
Describe the open institutional problem or research question you’ve identified, what features make it challenging, and how people currently deal with it.
Human preferences are volatile, self-contradictory and opportunistic, but social institutions such as the systems we use for democratic decision-making generally assume that they are constant and consistent. As a result, elections, referendums and other collective decision processes often select policy equilibria that fail to maximise societal welfare.
Plenty of research reveals the complexity, plasticity and even incoherence of human preferences. Often, we hold some views very firmly but are largely indifferent across (or frequently change our minds about) vast swathes of the policy space. In theory this opens the door to positive-sum “policy trades” whereby people relinquish agency over areas about which they care little in exchange for preferred outcomes in areas where they have a greater stake. A central planner in possession of granular preference representations could oversee such optimisation in service of a given social welfare function.
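As a toy illustration (with entirely made-up numbers and a simple utilitarian welfare measure), consider three voters and two binary issues. Issue-by-issue majority voting discards intensity information, whereas a planner that sees the full utility table can find a welfare-improving package deal. A minimal sketch in Python:

```python
from itertools import product

# Utility each voter gets from a "yes" on each issue, relative to "no" (= 0).
# Voter 0 cares strongly about issue A, voter 1 about issue B,
# and voter 2 is only mildly opposed to both (hypothetical numbers).
utilities = [
    {"A": 10, "B": -1},   # voter 0
    {"A": -1, "B": 10},   # voter 1
    {"A": -1, "B": -1},   # voter 2
]

def voter_utility(voter, outcome):
    """Utility of one voter for an outcome mapping issue -> 'yes'/'no'."""
    return sum(u for issue, u in voter.items() if outcome[issue] == "yes")

def total_welfare(outcome):
    return sum(voter_utility(v, outcome) for v in utilities)

# Baseline: issue-by-issue majority vote (intensity is ignored).
majority_outcome = {}
for issue in ("A", "B"):
    yes_votes = sum(1 for v in utilities if v[issue] > 0)
    majority_outcome[issue] = "yes" if yes_votes > len(utilities) / 2 else "no"

# Planner: search all packages for the welfare-maximising one.
all_outcomes = [dict(zip(("A", "B"), combo)) for combo in product(("yes", "no"), repeat=2)]
planner_outcome = max(all_outcomes, key=total_welfare)

print("majority:", majority_outcome, "welfare =", total_welfare(majority_outcome))
print("planner: ", planner_outcome, "welfare =", total_welfare(planner_outcome))
```

In this example the trade leaves the mildly opposed third voter slightly worse off, which is precisely the tension that the satisfaction thresholds discussed below are intended to manage.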
In practice, however, most institutions that elicit such information do so in ways that are far too lossy to accommodate this kind of fine-grained preference Tetris. Elections and referendums occur infrequently and ask questions crudely (choose yes/no, or one of a handful of candidates, etc.). Historically, this was understandable, since the tools available for gathering and aggregating feedback at scale could only process crude representations. Today, however, that is no longer the case.
Possible Solution
Describe what your proposed solution is and how it makes use of AI. If there’s a hypothesis you’re testing, what is it? What makes this approach particularly tractable? How would you implement your solution?
AI is well-suited to modelling preferences in a high-dimensional way, for instance via natural language conversation. Once a system has extracted rich representations of the views of members of a given community, it can set about making collective decisions that meet threshold levels of user satisfaction while maximising the collective gains to society according to some welfare function (e.g., one that emphasises social justice or sustainable consumption).
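At its core this is a constrained optimisation: among the outcomes that leave every member at or above their satisfaction threshold, pick the one that scores best under the chosen welfare function. A minimal sketch, assuming per-user utility functions and thresholds have already been extracted (all names here are illustrative):

```python
def plan(outcomes, user_utilities, thresholds, welfare):
    """Pick the welfare-maximising outcome among those that leave every
    user at or above their satisfaction threshold.

    outcomes:       iterable of candidate collective decisions
    user_utilities: list of functions, user_utilities[i](outcome) -> float
    thresholds:     list of per-user satisfaction floors
    welfare:        social welfare function over a list of utilities
    """
    feasible = [
        o for o in outcomes
        if all(u(o) >= t for u, t in zip(user_utilities, thresholds))
    ]
    if not feasible:          # no package clears everyone's floor
        return None
    return max(feasible, key=lambda o: welfare([u(o) for u in user_utilities]))

# Example welfare functions: utilitarian sum or Rawlsian maximin.
utilitarian = sum
maximin = min
```

A user’s indifference space corresponds to low (or zero) thresholds over the relevant dimensions, which is what gives the planner room to optimise.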
Concretely: community members could converse with a personal AI (PAI) that would make decisions on their behalf about local resource allocation, political influence, consumption, etc. The user would provide feedback to their PAI on the outcomes of those choices. People would most likely scale their engagement according to the personal stake they have in each outcome: where they are broadly indifferent they might offer minimal feedback, whereas for high-stakes outcomes they would provide detailed course-corrective input, allowing the agent to establish the boundaries of an “indifference space” within which it can coordinate centrally for desired outcomes. (This would in some ways resemble “liquid democracy”, in which people fluidly allocate influence among representatives.) The PAI would be trained to satisfy user preferences up to a threshold level, learning in particular which outcomes the user is indifferent over, so that it can pass information about those preferences to a central planner that makes wider decisions for the collective good. In other words, assistive technologies present an opportunity to reveal our preferences (and indifferences) in a way that allows them to be pooled more readily for collective action.
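One hypothetical way a PAI might map out this indifference space is to widen, per issue, the range of options its user accepts without complaint, probing slightly beyond what it already knows and treating silence as indifference. The issue names, the scalar encoding of options and the feedback rule below are placeholder assumptions, not a specification:

```python
import random

class PersonalAI:
    """Toy PAI that learns, per issue, the interval of options its user
    will accept without complaint (their 'indifference space')."""

    def __init__(self, issues):
        # Start from a single point; the acceptable interval widens as
        # silence (no objection) accumulates.
        self.acceptable = {issue: [0.5, 0.5] for issue in issues}

    def propose(self, issue):
        # Probe slightly beyond the currently known acceptable interval.
        lo, hi = self.acceptable[issue]
        return min(1.0, max(0.0, random.uniform(lo - 0.1, hi + 0.1)))

    def feedback(self, issue, option, objected):
        lo, hi = self.acceptable[issue]
        if not objected:  # silence -> widen the indifference space
            self.acceptable[issue] = [min(lo, option), max(hi, option)]
        # An objection leaves the interval unchanged here; a fuller version
        # would shrink it and record the direction and strength of the complaint.

    def report(self):
        # What the PAI shares with the central planner.
        return self.acceptable

# Simulated interaction: the user only objects to options far from 0.5.
pai = PersonalAI(["park_budget", "street_lighting"])
for _ in range(20):
    issue = random.choice(list(pai.acceptable))
    option = pai.propose(issue)
    pai.feedback(issue, option, objected=abs(option - 0.5) > 0.3)
print(pai.report())
```

The reported intervals are exactly the kind of information the central planner needs in order to coordinate across users.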
Method of Evaluation
Describe how you will know if your solution works, ideally at both a small and large scale. What resources and stakeholders would you require to implement and test your solution?
Initial assessments could take place in simulated environments, which have the benefits of low downside risk and high throughput, at the cost of some forms of realism. Prior work has shown that ML methods can be developed and evaluated productively in such environments. In this case, one could create a population of agents with randomly initialised preferences that vary in intensity, thereby leaving space for ‘trades’ that capitalise on the indifference space described above. One could then compare the outcome, as measured by the social welfare function of choice (e.g. maximin), to an alternative simulation in which, rather than conveying their rich preferences and areas of uncertainty, agents simply vote according to a standard social choice rule. A simple version might give the ‘planner’ direct access to agent preferences, while a more sophisticated simulation could have the agents learn to communicate (perhaps via natural language) with virtual PAIs.
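A minimal version of the ‘simple’ simulation described above might draw binary-issue utilities whose intensities vary across agents, then compare maximin welfare under issue-by-issue majority voting against a planner with direct access to preferences. The population size, issue count and preference distribution below are arbitrary choices:

```python
import random
from itertools import product

N_AGENTS, N_ISSUES, N_RUNS = 20, 6, 50

def random_population():
    # Each agent's utility for a 'yes' on each issue; most entries are weak
    # (near-indifference), a few are strong, so welfare-improving trades exist.
    return [
        [random.gauss(0, 1) * (5 if random.random() < 0.2 else 0.2)
         for _ in range(N_ISSUES)]
        for _ in range(N_AGENTS)
    ]

def agent_utility(prefs, outcome):
    return sum(p for p, o in zip(prefs, outcome) if o)

def welfare(pop, outcome, f=min):   # maximin by default; use f=sum for utilitarian
    return f(agent_utility(p, outcome) for p in pop)

def majority_vote(pop):
    return tuple(sum(p[i] > 0 for p in pop) > len(pop) / 2 for i in range(N_ISSUES))

def planner(pop, f=min):
    return max(product((True, False), repeat=N_ISSUES), key=lambda o: welfare(pop, o, f))

vote_scores, plan_scores = [], []
for _ in range(N_RUNS):
    pop = random_population()
    vote_scores.append(welfare(pop, majority_vote(pop)))
    plan_scores.append(welfare(pop, planner(pop)))

print("mean maximin welfare, majority vote:", sum(vote_scores) / N_RUNS)
print("mean maximin welfare, planner:      ", sum(plan_scores) / N_RUNS)
```

The planner here optimises maximin welfare directly, so it serves as an upper bound; the more interesting comparisons involve planners that only see what PAIs report.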
If these systems prove effective in simulation, subsequent experiments featuring human subjects could assess applicability to real-world problems. These could begin with low-stakes decisions and then, conditional on successful prior testing, gradually ramp up to more consequential choices.
Risks and Additional Context
What are the biggest risks associated with this project? If someone is strongly opposed to your solution or if it is tried and fails, why do you think that was? Is there any additional context worth bearing in mind?
Any attempt to capture and act on human preferences via machine learning is fraught with risks relating to interpretability and bias. This is made worse by the lack of clear benchmarks akin to the desiderata that have guided social choice. However, ongoing research on safety, fairness/accountability/transparency, interpretability and algorithmic bias has shown promise and may reduce the magnitude of some of these issues.
In addition to technical concerns, systems such as those outlined above raise fundamental philosophical questions about desire and utility, such as to what extent a PAI should optimise for a user’s immediate desires versus their long-run well-being, when and whether PAIs should try to persuade users to change their minds, and so on. These issues touch on longstanding dilemmas in moral philosophy, political philosophy and philosophy of identity.
The selection of a suitable social welfare function also raises questions and risks, since this is a normative choice that will, in expectation, affect the welfare of all participants.
Next Steps
Outline the next steps of the project and a roadmap for future work. What are your biggest areas of uncertainty?
A growing community has started investigating questions closely related to the vision articulated above. We will continue to conduct research on these topics ourselves, and we also encourage further work on the areas below:
Further exploration of the promise and pitfalls of using natural language for preference representation
Ongoing research in automated mechanism design and computational social choice
Applied research on virtual assistants that emphasises when to re-engage with users to clarify preferences and when not to
AI-driven interfaces for content selection and other choice automation
Technical safety research, especially inverse reinforcement learning