A few years ago, in the wake of governments' failure to sign an international agreement to reduce greenhouse gases in Copenhagen, we fell into a discussion about fixing the system: let's get computers to do it for us. We mixed a tragedy-of-the-commons scenario with some basic game theory to construct a fantasy in which a supercomputer installed just outside the conference would be programmed to determine the binding greenhouse gas reduction targets each nation would need to adopt. You could program preference functions for each nation, skip the bureaucratic headache of international diplomacy, and arrive at an agreement. Problem solved.
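To make the fantasy concrete, here is a deliberately toy sketch of what "programming preference functions" might mean. It assumes (our invention, not anything a real negotiation would use) that each nation's abatement cost is quadratic, `a_i * x_i^2`, and allocates a global reduction target `R` to minimize total cost, which equalizes marginal costs across nations:

```python
# Toy sketch (illustrative only): split a global reduction target R
# across nations so total abatement cost is minimized. Each nation i
# is modeled with a hypothetical quadratic cost a_i * x_i^2.
# Minimizing sum(a_i * x_i^2) subject to sum(x_i) = R equalizes
# marginal costs, giving x_i = R * (1/a_i) / sum_j(1/a_j).

def allocate_targets(cost_coeffs, total_reduction):
    """Return per-nation reduction targets minimizing total modeled cost."""
    inverse = {nation: 1.0 / a for nation, a in cost_coeffs.items()}
    scale = total_reduction / sum(inverse.values())
    return {nation: w * scale for nation, w in inverse.items()}

# Nations with cheaper abatement shoulder larger targets.
targets = allocate_targets({"A": 1.0, "B": 2.0, "C": 4.0}, total_reduction=7.0)
# → {"A": 4.0, "B": 2.0, "C": 1.0}
```

The closed form is standard textbook optimization; the hard part the fantasy skips, of course, is getting nations to reveal honest cost functions in the first place.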
The rationality and fairness a computer could, in this ideal, bring to a climate change agreement stand as a direct critique of how fragile human management and governance of complex systems can be. This thought quickly snowballed. We could outsource many more challenges of systems governance to actors capable of making the changes we need, the changes that are in the best interest of society.
The strength of bot-managed decision-making is that it can be reduced to rules and hard logic, whereas human decision-making, built on top of evolutionary quirks, habits, and imitative behaviors, may project a strong veneer of logic but remains susceptible to those underpinnings, as well as to bureaucratic and social pressures. In short, the social contract, invented in the early days of democracy, may no longer be enough of a policing mechanism.
The crisis in the financial system shows how corporate interests can override banks’ social contract with society. Additionally, the complexity of the financial crisis meant that no one actor was solely responsible, and so the risk of what was happening at an aggregate level was not properly registered.
Certainly bots can have their own glitches. There is the problem of "runaway," a weakness of complex systems: they can feed on their own complexity and wobble out of control. The Flash Crash of 2010, for example, in which the Dow Jones Industrial Average plunged about 1,000 points, was the result of computerized high-frequency traders exiting the market after being triggered by a mutual fund's unusual selling. Here, our only recourse is to build self-correcting mechanisms into the systems themselves: learning networks, for example, that detect anomalies and react to them.
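The self-correction idea can be sketched in a few lines. This is a minimal, hypothetical circuit breaker, not any exchange's actual rule set: it watches a price stream and signals a halt when the latest move deviates too far from recent behavior, pausing automation instead of letting it cascade.

```python
# Illustrative self-correction sketch: a toy circuit breaker that halts
# automated trading when a price deviates too far from its recent
# history. Window size and sigma threshold are arbitrary assumptions.
from collections import deque
from statistics import mean, stdev

def make_breaker(window=20, max_sigma=4.0):
    history = deque(maxlen=window)
    def check(price):
        """Return True if this price should trigger a trading halt."""
        halt = False
        if len(history) == history.maxlen:
            mu, sd = mean(history), stdev(history)
            if sd > 0 and abs(price - mu) > max_sigma * sd:
                halt = True  # anomalous move: pause rather than cascade
        history.append(price)
        return halt
    return check

check = make_breaker()
prices = [100 + 0.1 * (i % 5) for i in range(25)] + [60.0]  # sudden plunge
halts = [check(p) for p in prices]  # only the final plunge trips the breaker
```

Real post-2010 market safeguards (limit-up/limit-down bands, market-wide circuit breakers) are far more involved, but the principle is the same: the system carries its own brake.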
However, bots can also act as good agents for systems governance as long as two principles are in place: transparency and trust. First, if we are to depend on bots to manage these complex systems, then this management must be transparent for anyone to inspect, challenge, and improve. Second, the trust in the system must similarly be distributed. We are long past the days when any one entity could simply say "trust me." The bots must act within a trust framework, where any agent in the system can begin to assign trust values to other agents. Add back in transparency, and you get a web of trust that scales rapidly without the need for any central trusted-by-default agent.
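A web of trust like this can be modeled simply. In the sketch below (agent names and scores are hypothetical), each agent assigns direct trust scores in [0, 1] to peers, and indirect trust is the best product of scores along any path, so no central authority is required:

```python
# Minimal web-of-trust sketch: agents assign direct trust scores to
# peers; indirect trust between two agents is the strongest
# multiplicative chain of scores along any path between them.

def trust(graph, src, dst, seen=None):
    """Best multiplicative trust from src to dst over any acyclic path."""
    if src == dst:
        return 1.0
    seen = (seen or set()) | {src}
    best = 0.0
    for peer, score in graph.get(src, {}).items():
        if peer not in seen:  # avoid cycles
            best = max(best, score * trust(graph, peer, dst, seen))
    return best

web = {
    "alice": {"bob": 0.9, "carol": 0.5},
    "bob":   {"dave": 0.8},
    "carol": {"dave": 0.9},
}
# alice trusts dave via bob (0.9 * 0.8 = 0.72), which beats
# the carol route (0.5 * 0.9 = 0.45).
```

Multiplying scores along a chain is one common convention (used, loosely, in PGP-style trust models); the key property is that trust decays with distance, so no single distant agent is trusted by default.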
But robots aren't the only things that can disrupt the system with a new kind of logic. There is already one agent of systems-change that’s working outside the traditional methodology in a way that can effect drastic change: social entrepreneurs.
The social entrepreneur is, like a robot, another type of actor accustomed to operating in complex environments. Social entrepreneurs tackle major social issues and offer new, innovative ideas for wide-scale change. They seek out what is not working and solve the problem by changing the system, spreading the solution, and persuading entire societies to take new leaps. In this way they are both the destabilizing element and the control system.
Ashoka Fellow Gary Slutkin, for example, is a social entrepreneur working to eradicate the norm of violence in the most dangerous urban neighborhoods in the United States by conceptualizing and treating violence as an infectious disease. His CeaseFire program identifies those who have been most "infected" by urban violence and treats this core group, in order to stop the transmission of violence to others. CeaseFire’s treatment is based on a corps of "violence interrupters," former perpetrators of violence now employed to disrupt armed conflicts and educate the community about the consequences of violent behavior.
Overall, 84% of Ashoka Fellows like Gary have changed a system at a national level within 10 years of their election to the fellowship. These systems changes occur across five dimensions: changing the rules that govern our societies (public policy and industry norms), redefining interconnections in market systems (market dynamics and value chains), transforming the meaning of private versus citizen sector (business social congruence), fully integrating marginalized populations (full citizenship and empathetic ethics), and increasing the number of people who are problem-solvers (culture of changemaking).
This movement away from a centralized decision-making body is the key to future systems governance. And this environment of decentralized management is one where social entrepreneurs and bots both thrive. So rather than putting our faith in governments and institutions that are meant to be "guardians" of social systems—whether the financial system or the climate system—we should develop confidence in the "democratizing agency" offered by social entrepreneurs and bots.
Jon Camfield is the Technology Strategist for Ashoka Changemakers. When the robots take over, he'll still find joy in being a technology for development geek, gardening, homebrewing, salsa dancing, cooking, and being a husband and all-around dork. Not in that order.
Alexa Clay is the Director of Open Growth Advisory, a consulting division of Ashoka Changemakers. She is an economic historian turned futurist and enjoys the company of both social entrepreneurs and robots.
More information about Ashoka Changemakers can be found here.
[Image: Flickr user andreavallejos]