How democracy can defend humanity against AI’s catastrophic outcomes

In the context of the possible emergence of rogue artificial intelligence, there are three reasons why we should focus specifically on the preservation, and ideally enhancement, of democracy and human rights, argues Yoshua Bengio, professor of computer science at the Université de Montréal. While democracy and human rights are intrinsically important, they are also fragile, as evidenced repeatedly throughout history, including cases of democratic states transitioning into authoritarian ones.

It is crucial that we remember the essence of democracy—that everyone has a voice—and that this involves the decentralization of power and a system of checks and balances to ensure that decisions reflect and balance the views of diverse citizens and communities, he writes for The Journal of Democracy:

  • Powerful tools, especially AI, could easily be leveraged by governments to strengthen their hold on power, for instance, through multifaceted surveillance methods such as cameras and online discourse monitoring, as well as control mechanisms such as AI-driven policing and military weapons. Naturally, a decline in democratic principles correlates with a deterioration of human rights. Furthermore, a superhuman AI could give unprecedented power to those who control it, whether individuals, corporations, or governments, threatening democracy and geopolitical stability.
  • Highly centralized authoritarian regimes are unlikely to make wise and safe decisions due to the absence of the checks and balances inherent in democracies. While dictators might act more swiftly, their firm conviction in their own interpretations and beliefs could lead them to make bad decisions with an unwarranted level of confidence. This behavior is similar to that of machine-learning systems trained by maximum likelihood: they consider only one interpretation of reality when there could be multiple possibilities. …
  • Furthermore, an authoritarian regime is likely to focus primarily on preserving or enhancing its own power instead of thoughtfully anticipating potential harms and risks to its population and humanity at large. These two factors—unreliable decision-making and a misalignment with humanity’s well-being—render authoritarian regimes more likely to make unsafe decisions regarding powerful AI systems, thereby increasing the likelihood of catastrophic outcomes when using these systems.
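Bengio's analogy in the second point above can be made concrete with a toy example. The sketch below (an illustration constructed for this summary, not from the essay itself) contrasts a maximum-likelihood point estimate, which commits to a single "interpretation of reality," with a Bayesian posterior that keeps weight on many plausible hypotheses. The coin-bias setup and all variable names are hypothetical.

```python
# Toy illustration: maximum likelihood picks ONE hypothesis;
# a Bayesian posterior retains uncertainty over MANY.
# Hypothetical data: 7 heads observed in 10 coin flips.
heads, flips = 7, 10

# Maximum-likelihood estimate of the coin's bias: a single number.
mle = heads / flips  # commits to one interpretation of the data

# Bayesian alternative: a posterior over a grid of candidate biases,
# starting from a uniform prior.
grid = [i / 100 for i in range(1, 100)]
likelihood = [p**heads * (1 - p) ** (flips - heads) for p in grid]
total = sum(likelihood)
posterior = [lk / total for lk in likelihood]

# Unlike the point estimate, the posterior still assigns real
# probability to biases far from 0.7 — e.g. a roughly fair coin.
mass_near_half = sum(w for p, w in zip(grid, posterior) if 0.45 <= p <= 0.55)
print(f"MLE point estimate: {mle}")
print(f"Posterior mass on bias in [0.45, 0.55]: {mass_near_half:.3f}")
```

With only ten flips, the posterior keeps meaningful probability on a near-fair coin, while the maximum-likelihood estimate confidently reports 0.7 — the overconfidence Bengio attributes to unchecked, centralized decision-makers.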

Historian and philosopher Yuval Noah Harari and Mustafa Suleyman, the co-founder of DeepMind, discuss what the artificial-intelligence revolution means for employment, geopolitics and the survival of liberal democracy with The Economist’s Zanny Minton Beddoes (above).
