Reconciling artificial intelligence and human rights

Around the world, concern about the consequences of our growing reliance on artificial intelligence (AI) is rising. Perhaps the darkest concerns relate to the development of AI by authoritarian regimes, some of which are devoting massive resources to applying AI in the service of authoritarian, rather than democratic or humanitarian, goals, note analysts Eileen Donahoe and Megan MacDuffee Metzger.

A shared global framework is needed to ensure that AI is developed and applied in ways that respect human dignity, democratic accountability, and the bedrock principles of free societies, they write for the NED’s Power 3.0 blog. They argue that the Universal Declaration of Human Rights, along with the series of international treaties that explicate the wide range of civil, political, economic, social, and cultural rights it envisions, already enjoys wide global legitimacy and is well suited to serve this function for several reasons:

  • First, it would put the human person at the center of any assessment of AI and make AI’s impact on humans the focal point of governance.
  • Second, this international body of human-rights law, through its broad spectrum of both substantive and procedural rights, speaks directly to the most pressing societal concerns about AI….
  • Third, the human-rights framework establishes the roles and responsibilities of both governments and the private sector in protecting and respecting human rights and in remedying violations of them. Under the UN Guiding Principles on Business and Human Rights, the general legal obligation to protect human rights remains with states, while private firms have responsibilities to respect and protect human rights (and to remedy violations of them) when the firm’s own products, services, and operations are involved.
  • Finally, although interpreted and implemented in vastly different ways around the world, the existing universal framework enjoys a level of geopolitical recognition and status under international law that any newly emergent ethical framework is unlikely to match. …RTWT

The post is drawn from a longer article, titled “Artificial Intelligence and Human Rights,” that appears in the April 2019 issue of the Journal of Democracy.

Eileen Donahoe, former U.S. ambassador to the UN Human Rights Council in Geneva, is executive director of the Global Digital Policy Incubator and an adjunct professor at Stanford University’s Center on Democracy, Development, and the Rule of Law. Megan MacDuffee Metzger is a research scholar and associate director for research at the Global Digital Policy Incubator. Follow them on Twitter @EileenDonahoe and @meganicka.

The views expressed in this post represent the opinions and analysis of the authors and do not necessarily reflect those of the National Endowment for Democracy or its staff.