The very individuals responsible for advancing AI are expressing significant concerns about the potential risks of their work. A recent statement from the non-profit Center for AI Safety, endorsed by hundreds of prominent AI executives and researchers, asserted, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Extinction? Nuclear war? If their worries are so grave, why don't these scientists simply halt their work? That may sound straightforward, but it is far more complicated; nuclear scientists, after all, kept working until they had perfected the atomic bomb.
Moreover, AI offers countless benefits. Still, the statement, along with a growing call for government regulation of AI, raises several important questions: What should the guidelines for AI development entail? Who will enforce these regulations? How can these standards coexist with existing laws? And how do we address cultural and national differences in perspectives on AI?
Alondra Nelson, who served in the White House during the first two years of President Joe Biden's administration, provided insights on these issues. Nelson was the first African American and the first woman of color to lead the Office of Science and Technology Policy, where she played a pivotal role in drafting the influential Blueprint for an AI Bill of Rights. She is currently a professor at the Institute for Advanced Study, an independent research center in Princeton, New Jersey. In an interview, she addressed the following questions:
Industry leaders, including Sam Altman from OpenAI, have recently warned about the risk of extinction posed by AI. What are your thoughts on these warnings?
When business leaders, entrepreneurs, and prominent scientists issue warnings, we should take them seriously. However, we also need to recognize that we have the opportunity to shape a different future. The same individuals who are raising alarms about potential risks also play a crucial role in collaborating with the government and civil society to create this future. This may mean taking the necessary time to ensure that new technologies are safe and effective before they are implemented.
If we use the climate crisis as a parallel, we see that we are engaged in research and development around new green and clean technologies. We are also working on geopolitical efforts, such as the Paris climate accord. Similar collaborative efforts are essential for addressing the challenges posed by AI, and we can work across various sectors to achieve this.
At a fundamental level, how can we define the risks associated with AI? Is there a consensus on that description?
There is a consensus that AI presents certain risks. AI has already become a significant part of our lives, influencing our daily transactions and interactions. For example, if you use Face ID to unlock your smartphone, or if AI has played a role in a rental application, mortgage decision, or hiring process that affects you, you are experiencing AI in action. While AI offers numerous benefits, such as convenience, time savings, and the ability to analyze vast amounts of data to make predictions and informed decisions, it also introduces several challenges.
One major concern is the potential destabilization of the job market due to increased automation, which raises questions about the future of work. Additionally, AI systems are often trained on historical data, leading to outputs that can be outdated, incorrect, or discriminatory. This has serious implications for individuals' access to resources and opportunities for social mobility.
There are also significant security vulnerabilities associated with advanced AI, exacerbating existing concerns about cybersecurity. Other issues include sustainability, the spread of disinformation and misinformation, erosion of democracy and public trust, and the potential for catastrophic outcomes, as warned by some experts.
In summary, while there are many challenges and risks associated with AI, there is also a valuable opportunity for collaboration across sectors to develop strategies for mitigating these risks.
AI itself isn't new, and the risks and benefits are ones that people like you have been studying for years. Recently, though, something seems to have shifted. What exactly is causing this change? Is it that the technology itself has advanced significantly in the past few months? Or is it that we are seeing more real-world applications, such as ChatGPT, than we did a few months ago?
The emergence of large language models and foundation models represents a new technology that is both exciting and daunting. These models are somewhat akin to the creation of the Internet in the breadth of their applications. They serve as a new infrastructure that is increasingly ubiquitous. For instance, you can develop chatbots that respond to questions in a way that feels almost human, and applications like Midjourney can generate entirely new images.
The growth of this technology has sparked competition among major tech companies, all vying to be the first and the best in this field. Some conversations about the risks and cautions surrounding AI stem from concerns that these companies might prioritize profits over safety, driven by the intense pressure to outdo competitors and lead in various markets. This complex web of incentives can lead highly talented engineers and designers, who have developed remarkable technologies, to admit, “We can’t stop ourselves.”
This is also a public-sector race. The United States and China are competing over which country will lead in developing and deploying AI. How important is this race?
The issue is complex. During the Cold War, we managed to maintain both an adversarial relationship and a collaborative research partnership with the Soviet Union. If we are serious about mitigating risks, we need to find ways, as they say in geopolitics, to close certain doors while opening others. However, we must also acknowledge the existence of malicious state actors and national security concerns. It's essential to keep the American public, democratic societies, and the global community safe, and we need to have a clear understanding of what that will entail.
How should we start considering regulation?
We need to begin our efforts immediately. There are existing regulations that can help us manage the impact of AI. Shortly after the emergence of generative AI, the U.S. Copyright Office had to determine whether copyright could be granted to works created by generative AI. The conclusion was that it could not; without a human creator, a work does not receive copyright protection.
Lina Khan, chair of the Federal Trade Commission, has been particularly adept at addressing this issue. In April, she stated that there is no exemption for AI within the law, meaning that existing laws related to discrimination, bias, or consumer liability still apply regardless of whether an algorithmic system or AI tool is used.
We need to empower governments, policymakers, and legislators to understand that just because we have a new technological development operating in the social space, it doesn’t mean that our existing laws, regulations, policies, and guidance do not apply. However, we may also need to consider new rules and regulations moving forward.
We must contemplate the implications for the labor market as well. While we have established that, in the U.S., a copyrighted work must have a human author, we still need to address questions of training data and intellectual property. We should also consider which matters fall under civil jurisdiction and which pertain to national security, as there may be overlaps.
Long-term risks and concerns are perhaps better managed using regulatory tools associated with national security, such as export controls and sanctions. We need to have a comprehensive understanding of tracking hardware: where systems are going, who is building them, and how they are being built.
Although we haven’t seen much success in Congress over the last few years, various pieces of legislation addressing these issues have been introduced. Many of us, especially in the United States, recognize the need for general data privacy protection and for rethinking competition and antitrust regulations. Since the generative AI shift, powerful multinational companies—many of them based in the U.S.—have been consolidating even more power.
We are generally familiar with what we need to do, but the real challenge lies in the political will to accomplish these tasks. The rise of large language models like ChatGPT has energized some, while it has caused fear and concern for others. This development has opened up opportunities for a broader public discussion on these issues, allowing the public to urge their lawmakers to push necessary legislation forward.
Explain the necessity of an AI Bill of Rights and how it differs from government regulations imposed on companies or countries.
Certain principles should remain consistent, even as technology evolves. My team spent a year engaging with developers, academic researchers, civil society members, and the American public. From these discussions, we distilled five actionable principles:
1. Systems should be safe and effective.
2. Users should be informed when an AI system is being used.
3. We must address algorithmic discrimination.
4. Data privacy is essential.
5. Individuals should have the option to choose not to engage with AI systems, especially when it comes to critical access to services and goods.
These principles help ensure that technology serves everyone fairly and responsibly.
How can we establish global regulations? There is a risk that if each country addresses these questions independently, we will end up with a fragmented set of rules that malicious actors could take advantage of.
The challenge we face is significant. Many efforts are underway to improve coordination. Since 2019, the Organization for Economic Cooperation and Development (OECD) has provided principles and regulatory recommendations regarding AI. In May, the G-7 agreed to coordinate efforts on AI through the Hiroshima AI Process. Discussions will continue within the G-20 and among various groups of stakeholders. This issue is one of the most crucial of our time due to its infrastructural nature and its potential for multiplicative and transformative effects. Addressing it will require international collaboration, similar to our approach to other high-risk, high-stakes endeavors. The United Nations has also been actively engaged in this area, and we must take the potential risks seriously enough to engage both with the partners we are comfortable with and with those with whom we have more adversarial relationships.
Conclusion
Artificial intelligence stands at the crossroads of being a transformative blessing or a catastrophic threat. Its dual nature reflects the choices we make as a society. While AI has the potential to revolutionize industries, enhance lives, and solve global challenges, it also carries risks that could disrupt economies, compromise security, and erode trust.
The future of AI lies in our hands. By fostering ethical development, implementing effective regulations, and prioritizing international collaboration, we can harness its immense power responsibly. If approached with wisdom and care, AI can be a force for unprecedented progress—an enduring testament to humanity’s ingenuity. However, neglecting its risks could lead to unintended consequences that echo for generations.
Ultimately, AI will be what we choose to make of it: a tool for creation, collaboration, and hope—or a harbinger of irreversible challenges. The decision is ours, and the time to act is now.