On Monday, Joe Biden signed an executive order, which he described as the most important step any government has taken to ensure the safe use of AI.
“We’re going to see more technological change in the next 10, maybe next five years than we’ve seen in the last 50 years,” the US president said at a press conference.
“AI is all around us. Much of it is making our lives better … but in some cases AI is making life worse.”
Kamala Harris, the vice-president and the US’s representative at the global AI safety summit in the UK this week, said that the government has a “moral, ethical and societal duty to make sure AI is adopted and advanced in a way that protects the public from potential harm and ensures that everyone is able to enjoy its benefits”.
Harris emphasized the administration’s belief that the United States leads the world in AI and that the executive order should set an example for global action. She stated, “American companies are the driving force behind AI innovation worldwide. The United States possesses the ability to lead and foster global collaboration in a way that no other nation can. Under President Joe Biden’s leadership, this commitment to AI leadership will continue.”
Biden described the executive order as a “bold move” but stressed the importance of Congress taking swift action to ensure the safe development and deployment of AI.
Under the executive order, technology companies will be required to share the results of their artificial intelligence system tests with the US government before releasing those systems to the public.
The government will also set stringent testing guidelines. “As we advance this agenda at home, the administration will work with allies and partners abroad on a strong international framework to govern the development and use of AI,” said the order.
The White House has issued several directives regarding AI:
- Companies that develop AI models posing threats to national security, economic security, or public health and safety must share their safety test results with the government.
- The government will establish guidelines for red-team testing, where assessors simulate the actions of rogue actors in their testing procedures.
- Official guidance on watermarking AI-generated content will be provided to address the risks associated with fraud and deepfakes.
- New standards for screening biological synthesis, aimed at identifying potentially harmful gene sequences and compounds, will be developed to mitigate the risk of AI systems contributing to the creation of bioweapons.
White House Chief of Staff Jeff Zients said that President Biden has directed his staff to act urgently on AI-related issues.
“We can’t move at a normal government pace,” Zients said Biden told him. “We have to move as fast, if not faster than the technology itself.”
The White House said the sharing of test results for powerful models would “ensure AI systems are safe, secure and trustworthy before companies make them public”.
Regarding AI-generated deepfakes, the US Department of Commerce will provide guidance for labeling and watermarking AI-generated content. This guidance is intended to assist in distinguishing between authentic interactions and content created by AI software.
Referring to the watermarking plans, the order stated:
“Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic, and set an example for the private sector and governments around the world.”
The executive order also addresses areas such as privacy, civil rights, consumer protections, and workers’ rights.
Civil liberties and digital rights groups have largely praised the executive order as a positive first step. Alexandra Reeve Givens, the Chief Executive of the Center for Democracy and Technology, a nonprofit digital rights group, stated that it signifies a milestone, demonstrating that the entire government supports “the responsible development and governance of AI.”
“It’s notable to see the administration focusing on both the emergent risks of sophisticated foundation models and the many ways in which AI systems are already impacting people’s rights, a crucial approach that responds to the many concerns raised by public interest experts and advocates,” Givens said in a statement.
But the effectiveness of the order hinges on how well the directives can be enforced and put into action, as Givens pointed out. She stated, “We encourage the administration to act promptly to meet the specified deadlines, and to ensure that any guidance or mandates issued under the EO are specific and practical enough to achieve their intended impact.”
Caitriona Fitzgerald, the Deputy Director of the Electronic Privacy Information Center (EPIC), emphasized the importance of the privacy safeguards outlined in the order, especially in light of the absence of comprehensive federal protections in the United States.
“While EPIC continues to call on Congress to pass a comprehensive privacy law that limits the mass data collection that fuels harmful uses of technology, this executive order is a significant step towards establishing the necessary fairness, accountability, and transparency guardrails to protect people from discrimination and inequality facilitated by AI systems,” Fitzgerald said in a statement.
Nonetheless, not all groups, particularly those focused on surveillance, share the same level of optimism. Albert Fox Cahn, from the Surveillance Tech Oversight Project, expressed concerns that the approach outlined in the order might facilitate additional AI abuses. He pointed out that the White House order relies on AI auditing methods that can be manipulated or exploited by companies and agencies.
“The worst forms of AI, like facial recognition, don’t need guidelines, they need a complete ban,” Fox Cahn said.
“Many forms of AI simply should not be allowed on the market. And many of these proposals are simply regulatory theater, allowing abusive AI to stay on the market.”
The executive order outlines a timeline for implementation, with the tasks expected to be completed within a range of 90 to 365 days. Safety and security measures are given the earliest deadlines.
Additionally, the order includes a national security memorandum that instructs the US military and intelligence community on the ethical and safe use of AI. It also calls upon Congress to pass legislation safeguarding Americans’ data privacy. Federal agencies will be responsible for developing guidelines for assessing privacy-preserving techniques in AI systems.
To address concerns about bias, the order directs agencies to issue guidance to landlords, federal benefits programs, and federal contractors to prevent AI algorithms from perpetuating discrimination. It also highlights the importance of developing best practices for AI use within the justice system, such as in sentencing, predictive policing, and parole.
The order also addresses potential job market disruptions caused by AI, calling for best practices to mitigate the negative effects of job displacement. These include preventing employers from undercompensating workers, ensuring fair evaluation of job applications, and protecting workers’ organizing rights. Government agencies will receive guidance on AI use, including standards to protect rights and safety.
The Federal Trade Commission, the regulator responsible for competition, will be encouraged to exercise its powers if imbalances or distortions arise within the AI market.
In recognition of global efforts to regulate AI, including this week’s safety summit in the UK, the White House announced plans to expedite the formulation of AI standards in collaboration with international partners. On the same day, the G7 group of nations released a code of conduct for organizations developing advanced AI systems. The code includes provisions for watermarking AI-generated content, external testing of models, and prioritizing the use of AI to address pressing global challenges such as the climate crisis and health issues.