President Biden recently signed an executive order on artificial intelligence to set safeguards around new AI technology. According to the White House, the order seeks to protect privacy and national security, “[ensuring] that America leads the way in seizing the promise and managing the risks of AI.”
“The primary thing the executive order does is create standards for federal agencies or industry; that is what most orders do,” said Mr. Heidt, AP government and debate teacher. “The thing that makes this bill significant is how widespread it is. It intends to set standards for the entire industry, but also for national security, the economy, and all facets of AI as it develops.”
These new standards for AI security include a requirement for developers to share safety test results or data for any new AI software with the U.S. government, in accordance with the Defense Production Act (DPA). The National Institute of Standards and Technology (NIST) will create standards to ensure these AI models are trustworthy, safe, and secure.
The order also stated that safety testing will examine the potential for “societal harms” such as bias and discrimination.
“The DPA allows the president to compel industries to comply with federal government demands if it is in the interest of national security,” explained Mr. Heidt.
The Biden administration made it clear that it sees AI as a potential threat to national security, even comparing it to the harm social media has done to younger generations.
“Social media has shown us the harm that powerful technology can do without the right safeguards in place,” Biden stated in a press release from July of this year, where he announced Amazon, Google, Meta, and Microsoft had committed to setting AI safeguards.
The order also established procedures for labeling AI-generated content to protect consumers from AI-enabled fraud. The Department of Commerce plans to develop watermarks and icons that will help people differentiate authentic government communication from fraudulent messages.
Furthermore, it established limits on the U.S. military’s use of AI. The National Security Council and the White House Chief of Staff will develop a set of standards to ensure the military and intelligence agencies use “AI safely, ethically, and effectively in their missions.”
It is important to note that most of these standards are optional. “[The executive order] creates standards for the government or industry to comply with, but it is almost entirely voluntary,” said Mr. Heidt.
“So, a Department of Commerce watermark, for example, might be something that an industry has an incentive to adopt because they want to use that for marketing purposes; that [the AI model] is safe, that it is approved by the government, and so forth. But it’s not a requirement.”
Even though the order is not directly enforceable or required, White House chief of staff Jeff Zients told AP News that Biden believes that “we can’t move at a normal government pace. We have to move as fast, if not faster, than the technology itself.”
Although public opinion on AI varies, the U.S. is certainly not the only government that sees AI as a potential threat.
Governments around the world have begun to take clear steps to regulate the rapidly developing technology: the European Union is nearing passage of a comprehensive law regulating AI; the UN is planning a conference on the impact of AI on public safety; and Japan passed a law stating that an “AI operator may be held liable for tort or product liability if an accident occurs.”
The EU, Canada, and Japan may be at the forefront of developing standards for AI, “but the US is, too,” Mr. Heidt explained. Even though Congress has not passed any law, “the US has taken many steps in the past, [and it] is a participant in all those discussions, including the ones in the EU.”
Not only is the U.S. involved in AI development, it also hopes to lead AI regulation by setting a precedent of standards to shape private-sector behavior and government action. Stronger action or enforcement would require Congress to pass legislation.
According to Time magazine, Congress is in the early stages of debating what safeguards to implement to enforce the executive order. However, whether Congress will actually create legislation, as opposed to voluntary standards, is hard to predict.
“My guess is it would be very difficult if not impossible for Congress to agree on mandates for the industry,” said Mr. Heidt. “But you know, politics shifts, and AI doesn’t fall on party lines, so I could be wrong.”