As large companies like Google, Microsoft, and Meta have invested heavily in artificial intelligence research in recent years, their significant influence over the development and application of AI has raised questions about how these algorithms could impact humans.
AI can benefit humans by automating jobs, generating ideas, and helping in various other ways, but it also has the potential to be harmful. AI models are built by collecting data from the internet, but this information is often copyrighted or personal, and its use could violate the privacy of companies and people worldwide. AI can also be used to alter images and mimic people’s likenesses and voices, creating dangerous opportunities for identity theft and misinformation. To combat this, several nations have implemented regulations to prevent the unethical use of AI.
“I think AI is very useful for making new concepts or trying to imagine something that you’re having a hard time seeing, but in terms of using AI for more malicious purposes, such as impersonation and stealing other people’s data, I think there is a need for regulation by the government,” Ella Mills ‘24 said.
In 2022, with consultation from the public, the U.S. government constructed the Blueprint for an AI Bill of Rights, a guide that restricts the use of AI to align with the principles of democracy and protect citizens from its dangers. The blueprint is based on five principles: safe and effective systems, protection from algorithmic discrimination, data privacy, notice and explanation of AI usage, and human alternatives.
In short, any algorithm used must be effective without compromising the safety of individuals or groups of people; algorithms must be designed not to show bias against classifications protected by law; developers must seek consent before any private data is used for AI training; users must be notified when an AI system is being used and affects them; and users must be able to opt for a human alternative to an AI system.
“I think that the privacy protection this law brings will make artists such as myself and others feel safer about the integrity of our work,” Mills said. “Artwork is very susceptible to just being stolen online and used without our permission if we do not take necessary precautions, so I think this law can help ease some of those troubles.”
On Dec. 8, 2023, the EU introduced the EU AI Act, which will regulate algorithms by sorting them into four categories based on the risk they pose. Unacceptable-risk AI systems are considered threats and will be banned outright, including those capable of cognitive behavioral manipulation, social scoring, and biometric identification. High-risk AI systems, which are capable of endangering people’s safety and rights, will be assessed by the government before being used commercially.
Generative AI systems would have to disclose that their content is AI-generated and publish summaries of the copyrighted data used for training. These systems would still have to go through evaluations before being used in case they qualify as high-risk. Limited-risk AI systems would only have to meet minimal transparency standards so that users can make informed decisions; after interacting with one of these systems, users can choose whether to continue using it. Users should be made aware when they are interacting with AI, particularly with systems that generate or manipulate multimedia content, such as deepfakes.
“The system forces developers to publish summaries with training data, which makes it harder for companies to train AI on stolen data like what happened with OpenAI recently,” Draden Jones ‘26 said. “It does pose an issue in that the strict rules of the act will make it harder for smaller businesses to use AI, as proving that their programs follow the regulations will require a dedicated workforce that many can’t afford.”
China also introduced laws specifically limiting generative AI, the sources it uses to learn, and its outputs. These laws require that AI systems adhere to core socialist values, avoid discrimination based on individual characteristics, respect intellectual property rights, business ethics, and individuals’ rights to privacy and safety, and maintain transparency regarding the accuracy and reliability of generated content.
“I think certain positive aspects to China’s rule deal with ethics and privacy, because AI can be used in a lot of negative ways and it’s important to protect users,” Shruthi Srikanth ‘25 said. “But also by limiting the output to follow socialist values, the capabilities of AI would be diminished.”
As AI technology continues to develop and find a purpose in more fields of work, governments have created and enforced laws to prevent its unethical use. Initiatives like the U.S. Blueprint for an AI Bill of Rights and the EU AI Act are a step in a more cautious direction for the future of AI regulation, and as time goes on, they could adapt to advancements in the capabilities of AI.