AI policy and regulation were all but inevitable given how many areas of life the technology can affect. The nature of these models also puts concerns about creator rights and copyright at the forefront. AI regulation news changes rapidly, so it can be difficult to keep up. That’s why, in this handy resource, we’ll examine how different entities, whether countries or companies, are dealing with AI.
Globally, countries are actively developing AI governance laws to keep pace with rapidly evolving AI technologies, according to the IAPP’s Global AI Legislation Tracker from September 2023. These efforts range from comprehensive legislation to narrowly focused laws and voluntary guidelines. The number of countries with AI-specific laws has surged, from 25 in 2022 to 127 in 2023. The EU is making progress with new regulatory frameworks, while international cooperation is also growing through bodies like the OECD, the UN, and the G7. The aim is to balance the risks of AI against its potential benefits.
Let’s dive deeper into the issue.
AI Policy Across the World
AI regulation is clearly coming from governments around the world. Generative AI policy is a hot-button issue, and every country is approaching it differently.
US AI Outlook
Let’s start with the US. As home to Silicon Valley, it can have the most impact on any AI company operating today. While the US has no comprehensive federal regulation of the technology, companies and industries are coming up with their own rules. The government has proposed a Blueprint for an AI Bill of Rights as a basic template for potential regulations. The core principles of the blueprint are as follows:
- Safe and effective systems
- Algorithmic discrimination protections
- Data privacy
- Notice and explanation
- Human alternatives, consideration, and fallback
Among industries, the creative fields have been at the forefront of AI rules. Most famously, one of the platforms the 2023 Hollywood writers’ strike ran on was restricting the use of AI. The Writers Guild of America successfully lobbied for this, winning contract terms that prevent studios from using AI to write or rewrite scripts or from treating AI-generated writing as source material.
Within artist communities, there is also much discussion about banning AI art on social media, but nothing official has been put in place on the larger platforms. Users are concerned about AI training data and how their work is being used without attribution, credit, or compensation.
The United States has taken a cautious approach to AI regulation, amidst intensifying global efforts. Although it is uncertain whether broad legislation will be passed by Congress, President Biden issued an extensive executive order in October 2023 focused on ensuring AI safety, security, and trustworthiness. Implementing this order will pose significant challenges.
US Government Stance
In the U.S., recent years have seen the introduction of the first federal AI laws, including significant acts such as the National Artificial Intelligence Initiative Act of 2020. The AI in Government Act and the Advancing American AI Act are among others that drive AI-related policies across federal agencies.
Across recent congressional sessions, numerous AI-focused bills have been introduced, though few have passed. State-level AI legislation has also been significant, with Maryland, California, and Massachusetts leading in the number of AI-specific bills passed.
President Biden’s October 2023 executive order outlines a robust policy for the development and use of AI, covering various areas from AI system safety and security to the protection of U.S. citizens from AI-related threats. It encourages the development of privacy-preserving techniques and aims to advance equity and civil rights.
The order also supports the responsible use of AI in sectors like healthcare and education and emphasizes the importance of maintaining a competitive and innovative AI ecosystem in the U.S. It calls for increased international engagement on AI issues and stresses the need for responsible government use of AI technologies.
Reactions to the executive order have been mixed, with broad bipartisan support among the public but some criticism from Republicans regarding potential regulatory overreach. Experts generally view the executive order as a significant step, though they acknowledge the challenges in its implementation. The overall consensus is that while the executive order is a strong signal of U.S. intentions, comprehensive legislation is still needed for a more robust AI governance framework.
UK AI Policies
AI regulation in the UK is still in development as the government works out its policies. The government has established a comprehensive, outcomes-oriented regulatory framework for AI, guided by five principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
This framework for UK AI regulation will be implemented across various sectors by existing regulators, who will apply current laws and provide additional regulatory guidance. Select regulators are set to publish their AI strategy plans by April 30, 2024, offering essential guidance to businesses.
In addition to the framework, voluntary initiatives focused on the safety and transparency of advanced AI models and systems will support regulators’ efforts. While the framework will not immediately become law, the government recognizes that specific legislative measures may be needed in the future to fill gaps in existing regulations, especially around the challenges posed by sophisticated general-purpose AI and its major developers.
Organizations should anticipate more regulatory actions in the coming year, including new guidelines, data collection, and enforcement measures. Moreover, international companies must be ready to deal with variations in regulations across different countries.
EU Regulations
The EU has been one of the most active players in debates over big data and AI. The bloc has long had some of the strongest tech regulations, such as the General Data Protection Regulation (GDPR), but AI usage policies are still the subject of ongoing debate.
Generative AI is poised to revolutionize various sectors by enhancing innovation, empowering people, and boosting productivity. However, a growing challenge is the difficulty in distinguishing between content created by humans and that produced by AI, which could facilitate illegal or harmful activities. In response, policymakers worldwide are exploring how to incorporate watermarking techniques to create a more secure AI environment. China has already implemented regulations requiring AI-generated images to bear watermarks.
In the United States, the government is working on effective labelling and content provenance methods to help users recognize AI-generated content. The G7 has called on companies to create and apply reliable content authentication and provenance systems, including watermarking, to help identify AI-created content. The EU’s AI Act, provisionally agreed in December 2023, mandates that AI system providers and users facilitate the identification and tracking of AI-generated content, likely through watermarking.
However, the current AI data labelling and watermarking technologies face significant technical challenges in implementation, accuracy, and reliability. As a result, AI developers and policymakers must address the development of effective watermarking tools and the standardization and regulation of these techniques.
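To make those challenges concrete, here is a minimal sketch of the simplest labelling approach, metadata tagging, written in Python with the Pillow imaging library. The tag names are hypothetical rather than part of any official schema, and the example also illustrates the core weakness regulators must contend with: plain metadata can be stripped or forged, which is why standards such as C2PA rely on cryptographically signed manifests instead.

```python
# Minimal sketch: disclosure labelling via PNG text chunks (hypothetical keys).
# Assumes the Pillow library (pip install Pillow); this is not a robust watermark.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Re-save an image with disclosure tags embedded in its PNG metadata."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai-generated", "true")     # hypothetical key, not a standard
    metadata.add_text("ai-generator", generator)  # e.g. the model name/version
    image.save(dst_path, pnginfo=metadata)

def read_disclosure(path: str) -> dict:
    """Return any disclosure tags found in the image's PNG text chunks."""
    image = Image.open(path)
    # PNG files opened by Pillow expose their text chunks via the .text mapping.
    return {k: v for k, v in getattr(image, "text", {}).items() if k.startswith("ai-")}

if __name__ == "__main__":
    label_as_ai_generated("render.png", "render_labeled.png", "example-model-v1")
    print(read_disclosure("render_labeled.png"))
    # {'ai-generated': 'true', 'ai-generator': 'example-model-v1'}
```

Because a single re-save without the pnginfo argument discards these chunks, disclosure metadata alone cannot satisfy mandates like the AI Act’s; it has to be paired with signed provenance records or statistical watermarks embedded in the content itself.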
AI Policies From Different Companies
AI policy for companies can vary. According to a Forbes Advisor survey, more than half of business owners are implementing artificial intelligence for cybersecurity and fraud management, while a quarter of businesses worry that AI will hurt their website traffic.
The same survey indicates that an overwhelming majority of companies believe ChatGPT will help their business. Almost half of business owners already use AI to craft internal communications, and nearly two-thirds believe AI will improve customer relationships, which may signal an interest in AI chatbots down the line. Businesses are implementing the technology primarily for website content.
An acceptable use policy is the most common form of corporate AI policy, particularly on social media platforms. These measures often emphasize transparency rather than outright bans: Etsy’s AI art policy requires that sellers disclose when AI was used, for example, while Meta has settled on allowing AI-generated content but labelling it where possible.
X (formerly known as Twitter) has integrated AI into its business model. Its Grok AI draws on X posts and generates news headlines from the data it receives, with mixed results.
The World Economic Forum has also weighed in with recommendations for AI policy in schools, including setting up AI-focused task forces, building technological literacy, supporting professional development, and investing in AI research and development.
Overall, every company is looking to develop its own AI policies and personnel to deal with AI. Developments in the field will continue over the coming years, so we’ll keep an eye on them.