Artificial intelligence is moving faster than most regulatory systems were built to handle. Across major jurisdictions, the emerging response is not a single global rulebook but a layered model: binding laws in some places, treaty obligations in others, and governance frameworks or principles that shape expectations even where no single AI statute exists. What is becoming clear is that regulators are not choosing between innovation and safety. They are increasingly trying to preserve innovation while imposing stronger controls where AI creates meaningful risks to rights, safety, markets, or public trust.

The European Union has built the clearest binding model so far. The European Commission's implementation timeline states that the AI Act applies progressively: general provisions and prohibitions apply from February 2, 2025, rules for general-purpose AI from August 2, 2025, most remaining obligations from August 2, 2026, and full rollout is foreseen by August 2, 2027. That staged structure reflects a risk-based design rather than a one-size-fits-all regime, as set out in the Commission's timeline for implementation of the EU AI Act.

A second major development is the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. Article 1 states that the Convention aims to ensure that activities within the lifecycle of AI systems are fully consistent with human rights, democracy, and the rule of law. The Convention also calls for transparency and oversight in Article 8, safe innovation and controlled testing environments in Article 13, and a risk and impact management framework in Article 16, as set out in the Council of Europe convention text.

China has taken a more state-directed approach, but it has reached a similar conclusion that generative AI should not be left entirely to voluntary governance. The official CAC text of the Interim Measures for the Management of Generative Artificial Intelligence Services states that the measures were issued on July 13, 2023 and took effect on August 15, 2023. Article 1 states that the measures were formulated to promote healthy development and standardized application while safeguarding national security and public interests, according to the official CAC notice.

South Korea has also moved into dedicated AI legislation. The Ministry of Science and ICT states that the Basic Act on the Development of Artificial Intelligence and the Establishment of Foundation for Trustworthiness was passed by the National Assembly on December 26, 2024 and that the new law is to take effect in January 2026. The ministry also describes the Act as both supporting AI development and establishing a foundation for trustworthiness, as described in A New Chapter in the Age of AI: Basic Act on AI Passed at the National Assembly’s Plenary Session and National AI Strategy Committee Launched as Korea’s Top Control Tower for AI Policy.

In the United States, the picture is more fragmented, but it is not accurate to describe it as unregulated. NIST states that it developed the AI Risk Management Framework to help manage risks to individuals, organizations, and society associated with AI. NIST also describes the framework as voluntary and intended to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems, as described on the NIST AI Risk Management Framework page and the AI RMF Development page.

The federal government has also issued AI-specific governance direction for agencies. OMB Memorandum M-24-10 states on page 2 that agencies must continue to comply with applicable OMB policies in other domains relevant to AI, including enterprise risk management, privacy, accessibility, IT, and cybersecurity. That confirms that AI governance in the U.S. federal context is being integrated into broader existing compliance and risk-management structures, as stated in M-24-10: Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence.

At the state level, Colorado enacted one of the clearest U.S. statutory examples. The state’s official bill page for SB24-205 states that, on and after February 1, 2026, developers and deployers of high-risk AI systems must use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination, as stated in SB24-205 Consumer Protections for Artificial Intelligence.

Singapore has remained an important governance-focused jurisdiction. IMDA states that the AI Verify Foundation and IMDA developed a draft Model AI Governance Framework for Generative AI, expanding on the earlier framework for traditional AI. IMDA also states that Singapore established the AI Verify Foundation to advance responsible AI testing worldwide. This is not a single hard-law regime, but it is a practical model of assurance, testing, and governance infrastructure, as described in Model AI Governance Framework 2024 – Press Release, Singapore Launches AI Verify Foundation 2023, and Artificial Intelligence in Singapore.

At the international principles level, UNESCO and the OECD remain highly influential. UNESCO states that its Recommendation on the Ethics of Artificial Intelligence is its first global standard on AI ethics and is applicable to all 194 UNESCO member states. UNESCO also states that the protection of human rights and dignity is the cornerstone of the Recommendation. The OECD states that its AI Principles promote AI that is innovative and trustworthy and that respects human rights and democratic values. These instruments are not substitutes for statute, but they help explain why so many national frameworks converge on transparency, accountability, human oversight, and trustworthiness, as described on UNESCO's Recommendation on the Ethics of Artificial Intelligence and Ethics of Artificial Intelligence pages, and by the OECD in AI Principles and the Recommendation of the Council on Artificial Intelligence.

Taken together, these sources show a consistent direction of travel. The world is not moving toward one universal AI code. It is moving toward overlapping legal, treaty, and governance layers that all point to the same operating expectation: organizations should know where AI is being used, assess the risks it creates, maintain meaningful oversight, and be able to justify the controls they put in place. Innovation remains the objective, but innovation without governance is no longer the model these frameworks are building toward.

References

European Union, European Commission, Directorate-General for Communications Networks, Content and Technology. Timeline for the Implementation of the EU AI Act. AI Act Service Desk, n.d., ai-act-service-desk.ec.europa.eu/en/ai-act/timeline/timeline-implementation-eu-ai-act. Accessed 15 Apr. 2026.

Council of Europe. Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. Council of Europe Treaty Series No. 225, 5 Sept. 2024, rm.coe.int/1680afae3c. Accessed 15 Apr. 2026.

China, Cyberspace Administration of China. 生成式人工智能服务管理暂行办法 [Interim Measures for the Management of Generative Artificial Intelligence Services]. Cyberspace Administration of China, 13 July 2023, www.cac.gov.cn/2023-07/13/c_1690898327029107.htm. Accessed 15 Apr. 2026.

Korea, Republic of, Ministry of Science and ICT. A New Chapter in the Age of AI: Basic Act on AI Passed at the National Assembly’s Plenary Session. Ministry of Science and ICT, 27 Dec. 2024, www.msit.go.kr/eng/bbs/view.do?bbsSeqNo=42&mId=4&mPid=2&nttSeqNo=1071&sCode=eng. Accessed 15 Apr. 2026.

Korea, Republic of, Ministry of Science and ICT. National AI Strategy Committee Launched as Korea’s Top Control Tower for AI Policy. Ministry of Science and ICT, 4 Feb. 2025, www.msit.go.kr/eng/bbs/view.do?bbsSeqNo=42&mId=4&mPid=2&nttSeqNo=1165&sCode=eng. Accessed 15 Apr. 2026.

United States, National Institute of Standards and Technology. AI Risk Management Framework. National Institute of Standards and Technology, n.d., www.nist.gov/itl/ai-risk-management-framework. Accessed 15 Apr. 2026.

United States, National Institute of Standards and Technology. AI RMF Development. National Institute of Standards and Technology, n.d., www.nist.gov/itl/ai-risk-management-framework/ai-rmf-development. Accessed 15 Apr. 2026.

United States, Executive Office of the President, Office of Management and Budget. M-24-10: Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. Executive Office of the President, 28 Mar. 2024, www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf. Accessed 15 Apr. 2026.

Colorado. General Assembly. SB24-205 Consumer Protections for Artificial Intelligence. Colorado General Assembly, n.d., leg.colorado.gov/bills/sb24-205. Accessed 15 Apr. 2026.

Singapore, Infocomm Media Development Authority. Model AI Governance Framework 2024 – Press Release. Infocomm Media Development Authority, 28 May 2024, www.imda.gov.sg/resources/press-releases-factsheets-and-speeches/press-releases/2024/public-consult-model-ai-governance-framework-genai. Accessed 15 Apr. 2026.

Singapore, Infocomm Media Development Authority. Singapore Launches AI Verify Foundation 2023. Infocomm Media Development Authority, 7 June 2023, www.imda.gov.sg/resources/press-releases-factsheets-and-speeches/press-releases/2023/singapore-launches-ai-verify-foundation. Accessed 15 Apr. 2026.

Singapore, Infocomm Media Development Authority. Artificial Intelligence in Singapore. Infocomm Media Development Authority, n.d., www.imda.gov.sg/about-imda/emerging-technologies-and-research/artificial-intelligence. Accessed 15 Apr. 2026.

UNESCO. Recommendation on the Ethics of Artificial Intelligence. UNESCO, n.d., www.unesco.org/en/articles/recommendation-ethics-artificial-intelligence. Accessed 15 Apr. 2026.

UNESCO. Ethics of Artificial Intelligence. UNESCO, n.d., www.unesco.org/en/artificial-intelligence/recommendation-ethics. Accessed 15 Apr. 2026.

UNESCO. Recommendation on the Ethics of Artificial Intelligence. Legal Affairs, UNESCO, n.d., www.unesco.org/en/legal-affairs/recommendation-ethics-artificial-intelligence. Accessed 15 Apr. 2026.

OECD. AI Principles. OECD, n.d., www.oecd.org/en/topics/ai-principles.html. Accessed 15 Apr. 2026.

OECD. Recommendation of the Council on Artificial Intelligence. OECD Legal Instruments, 22 May 2019, legalinstruments.oecd.org/en/instruments/oecd-legal-0449. Accessed 15 Apr. 2026.
