That’s a great way for a country to close the year with a bang! Just five days before the end of 2024, South Korea followed the European Union (EU) to become the second major jurisdiction to pass comprehensive legislation regulating AI. On December 26, 2024, the National Assembly of South Korea passed the bill for the Act on the Development of Artificial Intelligence and Establishment of Trust (AI Basic Act), which is expected to come into effect in January 2026 following a Cabinet vote and promulgation. This follows the approval given by the Judiciary Committee of the South Korean National Assembly on December 17, 2024. The move aligns with South Korea’s vision for AI innovation and governance.
What does the legislation include?
The proposed legislation emphasizes the need for a national cooperative system on AI, together with the systematic development of the AI industry, to create a legal basis for preventing AI-related risks. The proposed organizational framework includes the establishment of a National AI Committee and an AI Safety Research Institute to promote AI-related policies. South Korea aims to support this organizational system with research, academic data and an AI data centre. The nation also endeavours to build a reliable foundation for generative AI to mitigate its negative implications for society.
Salient features of the Act
The Act defines AI, high-impact AI, AI ethics, generative AI and AI business operators. It states that AI business operators providing products and services that use high-impact AI or generative AI must notify users of this in advance. Moreover, operators offering generative AI services must inform users that the system’s outputs were created by generative AI. Furthermore, if virtual outputs that are indistinguishable or difficult to distinguish from reality are provided, users must be clearly informed so that they can recognise the difference.
AI business operators are also required to implement measures to identify, assess and mitigate risks to AI safety where the amount of computation used for training exceeds the threshold prescribed by Presidential Decree. The Act also empowers the Ministry of Science and ICT to request that an AI business operator submit necessary materials, to have public officials conduct investigations, and to order measures for AI safety where a violation of the AI law is discovered or suspected.
Is it well thought out?
Yes, the proposed legislation is quite comprehensive and draws on a wide range of viewpoints. The bill consolidates 19 separate bills on AI reviewed by the Science, Technology, Information, Broadcasting and Communications Committee. Unifying these bills into a coherent framework, the legislation addresses ethical, safety and societal concerns related to AI. It defines AI as an “electronic implementation of intellectual capabilities that humans possess, such as learning, reasoning, perception, judgment and language comprehension.” This makes South Korea the first Asian nation to pass a comprehensive regulation dedicated to AI.
Not so different from the EU AI Act
Given the comprehensive groundwork laid by the EU AI Act, South Korea’s bill echoes many of the themes found in that regulation. Both frameworks classify AI systems according to their potential impact on safety and human rights while maintaining a risk-based approach. Both Acts also share an emphasis on ethical guidelines, transparency and the establishment of oversight bodies. The South Korean Act’s transparency obligations concerning deepfakes and anthropomorphism resemble those in Article 50 of the EU AI Act. South Korea’s Act imposes strict requirements on high-impact AI systems, essentially those that pose a high risk to individuals. Both the EU AI Act and the South Korean Act aim to protect fundamental rights and provide measures for standardization.
What’s expected tomorrow?
As the year draws to a close, the South Korean law comes as a welcome development for AI ethicists and lawmakers. With the United Kingdom and Japan developing their own AI policies, 2025 is expected to see more nations regulating AI through legislation, with a focus on its social implications. We can expect further momentum in the AI safety space in 2025.