Skepticism as South Korea Tackles AI Regulation
Critics warn that vague definitions undermine the new law’s effectiveness.
Written by Tim Hornyak | 5 min • October 16, 2025
In August 2024, South Korean President Yoon Suk Yeol ordered a crackdown on deepfake adult content circulating on Telegram. Following similar concerns about election deepfakes, it was the latest example of mounting anxiety over AI’s role in reshaping public discourse and undermining trust.
Just four months later, on the night of Dec. 3, Yoon stunned the nation by declaring martial law in a televised address, ordering the arrest of opponents and suspending political activities at the National Assembly. The announcement was so shocking that many South Koreans initially dismissed it as AI-generated disinformation — a sign of how deeply digital skepticism had taken root.
It was, in fact, real. Hours later, opposition lawmakers, including party leader Lee Jae Myung, defied troops gathered at the National Assembly, which unanimously passed a motion to end the state of martial law — plunging the country into political chaos.
Against this backdrop of AI anxiety and democratic crisis, parliamentarians could not have chosen a more challenging time to present sweeping new legislation. But just weeks after voting to impeach Yoon, South Korea passed the AI Basic Act — becoming only the world’s second jurisdiction, after the European Union, to enact comprehensive AI regulation.
The Act reflects South Korea’s broader ambitions to become one of the world’s top three AI powers, despite persistent concerns about the technology’s risks. Still, the country is struggling with how to implement the law formally known as the Basic Act on the Development of Artificial Intelligence and the Establishment of Foundation for Trustworthiness. Promulgated on Jan. 21, 2025, it is scheduled to come into force on Jan. 22, 2026.
The law defines AI broadly as systems that replicate human intellectual abilities — learning, reasoning, perception and judgment — to make predictions, recommendations and decisions.
The Act “balances two key values: promoting the development of the AI industry and establishing a safe and trustworthy foundation for AI utilization,” said Shim Zeeseop, who is in charge of AI legislation at South Korea’s Ministry of Science and ICT, the government body in charge of the law.
The ministry describes the law as both an industrial competitiveness measure and a trust-building framework. But its path to passage was complicated. Since 2020, lawmakers have introduced 19 different AI-related bills with overlapping provisions. By late 2024, they had agreed to merge them into a single statute.
The Act is structured as a hybrid law: part industrial policy and part regulatory framework. It sets out national AI master plans, funding for R&D, support for standardization, the creation of AI training datasets, and the designation of AI industrial clusters and data centers.
Its centerpiece is the regulation of “high-impact AI” — systems used in critical sectors such as healthcare, criminal justice, transportation and government decision-making, where failures could endanger lives or violate fundamental rights. Companies using high-impact AI must implement risk management systems, maintain human oversight and be able to explain how their systems make decisions.
The Act also addresses generative AI. Providers must notify users when such systems are being used and must label outputs as AI-generated, especially when they involve audio, images or video that could easily be mistaken for reality. The Act allows an exception in the case of artistic or creative works, where notification may be provided in a more flexible and less intrusive manner. The transparency obligations are intended to reduce risks of deception, manipulation and disinformation.
Despite its ambitious scope, the South Korean law differs significantly from its European predecessor.
“After the EU passed the General Data Protection Regulation, this led to a significant turning point to renew and strengthen our own personal data protection act,” said Heejin Kim, visiting assistant professor in the Graduate School of Data Science and co-lead of the Trustworthy AI Lab at Seoul National University.
The same pattern emerged with AI regulation, Kim noted. “It’s widely recognized that the AI Framework Act of Korea is modeled after the EU AI Act in many ways. But compared to the European legislation, our law has a narrower regulatory scope and its enforcement measures, such as fines, are very limited,” said Kim.
The ministry can investigate, order corrective measures and fine offenders — but only up to 30 million won (about $21,000) — for violations such as failing to notify users of AI use, neglecting to designate a domestic representative or disobeying ministerial orders.
Despite surface similarities to EU regulation, the South Korean law “dresses like a Brussels daydream, but in reality [is] more akin to US executive orders that promote AI uptake,” according to researchers at the European Centre for International Political Economy. Like most countries, they argue, South Korea has taken a “wait and see” approach while the technology remains nascent.
If anything, the Act is aimed at growing the industry instead of regulating it. South Korea doesn’t have a large internal market, but it does have a significant number of advanced technology companies seeking to expand overseas. Plus, many international AI companies operate in South Korea, said Kim.
“We think that domestic companies need to consistently work on developing more innovative and high-quality tech products and services, but at the same time, it is equally important to guide them and prepare the market in general to be aligned with global standards,” added Kim.
Even before implementation, the AI Basic Act faces growing criticism. Industry groups are advocating for a two- to three-year delay to give companies more preparation time. Another sticking point is the Act’s definition of “high-impact AI,” which critics have slammed as too vague.
“The current bill classifies most generative AI as high-impact AI, imposing regulations that could severely obstruct industrial development,” Sangchul Park, a professor at Seoul National University School of Law who specializes in AI, told ChosunBiz. Each industry should have its own AI regulations, he suggested, noting that, for example, recruitment AI and autonomous vehicle AI require different considerations and should be categorized accordingly.
Seungmin Lee, Director of Intelligent Cyber Research at cybersecurity group Next Peak, believes the Act will have only limited impact. It leaves fears about deepfakes unaddressed, she asserted, which could fan public skepticism about authentic content. She also criticized its vague definition of high-impact AI and weak enforcement mechanisms.
“What the impact and effectiveness of the AI Basic Law will be is still unclear; further evaluation will be needed after this year of preparation and development,” Lee wrote on a blog by the Stimson Center, an international security think tank.
Kim, meanwhile, agreed that it’s too early for South Korea to impart meaningful lessons to the rest of the world. “Without proper means and concrete plans to implement what is agreed, legislating the framework Act alone cannot achieve what it proclaims,” said Kim. “That kind of legislative move is just an empty promise for people who want to see some government interventions to minimize the societal risks that could potentially arise from AI use and development.”
The Act’s implementing agency remains committed to the legislation, though it plans to solicit more opinions from industry while considering a regulatory grace period.
“The Korean government is in the process of preparing subordinate legislation and guidelines to faithfully implement the intent of the law,” said Shim. He expects the Act to enhance the country’s AI competitiveness and serve as “a valuable example for other countries.”
As of now, South Korea’s law stands at a crossroads between ambition and uncertainty. By unifying policy, promoting innovation and extending oversight to foreign firms, it signals a bold bid to position the country as a global AI leader. Yet delays, vague definitions and limited enforcement risk reducing it to a compromise that struggles to reassure skeptics.
The ultimate test will be implementation. As a semiconductor and technology powerhouse, South Korea’s regulatory choices will influence global AI governance — but only if the law proves more than symbolic. With subordinate regulations still being drafted and industry resistance mounting, the AI Basic Act’s sweeping promises remain largely unfulfilled.