South Korea's 'world-first' AI laws face pushback amid bid to become leading tech power

South Korea launches comprehensive AI laws amid backlash from tech startups and civil society groups.

The South Korean government has unveiled what it claims is the world's first comprehensive set of artificial intelligence (AI) laws, designed to regulate the use of AI technology across a range of sectors. However, the new legislation has already drawn criticism from local tech startups and civil society groups, who argue that the rules are either too strict or do not go far enough.

The AI Basic Act, which took effect last Thursday, requires companies providing AI services to label AI-generated content and conduct risk assessments for high-impact AI systems used in areas such as medical diagnosis, hiring, and loan approvals. The law also stipulates that extremely powerful AI models must have safety reports, although the threshold is set so high that no models worldwide currently meet it.

The legislation has been hailed by government officials as a model for other countries to follow, but tech startups and civil society groups have expressed frustration with the rules. Many argue that they will create uncertainty and stifle innovation, particularly since companies must self-determine whether their systems qualify as high-impact AI.

One major concern is competitive imbalance: all Korean companies face the regulation regardless of size, while foreign firms that meet certain thresholds, such as Google and OpenAI, are exempt. This has led to warnings about the potential for a "technological arms race" in Korea.

Critics also argue that the law does not provide sufficient protection for people harmed by AI systems. Four organizations, including Minbyun, a collective of human rights lawyers, have issued joint statements highlighting the law's shortcomings and calling for clearer definitions of high-impact AI and exemptions for certain types of AI systems.

Despite the pushback, government officials maintain that the law is 80-90% focused on promoting industry rather than restricting it. The Ministry of Science and ICT has promised to clarify the rules through revised guidelines, and expects the law to "remove legal uncertainty" and build a healthy and safe domestic AI ecosystem.

Experts note that South Korea has opted for a more flexible approach to AI governance, centered on trust-based promotion and regulation, which may serve as a useful reference point in global AI governance discussions. However, the country's unique path may also lead to challenges in enforcing the law and ensuring its effectiveness in regulating the use of AI technology.
 