
On Regulating AI

There has been quite a bit of chatter on all chatter-worthy platforms about the need to regulate AI. There have been a few epochs in modern history when the term ‘AI’ has almost gone mainstream and either threatened our very existence or helped us slouch just a little bit more towards utopia. I am not sure when the current one started, but I do remember that it was Facebook rolling out AI-based content moderation systems that started it all. It has been a few years and a few national elections since then, and AI still hasn’t solved the content moderation problem, mostly because moderation gets squarely in the way of monetization, but that’s a different topic.

What’s curious this time around is that the large technology companies, which usually strongly dislike any oversight and tend to be reactive in their damage control, are the ones calling for clearer laws and regulations. I strongly believe this is an effect of lower trust in these companies, driven by their monopolization of every business they enter, as well as a strong desire to set a ceiling on how far they can use any technological advance to upend older and more important regulations around employment and intellectual property. What better way to gain positive press and favourable debate sentiment than to ask to be regulated because you believe you’re in control of dangerous technology and want to use it only for the public good, recent history notwithstanding?

This brings me to ask: what exactly is regulation? What do we even mean by regulating AI? Isn’t regulation supposed to be reactive, based on public perception and impact, instead of proactively built upon doomsday scenarios?

Today, I came across an essay (archive) by a renowned FT columnist where she points out that AI regulation ought to tackle three areas: AI needs to be released from the shackles of rich technology companies, the technology behind it needs to be made public to help debate, and regulation should be flexible yet enforceable.

As I see it, regulating something that isn’t even fully defined is getting way, way ahead of ourselves. The EU has taken a giant leap by drafting an AI Act, and although I haven’t read it, it seems to focus a lot on ‘high-risk AI systems’ and upon applications of AI rather than on how it is developed and trained. There is a presentation that covers the bigger details here. Furthermore, while the EU could limit applications of such high-risk systems, there is a lot of procedural lag in protecting citizens were they to be exposed to a system’s output from another jurisdiction.

There are still a lot of questions around regulation. This led me to ask if there is a checklist for effective regulation that governments use. That turned out to be a not-so-bad idea as, indeed, there is such a checklist used by OECD countries, at least in principle. When I look at it, I find that the current debate misses a lot of context. For example, how do people understand any regulation when they don’t even know what AI is capable of now, other than remixing some public-domain pictures or producing often-inaccurate chat transcripts? And then, how do we know that what helps someone doesn’t have a negative impact on another, which is often where regulatory actions fail? What about geopolitical differences? An AI platform in the EU might be designed to augment democratic systems, while one in shadier geographies might be used to supplant any such setup. There is also a focus upon the impact of AI systems but not on how they’re trained in the first place. While there is a cursory requirement to use high-quality training data, where does that come from?

And this is all before we even get into the realm of regulating an AI that is general and self-improving. Why wouldn’t this kind of AI be capable of figuring out loopholes and exploiting them until it is too late for humanity to patch them? Who would legally be in control of an AI that has improved far beyond the version that was submitted for regulatory clearance?

There are so many questions simply because there is no AI yet. We need to get back to the current epoch and have our companies and governments focus on the issues that plague humanity today, and whose resolution would make the world a better, more habitable place in the future.
