
On Regulating AI

There has been quite a bit of chatter on all chatter-worthy platforms about the need to regulate AI. There have been a few epochs in modern history where the term ‘AI’ has almost gone mainstream and either threatened our very existence or helped us slouch just a little bit more towards utopia. I am not sure exactly when the current one started, but I do remember that Facebook rolling out AI-based content moderation systems had a lot to do with it. It’s been a few years and a few national elections since then, and AI still hasn’t solved the content moderation problem, mostly because moderation gets squarely in the way of monetization, but that’s a different topic.

What’s curious this time around is that the large technology companies that usually strongly dislike any oversight, or are at best reactive in their damage control, are the ones calling for clearer laws and regulations. I strongly believe this is an effect of lower trust in these companies, driven by their monopolization of every business they enter, as well as of the strong desire to set a ceiling on how far they’re able to use any technological advance to upend older and more important regulations around employment and intellectual property. What better way to gain positive press and favourable sentiment in the intellectual debate than to ask to be regulated because you believe that you’re in control of dangerous technology and want to use it only for the public good, recent history notwithstanding?

This brings me to ask: what exactly is regulation? What do we even mean by regulating AI? Isn’t regulation supposed to be reactive, grounded in public perception and impact, rather than proactively built upon doomsday scenarios?

Today, I came across an essay (archive) by a renowned FT columnist in which she points out that AI regulation ought to tackle three areas: AI needs to be released from the shackles of rich technology companies, the technology behind it needs to be made public to inform debate, and regulation should be flexible yet enforceable.

As I see it, regulating something that isn’t even fully defined is way, way ahead of its time. The EU has taken a giant leap by drafting an AI Act, and although I haven’t read it, it seems to focus heavily on ‘high-risk AI systems’ and on applications of AI rather than on how it is developed and trained. There is a presentation here that gets into the bigger details. Furthermore, while the EU could limit applications of such high-risk systems, there is a lot of procedural lag in protecting citizens were they to be exposed to the output of a system from another jurisdiction.

There are still a lot of questions around regulation. This led me to ask if there is a checklist for effective regulation that governments use. That turned out not to be such a bad idea as, indeed, there is such a checklist used by OECD countries, at least in principle. When I look at it, I find that the current debate misses a lot of context. For example, how do people understand any regulation when they don’t even know what AI is capable of right now, other than remixing some public-domain pictures or producing often-inaccurate chat transcripts? And then, how do we know that what helps one person doesn’t have a negative impact on another, which is often where regulatory actions fail? What about geopolitical differences? An AI platform in the EU might be designed to augment democratic systems, while one in shadier geographies might be used to supplant any such setup. There is also a focus on the impact of AI systems but not on how they’re trained in the first place. While there is a cursory requirement to use high-quality training data, where does that come from?

And this is all before we even get into the realm of regulating an AI that is generic and self-advancing. Why wouldn’t this kind of AI be capable of figuring out loopholes and exploiting them until it is too late for humanity to patch them? Who would legally be in control of an AI that has improved far beyond the version submitted for regulatory clearance?

There are so many questions simply because there is no AI, yet. We need to get back to the current epoch and have our companies and governments focus on the issues that plague humanity today, issues whose resolution would make the world a better, more habitable place in the future.


A Technology Proposal for Amsterdam’s New 1.5m Society

The city of Amsterdam recently invited (archive) proposals from residents and companies to help it plan the path ahead in the new normal: a 1.5m society. The goal was to gather creative ideas to help businesses deal with the changes while making sure that they stay in business. The odd thing about pitching ideas is that we’re in an unprecedented situation; there’s no collection of best practices or historical lessons that could be tweaked and turned into something applicable for the modern world.

At the same time, while many ideas would likely revolve around an app for this or an app for that, a delivery platform, or a new social networking app for business, I am not sure that’s the right way forward. Not after all the inequities perpetuated by ‘big tech’ in the last decade. The last thing anyone wants is one corporation being the gatekeeper of all physical commerce.

So, is the solution to instead trust the government? I think, fundamentally, smaller government at the city level is a lot more trustworthy than national policymaking. We do, after all, depend on the city to redress our grievances when it comes to parking spaces, sanitation, or waste collection. Amsterdam is in a unique situation: it has a woman mayor, and a lot of the infrastructure surrounding business activities is already digitalized.

The proposal I submitted is below. I am not uniquely qualified to act on it, nor do I have the organizational structure to do so, but I do believe that something like this is the way forward in the near term, without succumbing to mission creep.

PS: I know there are grammatical errors :o) I typed it up at the last minute this morning.