AI — Artificial Intelligence — is a fascinating term. It has been part of our lives, in one form or another, ever since we were born. It is only in the last decade, though, that the term has become so commonplace. Nary a day goes by without AI touching some aspect of our lives.
What was previously a thing of the future is suddenly something we use and interact with every day. Are our computers suddenly more intelligent than they were 20 years ago, or has the term's definition merely been rebased to their current capabilities in order to influence our behavior?
As my memory recalls, AI became a thing of the present a few years ago, during a Facebook conference that had a session about how the company was using 'Artificial Intelligence' to make the platform safer for its users. It is hard to manually moderate all the content being liked and shared by billions of users (they're not customers) every hour, so the company desperately needed a way to automate a huge portion of that grunt work, which is often mentally disturbing. This is when AI suddenly became something that exists today instead of a utopian future.
That is not to say that AI, the term, is merely cunning marketing. Even though your chances of being taken seriously in the money markets improve greatly if you sprinkle the term throughout your communications material, a lot of the shift has to do with the quantum leap in our computers' capabilities. Moore's law is real, after all.
I just came across two articles in this morning's newspaper. The first was about YouTube reverting to using more human moderators after its automated systems (read: algorithms) falsely flagged content that was safe and let questionable content through. YouTube concluded that the financial burden of investigating moderation appeals was higher than any cost savings from automating the moderation process.
The second was an opinion piece about how AI has 'disappointed' us during the ongoing pandemic. Its premise is that the current state of AI works incredibly well only if humans intervene early on and help it tell right from wrong, good from bad, or wanted from unwanted. What we call AI today is largely big data combined with advances in computing technology to enable 'machine learning', and, like us humans, machines need early intervention to avoid learning all the wrong things.
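To make that point concrete, here is a minimal, entirely hypothetical sketch (toy data, invented labels, not any real moderation system): a learner, however sophisticated, simply reproduces what its human teachers tell it, mistakes included.

```python
# A toy "learner" that memorizes the labels its human teachers provide.
# If a teacher labels something wrongly, the machine learns that wrong thing.

from collections import Counter

def train(examples):
    """Memorize label counts per feature: the crudest form of 'learning'."""
    model = {}
    for feature, label in examples:
        model.setdefault(feature, Counter())[label] += 1
    return model

def predict(model, feature):
    """Predict the majority label seen for this feature during training."""
    return model[feature].most_common(1)[0][0]

# Teachers label posts as 'safe' or 'unsafe'. One label is a human mistake.
training_data = [
    ("cat video", "safe"),
    ("cat video", "safe"),
    ("scam link", "unsafe"),
    ("news article", "unsafe"),  # mislabeled by a tired moderator
]

model = train(training_data)
print(predict(model, "cat video"))     # learned 'safe', as taught
print(predict(model, "news article"))  # the human mistake sticks
```

Real machine-learning systems generalize far beyond memorization, but the dependency is the same: the quality of the early human labeling bounds the quality of what the machine ends up "knowing".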
Another interesting event went 'viral' on Twitter this weekend. Apparently, to generate a thumbnail preview of a posted picture, the website uses an algorithm to center the crop on a key element. For pictures of people, the idea is to center the thumbnail on a key person so that the preview is more relevant and informative. There was a snag, though: whenever a picture contained both a white and a non-white person, the algorithm consistently picked the white person as that key person.
Clearly, the current state of AI is not there yet, and it often produces results that are actually harmful. Some of this AI also generates our Internet search results, creating even more division and sowing hatred among people. AI can also change people's lives for the long term, as evinced by the algorithmic generation of school-leaving results in the UK a while ago.
But AI is cool, and simply making it part of your business plan could mean the difference between launching that company and watching the idea slowly wither away. Investments are largely buzzword-driven.
So, what are we to do?
As with politics, the clearest way to make a long-term positive impact is to educate others. Part of this also involves keeping up with the evolution of AI yourself. We can help others understand that computers aren't autonomous and will only learn what they're taught. And, like everyone else, these teachers will often make mistakes. Fact-check. Seek answers. Clarify.
Computers can be wrong.