Categories: Life and Personal

Fun with Apple Photos

Go online and search for ‘what will my baby look like’ and you’ll be faced with dozens of websites that claim to use AI and machine learning to guess your baby’s appearance. Judging by the techniques these websites use to lure you in and make you pay, finding out what your baby might look like is apparently quite the temptation. One such website claimed to be free, right up until I uploaded pictures of myself and my wife, at which point I was asked to sign up for a 3-day free trial. The cost after that: €9 a week.

But what if you do have a child? There’s an even more fun game now, and that’s trying to figure out who your baby actually looks like. Does the nose belong to the mom … or dad? What about the eyes? It’s even fun for all your relatives and friends as they spend time staring at your child, and, in essence, judging your looks! 😄

This is where AI has a fun role to play, too. Case in point: I have a fairly large collection of photos of myself and my wife taken over the years. After every vacation, I make it a point to ‘train’ the face recognition by actively helping it categorize a face as either mine or my wife’s. And my ‘People’ album contains only family, which means the AI is, in theory, very well trained.

Here’s the awesome part: for a long while, I have maintained that my daughter looks quite a lot like me. A few people agree, while some strongly disagree. The AI is on my side. Quite frequently, and especially as she gets older, the face recognition algorithm flags a picture of me and asks whether it is my daughter. The first time was an anomaly; the second, a coincidence; but the third was most definitely the algorithm on to something.

So, there we have it. She really does look quite a bit like me, and I have the screenshots to prove it!

Categories: Life and Personal, Politics, Tech and Culture

On Regulating AI

There has been quite a bit of chatter on all chatter-worthy platforms about the need to regulate AI. There have been a few epochs in modern history where the term ‘AI’ has almost gone mainstream and either threatened our very existence or helped us slouch just a little bit closer to utopia. I am not sure when the current epoch started, but I do remember that it was Facebook rolling out AI-based content moderation systems that set it off. It’s been a few years and a few national elections since then, and AI still hasn’t solved the content moderation problem, mostly because moderation gets squarely in the way of monetization, but that’s a different topic.

What’s curious this time around is that the large technology companies, which usually strongly dislike any oversight or are at best reactive in their damage control, are the ones calling for clearer laws and regulations. I strongly believe this is an effect of lower trust in these companies, driven by their monopolization of every business they get into, as well as of the strong desire to set a ceiling on how far they’re able to use any technological advance to upend older and more important regulations around employment and intellectual property. What better way to gain positive press and goodwill in the intellectual debate than to ask to be regulated because you believe you’re in control of a dangerous technology and want to use it only for the public good, recent history notwithstanding?

This brings me to ask: what exactly is regulation? What do we even mean by regulating AI? Isn’t regulation supposed to be reactive, based on public perception and impact, rather than proactively built upon doomsday scenarios?

Today, I came across an essay (archive) by a renowned FT columnist in which she points out that AI regulation ought to tackle three areas: AI needs to be released from the shackles of rich technology companies, the technology behind it needs to be made public to inform debate, and, lastly, regulation should be flexible yet enforceable.

As I see it, regulating something that isn’t even fully defined is way, way ahead of its time. The EU has taken a giant leap by drafting an AI Act, and although I haven’t read it, it seems to focus a lot on ‘high-risk AI systems’ and on applications of AI rather than on how it is developed and trained. There is a presentation here that covers the bigger details. Furthermore, while the EU could limit applications of such high-risk systems, there would be a lot of procedural lag in protecting citizens were they exposed to the output of a system from another jurisdiction.

There are still a lot of questions around regulation. This led me to ask whether there is a checklist for effective regulation that governments use. That turned out to be not such a bad idea, as there is indeed such a checklist used by OECD countries, at least in principle. When I look at it, I find that the current debate misses a lot of context. For example, how do people understand any regulation when they don’t even know what AI is capable of right now, other than remixing some public-domain pictures or producing often-inaccurate chat transcripts? And then, how do we know that what helps one person doesn’t have a negative impact on another, which is often where regulatory actions fail? What about geopolitical differences? An AI platform in the EU might be designed to augment democratic systems, while one in shadier geographies might be used to supplant any such setup. There is also a focus on the impact of AI systems but not on how they’re trained in the first place. While there is a cursory requirement to use high-quality training data, where does that data come from?

And this is all before we even get into the realm of regulating an AI that is general-purpose and self-improving. Why wouldn’t this kind of AI be capable of figuring out loopholes and exploiting them until it is too late for humanity to patch them? Who would legally be in control of an AI that has improved far beyond the state in which it was submitted for regulatory clearance?

There are so many questions simply because there is no such AI yet. We need to get back to the current epoch and have our companies and governments focus on the issues that plague humanity today, issues that, if addressed, would make the world a better and more habitable place in the future.

Categories: Tech and Culture

Artificial Intelligence in 2020

AI, or Artificial Intelligence, is a fascinating term. It’s been part of our lives, in one form or another, ever since we were born. It’s only in the last decade that the usage of the term has become so ‘daily’. Nary a day goes by when you don’t run into AI impacting one aspect of our lives or another.

What was previously a thing of the future is suddenly something we use and interact with every day. Are our computers suddenly intelligent, more so than they were 20 years ago, or has the term’s definition merely been rebased to their current capabilities in order to influence our behavior?

AI suddenly became a thing of this century a few years ago, as my memory recalls, during a Facebook conference that had a session about how the company was using ‘Artificial Intelligence’ to make the platform safer for its users. It is hard to manually moderate all the content being liked and shared by billions of users (they’re not customers) every hour, and so the company desperately needed a way to automate a huge portion of that grunt, and often mentally disturbing, work. This is where AI became a thing that exists today instead of a utopian future.

That is not to say that AI, the term, is merely cunning marketing. I mean, even though your chances of being taken seriously in the money markets are greatly improved if you throw the term into your communications material, a lot of it has to do with the quantum jump in our computers’ capabilities. Moore’s law is real, after all.

I just came across two articles in this morning’s newspaper. The first was about YouTube finally reverting to using more humans for content moderation after its automated systems (read: algorithms) falsely flagged content that was safe and let through questionable content. The company concluded that the financial burden of investigating moderation appeals was higher than any cost savings from automating the moderation process.

The second was an opinion piece about how AI has ‘disappointed’ us during the ongoing pandemic. The premise is that the current state of AI works incredibly well if humans intervene early on and help it decipher right from wrong, good from bad, or wanted from unwanted. What we call AI today is largely just big data combined with advances in computing technology to stimulate ‘machine learning’, and, as with us humans, learning requires early intervention to prevent learning all the wrong things.

There was another interesting event that went ‘viral’ on Twitter this weekend. Apparently, in order to generate a thumbnail preview of any posted picture, the website uses an algorithm to center the crop on a key element. In the case of pictures with people, the idea is to center the thumbnail on a key person so that the thumbnail is more relevant and informative in the post. There was a snag, though. The algorithm consistently preferred a white person as this key person whenever the picture also contained a non-white person.
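For the curious, here is a minimal sketch of how such saliency-driven cropping might work in principle. Everything in it, including the saliency_map and crop_thumbnail names and the crude brightness-based scoring, is a hypothetical stand-in rather than Twitter’s actual system; the point is simply that the crop follows wherever the model assigns the most ‘importance’, so any bias in that scoring lands straight in the thumbnail.

```python
# Toy illustration of saliency-based thumbnail cropping (not Twitter's real code).
import numpy as np

def saliency_map(image: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in: a real system would use a trained saliency model.
    # Here we simply treat brighter pixels as more 'salient'.
    return image.mean(axis=2)

def crop_thumbnail(image: np.ndarray, crop_h: int, crop_w: int) -> np.ndarray:
    """Slide a crop window over the image and keep the one with the highest total saliency.

    Assumes the image is an H x W x 3 array at least as large as the crop.
    """
    saliency = saliency_map(image)
    img_h, img_w = saliency.shape
    best_score, best_crop = -1.0, None
    for top in range(0, img_h - crop_h + 1, 16):        # coarse stride keeps the sketch fast
        for left in range(0, img_w - crop_w + 1, 16):
            score = saliency[top:top + crop_h, left:left + crop_w].sum()
            if score > best_score:
                best_score = score
                best_crop = image[top:top + crop_h, left:left + crop_w]
    return best_crop

# Usage: thumbnail = crop_thumbnail(photo, crop_h=240, crop_w=240)
```

Whatever notion of ‘importance’ the model has learned ends up deciding who makes it into the preview, which is exactly why a skewed model produces skewed thumbnails.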

Clearly, the current state of AI is not there yet, and it often leads to results that are actually bad. Some of this AI is also generating our Internet search results, creating even more division and sowing hatred among people. AI also has the potential to change people’s lives over the long term, as was evinced by the algorithmic generation of school-leaving results in the UK a while ago.

But AI is cool, and just making it a part of your business plan could make the difference between being able to launch that company and watching the idea of it slowly wither away. Investments are largely buzzword-driven.

So, what are we to do?

As with politics, the clear way to make a long-term positive impact is to educate others. Part of this also involves keeping up with the evolution of AI yourself. We can help others understand that computers aren’t autonomous and will only learn what they’re taught. And, like everyone else, those teachers will often make mistakes. Fact-check. Seek answers. Clarify.

Computers can be wrong.

Categories: Featured, Life and Personal, Politics, Tech and Culture

Blogging and the Spread of Truth in the Age of Platforms

A lot of people have proclaimed that blogging is dead, that it doesn’t generate any traffic, and that no one reads blogs anymore. Personally, I can’t remember the last time I kept up with a blog on a regular basis the way I did a few years ago. The problem is not a lack of people who want to share their ideas. Rather, as more people take to ‘social media’ and instant messaging, there remains very little incentive to write a well-thought-out post to share. This means people now spend less time on long-form writing than they do on just sharing snippets.

Indeed, if you search for something of interest, you are more likely to find the first two result pages filled with SEO-fied links about products and advertising than any relevant read. As more and more advertising money flows into search, an SEO economy is being created in which the only winners are websites with a huge advertising and/or SEO budget.

At the same time, a lot of platforms are being created to help people express themselves, with Facebook at the forefront, trailed by companies like Medium. There is no dearth of hosted blog providers who have adopted the Twitter approach of follows and likes to float more popular posts towards the top. A lot of companies boast of using ‘AI’ to figure out what content will prove stickier and hence generate more clicks for its authors.

People don’t even read newspapers anymore. In a recent Facebook exchange, I was reminded by a ‘mainstream media’ sceptic that newspapers, or dead-tree publications, as he called them, are not the only way to procure your dose of daily news. Indeed, what was once seen as blogging is now increasingly also the format used to report news. It’s the ease of sharing and of embedding advertising that makes online blogging a wonderful substitute for subscribing to a printed or electronic newspaper.

So, why were blogs such a wonderful thing?

You could always count on a multitude of blogs positing different approaches to solving a particular problem or educating you about a topic from all perspectives. Stuck trying to figure out how your country’s foreign policy works? Just read a few posts by passionate bloggers who breathe foreign policy and are eager to share their opinions and understanding.

Newspapers are feeling the heat, too. While a lot of them have established credible online and digital distribution systems, right down to monetization, they simply cannot compete with the phenomenon of click-driven ‘fake news’. Whereas in the past people were careful not to treat a particular blogger or website as the face of truth, now that social media has made blogging a more mainstream way to distribute facts, the area is getting murky. A lot of these websites are primarily content aggregators, uncritically ingesting content from other, similar websites or people. What generates clicks are headlines. What’s the incentive to even hire journalists and practice true journalism anymore if the truth is difficult to swallow and also does not sell well?

Using social media and these blogging platforms is easier than ever because you don’t have to worry about technical nitty-gritty like security and maintenance. At the same time, most of these platforms are free to publish on, as they make money through advertising. Their currency is likes and followers. You, as an author, feel you’re getting enhanced reach.

Yesterday, I even watched, live, as an incoming president of a developed country dismissed a credible and historic news channel as a purveyor of ‘fake news’.

There is a huge problem inherent in the ‘platformization’ of the web: censorship. While I have not had the pleasure of living in an authoritarian state, a lot of people have that misfortune. Platforms have to follow local laws, which change abruptly based on who controls the government. If they don’t follow these laws, they lose the market and hence the money. There are rumors that Facebook is working on a special censorship tool for the Chinese market that would allow it to enter, and hence make a ton of money from, the world’s most populous market. Recently, it also started censoring posts and notes written unfavorably about the government in the Philippines.

Apart from censorship, since you don’t control the platform and the laws change abruptly, you can never be sure that your news or content will outlive the platform or won’t suddenly be deleted one day.

Solution: let’s get back to basics. Have a friend set up your blog for you, because if you control your platform, you control your freedom of speech. If your hosting provider tries to censor you, there are others that will offer you refuge. The web was built to be run that way.

Here is something I shared on Facebook when the platform was accused of spreading fake news:

To say that the problem is just ‘fake news’ would be trivializing it. To say that the problem of ‘fake news’ could be solved technologically would be fooling everyone.

There are multiple issues, one of them being a conflict of interest. Facebook makes money based on clicks. Fake and sensational information generates more clicks. Any technological solution would be at odds with the objective of maximizing clicks and visits.

AI is another example of Silicon Valley’s bubble. By nature, AI, and hence Facebook’s approach of creating algorithms, will always lag behind trends in society and pop culture. AI is a cute term for big data collection, which makes it implicit that any intelligence is created only after the trends go mainstream. AI is the reason everyone’s news feed is messed up and also why FB insists upon not showing posts chronologically; that would impact click-throughs. When FB talks about AI, translate it to: a process of prioritizing paid posts and external links over user-generated content in a way that is less obvious and annoys you just a tad less than it would take to make you quit.

The best way to use FB is to use it like a repository or a blog; that way, chronological order wouldn’t matter much. Stop relying on the feed. Search organically for posts and pictures. And although Facebook makes it near impossible, switch your feed to show posts in chronological order.

Most importantly, don’t make it the only place you seek out information. The web is huge.

If you have to share something, first consider adding something more to it, or even saying it in your own words. The less time and effort you put into what you share with friends and family, the easier it gets for any AI to win over humanity and to widen the gap between the elites and the ‘losers’.

AI’s currency is your lack of time and effort. Make it clear that you’re the boss of your profile.

Now, more than ever, it is important to start reading credible news sources. If you can’t afford a newspaper subscription, find the nearest library that has one. If you read something online, make sure you can verify its authenticity by checking other sources. If you’re still unsure, ask someone else.

When more people blog and share their ideas, rather than mere snippets or forwards, the whole country moves forward. The free exchange of ideas enables society to progress and to settle differences through intellectual exchange. More opinions enable better policies.

The least we can expect from a developed civilization is the facilitation of free and uncensored exchange of ideas.