How algorithms today are disconnected from real world situations

  • The perils of automation were most recently on display during the Indonesian earthquake, when Facebook added balloons and confetti to posts about the deadly incident.
  • This isn’t the first time algorithms have stumbled over the nuances of human interaction.
  • From the stock market to beauty contests, artificial intelligence and machine learning still require human oversight to function.
Recently, Facebook issued an apology after adding balloons and confetti to posts referring to the deadly earthquake that struck the island of Lombok in Indonesia. The faux pas is being blamed on confusion over the word ‘selamat’, which can have multiple meanings - including ‘survived’ and ‘congratulations’.

In a statement to a news agency, a Facebook spokesperson said, “We regret that it appeared in this unfortunate context and have since turned off the feature locally.”

It’s safe to assume that an algorithmic error is to blame for this mix-up, which left users who were posting about being safe finding their messages covered in celebratory graphics - the automated feature is triggered by posts containing the word ‘congratulations’, which in Indonesian shares a word with ‘survived’.
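As a rough illustration, here is a minimal sketch of how a naive keyword trigger of this kind can misfire across languages. Facebook’s actual implementation is not public; the function and word list below are purely hypothetical.

```python
# Hypothetical sketch of a naive keyword trigger - NOT Facebook's real code.
# In Indonesian, 'selamat' appears both in celebrations ("selamat ulang tahun"
# = happy birthday) and in safety messages ("selamat dari gempa" = safe from
# the earthquake), so a bare keyword match cannot tell the two apart.

CELEBRATION_TRIGGERS = {"congratulations", "selamat"}

def should_add_confetti(post_text: str) -> bool:
    """Return True if the post contains any celebration trigger word."""
    words = (w.strip(".,!?") for w in post_text.lower().split())
    return any(w in CELEBRATION_TRIGGERS for w in words)

print(should_add_confetti("Selamat ulang tahun!"))  # True - a birthday wish
print(should_add_confetti("Selamat dari gempa"))    # True - "safe from the earthquake"
```

Both posts trip the same trigger, even though only one is a celebration - exactly the failure mode seen in Lombok.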

It’s also safe to assume Facebook’s algorithms don’t have a very good grasp of Indonesian, and there wasn’t any human oversight of what the algorithm was doing.

But this shouldn’t surprise anyone - automation has been caught out by the nuances of human language and interaction on many other occasions, from ride-sharing price surges to stock market crashes.


Algorithms don’t always get the meaning

Algorithms are nowhere near as capable of understanding human behaviour and language as we’d want them to be. A lot of what we say and do isn’t clear or obvious even to fellow humans, so it’s not difficult to imagine a machine trying hard and failing miserably.

Here’s a look at some other infamous incidents where algorithms completely missed the point - incidents that could have been avoided with smarter programming or human involvement.

1. Uber fare surges after terror attack
No, this hasn’t happened just once. Back in 2014, Uber implemented a surge after a siege at a Sydney cafe where a gunman was holding hostages. The pricing algorithm presumably responded to the rush of users trying to get out of the affected area by hiking fares.

And then again, in 2017, Uber found itself under fire after Londoners complained of a surge in fares immediately following the London Bridge terror attack. Uber did turn off the surge eventually, but not before angering loads of users.

Closer home, in India, users have alleged that Uber invoked surge pricing during public transport strikes and during the New Delhi government’s odd-even scheme.

2. Stock market drops after AP Twitter hack
Algorithmic trading, which offers faster responses than any human stockbroker, is a hot topic these days. But while trading algorithms can react quickly to changing market conditions, they aren’t very good at dealing with human nature.

Case in point: in 2013, hackers took over the Twitter handle of the Associated Press and sent out a tweet claiming there had been an explosion at the White House. Panic followed immediately, with trading programs dumping stock.

3. Beauty contest algorithm is clearly racist
In 2016, Beauty.AI, the ‘First International Beauty Contest Judged by Artificial Intelligence’, took place. The contest was supposed to be judged by an algorithm that would crunch common ‘indicators’ of beauty (symmetry, skin tone, etc.) and throw up a result.

Unfortunately, the ‘beauty bot’ turned out to be quite racist, and overwhelmingly preferred contestants with lighter skin.

4. Twitter users train Microsoft’s bot to become racist
Microsoft is in the mix as well. In 2016, the company ran an experiment with an automated, self-learning Twitter bot named Tay. Unfortunately, the Internet being what it is, Tay, which was designed to improve its conversational skills through interaction with others, began sending out some rather offensive tweets.

5. Images deleted after being tagged ‘obscene’
Facebook takes its ‘no-obscenity’ policy quite seriously. But, since it’s difficult to have actual humans check everything that’s uploaded, it relies on image processing algorithms. That, in turn, can have side effects like pop art, sketches by Renaissance painters, and beach photos being flagged. A simple search will lead you to a lot of posts that were mistakenly judged obscene.

6. Automated DMCA requests make life difficult for content creators
The Digital Millennium Copyright Act (DMCA) has proven to be extremely controversial, and many content creators have expressed concern that automated DMCA takedown notices could stifle creativity.

Google has also revealed that an astounding 99.5% of DMCA takedown notices sent through its system are sent by bots. And, as many YouTube channel owners have realised, these bots sometimes lack the ability to distinguish fair use, or even new content, from existing or copyrighted material.

Better algorithms and human oversight needed

So what’s the way out? Considering the sheer volume of new information generated every day, it’s just not feasible to do away with algorithms entirely. However, greater human oversight could help in many of these cases - algorithms could be programmed to call in a human before taking action on ‘sensitive’ triggers.
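As a minimal sketch of what that guard rail might look like - assuming a hypothetical keyword-based sensitivity check and review queue, none of which correspond to any real platform’s API - consider:

```python
from queue import Queue

# Hypothetical human-in-the-loop gate; the topic list and queue are
# illustrative stand-ins for a real classifier and review tooling.
SENSITIVE_TOPICS = {"earthquake", "attack", "shooting", "flood"}

human_review_queue: Queue = Queue()

def is_sensitive(text: str) -> bool:
    """Naive stand-in for a real sensitivity classifier."""
    return any(topic in text.lower() for topic in SENSITIVE_TOPICS)

def handle_post(post_text: str) -> str:
    """Defer to a person before acting automatically on sensitive posts."""
    if is_sensitive(post_text):
        human_review_queue.put(post_text)
        return "queued for human review"
    return "auto-processed"

print(handle_post("Selamat! We got married today"))      # auto-processed
print(handle_post("Survived the earthquake in Lombok"))  # queued for human review
```

The design choice is simple: automation handles the routine bulk, while anything touching a sensitive trigger is parked until a person looks at it.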

At the same time, as computing power increases and algorithms grow more sophisticated, they should become better at dealing with humans’ ambiguity and not-always-logical actions.