Twitter details plan to understand if its algorithms are causing ‘unintentional harms’

Representational image (Unsplash)
  • Twitter’s algorithms have been accused of racial bias in the past.
  • The social networking service has also been accused by people across the political spectrum of favouring one ideology over another.
  • The company has set up a META team that will study the issues and implement solutions.
  • Twitter has promised that the study will be in-depth and details will be revealed in a transparent manner.
Social networking giant Twitter has announced that it has set up a team of data scientists and engineers to figure out if its algorithms are causing “unintentional harm”.

The plan – dubbed the “Responsible Machine Learning Initiative” – will study Twitter’s existing algorithms that recommend content and crop images on the platform.

“We’re conducting in-depth analysis and studies to assess the existence of potential harms in the algorithms we use,” the company noted in its blog post.

To ensure its solutions are well-founded, Twitter says the team comprises experts from different areas, including engineers, researchers, and members of the trust and safety and product teams.

So, what exactly is “Responsible Machine Learning”?

According to Twitter, responsible machine learning means taking responsibility for algorithmic decisions, ensuring equity and fairness of outcomes, and being transparent about how its machine learning-driven algorithms work.


In essence, the intended outcome of this study is to help Twitter users better understand these algorithms and how they shape the content they see on the platform.

Twitter has formed what it calls a META team – ML Ethics, Transparency and Accountability – tasked with undertaking this study and improving the user experience based on its findings.

Why does Twitter need Responsible Machine Learning?

Twitter has faced backlash in the past over several issues, one of which is its image-cropping algorithm, which was found to exhibit a racial bias in favour of people with lighter skin.

One widely shared example involved a pair of images containing photos of former US President Barack Obama and Senator Mitch McConnell: Twitter’s algorithm picked McConnell’s face to show in the cropped preview, ignoring Obama in both instances.


Another example highlights the problem even more clearly: Twitter’s algorithm favoured a lightened version of Obama’s face over a darker version of the same photo.
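For readers curious what auditing a cropper for this kind of bias might look like in practice, here is a minimal, hypothetical sketch in Python of a paired-image test for a saliency-based cropper. The `predict_saliency` function, the `PairedTrial` structure and the bounding-box fields are illustrative assumptions, not Twitter’s actual code or API.

```python
# A minimal sketch of a paired-image audit for a saliency-based image cropper.
# `predict_saliency` is a stand-in for whatever model scores image regions;
# it is NOT Twitter's actual API.

from dataclasses import dataclass
from typing import Callable, List, Tuple

import numpy as np


@dataclass
class PairedTrial:
    """One test image containing two faces, with known bounding boxes."""
    image: np.ndarray                          # H x W x 3 pixel array
    face_a_box: Tuple[int, int, int, int]      # (x0, y0, x1, y1) for the group-A face
    face_b_box: Tuple[int, int, int, int]      # (x0, y0, x1, y1) for the group-B face


def face_saliency(saliency: np.ndarray, box: Tuple[int, int, int, int]) -> float:
    """Mean saliency score inside a face's bounding box."""
    x0, y0, x1, y1 = box
    return float(saliency[y0:y1, x0:x1].mean())


def audit_cropper(trials: List[PairedTrial],
                  predict_saliency: Callable[[np.ndarray], np.ndarray]) -> float:
    """Fraction of paired trials in which the cropper favours face A over face B.

    On a balanced test set, a value far from 0.5 suggests systematic bias.
    """
    wins_for_a = 0
    for trial in trials:
        saliency = predict_saliency(trial.image)   # one score per pixel, same H x W as the image
        score_a = face_saliency(saliency, trial.face_a_box)
        score_b = face_saliency(saliency, trial.face_b_box)
        if score_a > score_b:
            wins_for_a += 1
    return wins_for_a / len(trials)
```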


Apart from this, Twitter has also said it will study its Home timeline recommendations across racial subgroups, as well as content recommendations for different political ideologies across seven countries.
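As a rough illustration of how recommendation exposure across subgroups could be compared, the sketch below uses made-up group labels and impression logs rather than anything from Twitter: it computes each group’s share of algorithmically recommended impressions and a simple disparity ratio between two groups.

```python
# A minimal, hypothetical sketch of comparing recommendation exposure across subgroups.
# The group labels and impression logs are illustrative, not Twitter data.

from collections import defaultdict
from typing import Dict, Iterable, Tuple


def exposure_by_group(impressions: Iterable[Tuple[str, str]]) -> Dict[str, float]:
    """Share of algorithmically recommended impressions received by each group.

    `impressions` is an iterable of (author_group, impression_type) pairs,
    where impression_type is "algorithmic" or "chronological".
    """
    algo_counts: Dict[str, int] = defaultdict(int)
    total_algo = 0
    for group, impression_type in impressions:
        if impression_type == "algorithmic":
            algo_counts[group] += 1
            total_algo += 1
    return {group: count / total_algo for group, count in algo_counts.items()}


# Toy example: group_a receives two algorithmic impressions, group_b one.
shares = exposure_by_group([
    ("group_a", "algorithmic"), ("group_b", "algorithmic"),
    ("group_a", "algorithmic"), ("group_b", "chronological"),
])
disparity = shares["group_a"] / shares["group_b"]  # 1.0 would mean equal exposure shares
print(shares, disparity)
```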
