By Olga Grinina
Why should we care about AI bias at all?
Alongside recent notorious data-security breaches, AI is probably the most controversial, yet most cherished, subject in tech today: it is hailed as a blessing by the likes of Mark Zuckerberg, or cursed and questioned by Elon Musk and Stephen Hawking. Interestingly, those raising these concerns are not really afraid of robots taking over and massacring humanity in the best tradition of ‘Terminator 2: Judgment Day’. What the AI sceptics are really worried about lies a bit deeper: responsible deployment of AI in an era when machine learning algorithms and software are becoming available to the masses. Researchers have long discussed how to secure unbiased AI that is not affected by human prejudice about race or gender. But is there any solid grounding for this fuss in the first place?
It’s all about the data!
AI is trained on the big data that human developers feed into it. Simply put, if artificial intelligence is trained on biased data, it is doomed to make biased decisions. For example, an AI search algorithm deployed in an online recruitment service was recently shown to rank male candidates higher than female ones in the search results! Another instance is one of those chatbots created by the dozen for all kinds of purposes, like Microsoft’s ‘Tay’ — originally created for innocent chit-chat, it became a racist, if not fascist, monster spewing offensive comments in less than 24 hours! Remember, AI learns from whatever we humans throw at it. And it learns very fast. With this kind of software, powered by machine learning algorithms, becoming increasingly accessible, there is a growing risk of biased AI permeating mainstream software and thus promoting discrimination. Google, for instance, has already rolled out an open-source machine-learning library that is now free for anyone to use. It turns out that so-called algorithmic bias — AI manifesting the exact same biases as humans — may become ever more prominent in the everyday decisions humans make.
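To make the mechanism concrete, here is a minimal sketch in plain Python. Everything in it — the toy résumé dataset, the tokens, the frequency-based scoring scheme — is an invented assumption for illustration, not any real recruitment system; it simply shows how a naive model inherits bias straight from skewed historical hiring data:

```python
from collections import Counter

# Hypothetical historical hiring data: the "hired" pile skews male,
# so the gendered token ends up correlated with the positive outcome.
hired = ["male engineer python", "male engineer java", "male manager sales"]
rejected = ["female engineer python", "female engineer java"]

def train(positive, negative):
    """Score each token by how often it appears in hired vs. rejected resumes
    (Laplace-smoothed ratio, so unseen tokens don't divide by zero)."""
    pos = Counter(tok for doc in positive for tok in doc.split())
    neg = Counter(tok for doc in negative for tok in doc.split())
    vocab = set(pos) | set(neg)
    return {t: (pos[t] + 1) / (neg[t] + 1) for t in vocab}

def score(model, resume):
    """Multiply the per-token scores; unknown tokens are neutral (1.0)."""
    s = 1.0
    for tok in resume.split():
        s *= model.get(tok, 1.0)
    return s

model = train(hired, rejected)
# Two resumes identical except for the gendered token:
print(score(model, "male engineer python"))
print(score(model, "female engineer python"))
```

Because the historical “hired” pile skews male, the token “male” gets a high score and “female” a low one, so two otherwise identical résumés are ranked differently — the model has learned the recruiter’s bias, not job competence.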
Is there any way to fight this algorithmic bias? Well, it is a logical leap to suggest that since AI is fed with data, altering the data itself might help. A huge amount of data is needed to train a single AI algorithm. But where do developers turn for this filtered, unbiased data? Is it free, and who really owns it? The Facebook data-privacy scandal, and the company’s subsequent refusal to apply EU data-protection rules worldwide, only confirms that tech giants hold on to the right to dispose of users’ data at their own discretion. Isn’t that also because they are not merely monetizing data, but need it desperately to keep developing their AI? Data really is the new oil: tech giants are not giving it away, despite growing concerns and calls from the tech community to give users their personal data back. But is it even possible to gather unbiased data to train an unbiased AI in the first place?
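The idea of altering the data itself can be made concrete with one common technique: rebalancing by oversampling. The sketch below (plain Python; the dataset and group labels are invented for illustration) duplicates examples from underrepresented group/outcome combinations until every combination appears equally often, so a frequency-based model can no longer read group membership as a proxy for the outcome:

```python
from collections import defaultdict

# Hypothetical skewed training set of (gender, hired?) pairs, where
# positive outcomes are concentrated in one group.
rows = ([("male", 1)] * 8 + [("female", 1)] * 2 +
        [("male", 0)] * 2 + [("female", 0)] * 8)

def rebalance(rows):
    """Oversample each (group, label) bucket up to the size of the largest one."""
    buckets = defaultdict(list)
    for row in rows:
        buckets[row].append(row)
    target = max(len(b) for b in buckets.values())
    out = []
    for bucket in buckets.values():
        reps, rem = divmod(target, len(bucket))
        out.extend(bucket * reps + bucket[:rem])
    return out

balanced = rebalance(rows)
```

Resampling is only a partial fix, of course: it equalizes the proportions the model sees, but it cannot manufacture information the data never contained, and the bias may survive in correlated features.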
Is AI of any help when it comes to fake news?
Introducing AI to platforms whose very essence is providing authentic, unprejudiced opinions raises even more challenges. The whole point of deploying AI in opinion-driven markets is its ability to identify and exclude biased opinions. So we have something of a catch-22: technology created to overcome bias runs the risk of becoming biased itself. Take the online-review and scoring industry: review platforms like yelp.com have long been a place to share valued opinions and a driving force for businesses gathering customer feedback. But, as we all know, those platforms have lost credibility over time, since they shared a number of common problems. A new generation of scoring platforms is now emerging, geared up with AI to fight bias and fake news. But what if the AI itself becomes biased? Where do we get enough data to train AI to judge the authenticity of opinions adequately? Should developers use the existing bank of Yelp-generated reviews? In cases where AI works with vast amounts of existing data in the endless sea of online information, countering bias becomes much harder. It looks as if those aspiring to build an AI filtering mechanism first need to accumulate their own ‘quality’ data set.

Product owners and designers should be keenly aware of these risks when deploying AI in any type of system. It is the duty of machine learning engineers to come up with safer developer tools that suggest better ways to design algorithms which do not discriminate by gender, race, or other attributes. All of these concerns apply especially to industries that depend on AI as their last resort of authenticity. Adopting a high level of responsibility when creating AI, and setting unbiased AI as the ultimate goal, is absolutely essential not only for researchers, but also for those who actually bring these algorithms to the mass market — business leaders and media influencers.
Algorithmic bias can not only spread human biases but amplify them! Sadly, most humans are not yet aware of software bias and tend to trust AI judgement blindly.
Is AI capable of tackling biased opinion? was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.
Disclaimer
The views and opinions expressed in this article are solely those of the authors and do not reflect the views of Bitcoin Insider. Every investment and trading move involves risk - this is especially true for cryptocurrencies given their volatility. We strongly advise our readers to conduct their own research when making a decision.