3 Major Ethical Concerns of AI

As artificial intelligence continues to advance, it is becoming increasingly clear that this technology is not without its problems, from the potential to violate every aspect of your privacy to the manipulation of human behavior.

AI could cause unprecedented damage to our society. So join me as we uncover three major problems with this novel technology.

Violation of privacy

First off, AI's potential to infringe on our privacy threatens to undermine our fundamental right to control our personal information.

As AI becomes more prevalent, it will have access to an unprecedented amount of data about our lives, including our personal information, our online activity, and even our physical movements.

This data could be used to predict our behavior and target us with tailored content. It will inevitably be used by governments to monitor and control their citizens, or even be accessed by hackers.

Imagine a world where your every move is tracked and recorded by AI systems, and where that information is used to make decisions about your life without your knowledge.

This was evident with Clearview AI, which collected photos of individuals without their consent to power its facial recognition system and enable mass surveillance. Hell, they even supplied this technology to Canadian police forces. Pretty scary if you ask me.

Not to mention, companies could use this data to make money by selling it to third parties or by using it to build increasingly sophisticated profiles of their users.

They could also use it to manipulate our behavior, steering us towards certain products or services or even manipulating our emotions.

Recent documents show that Facebook even uses AI to predict your future actions and sells those predictions to advertisers. Ultimately, big tech will use AI at every opportunity to monetize your data.

Amplifying human biases

Another concern is the potential for AI to perpetuate and amplify existing biases. At the end of the day, AI systems are shaped by the data that is fed to them. Therefore, if that data is poorly compiled, manipulated, or undersampled, it can cause major problems.

These biases can have far-reaching and devastating consequences. This issue was on full display in 2016, when Microsoft released its Tay chatbot, which soon began responding with offensive and racist messages.

Users deliberately fed the algorithm racist content to shift the system's bias. As a result, the system became skewed and returned hateful messages. Oftentimes, if you keep feeding an AI system certain data, it will start to prioritize it.
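
To make that mechanism concrete, here is a minimal sketch of the feedback effect, written in Python with scikit-learn and fully made-up data (it has nothing to do with Microsoft's actual system): a classifier that keeps learning from user-supplied examples gradually drifts towards whatever those users feed it.

```python
# Toy sketch (synthetic data, not Microsoft's actual system): a model that keeps
# learning from user-supplied examples drifts towards whatever it is fed.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Start with balanced training data: inputs above 0 are "toxic" (1), below are "benign" (0).
X_start = rng.normal(size=(1000, 1))
y_start = (X_start[:, 0] > 0).astype(int)

model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit(X_start, y_start, classes=[0, 1])

probe = np.array([[-0.5]])  # clearly "benign" under the original rule
print("Before poisoning:", model.predict(probe)[0])  # typically 0

# Users now flood the system with one-sided examples: benign-looking inputs
# deliberately labelled as toxic.
for _ in range(50):
    X_poison = rng.normal(loc=-0.5, scale=0.2, size=(100, 1))
    y_poison = np.ones(100, dtype=int)
    model.partial_fit(X_poison, y_poison)

print("After poisoning:", model.predict(probe)[0])  # drifts towards 1
```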

This problem is also shown in facial recognition technology. This technology is often used by law enforcement and other government agencies for purposes such as identifying criminal suspects and tracking individuals in public spaces.

However, studies have consistently shown that these systems are less accurate at identifying minorities. For instance, the non-profit newsroom ProPublica found that the AI system COMPAS labeled black defendants as higher risk than their white peers.

This was despite the white defendants having criminal histories that suggested a higher probability of re-offending. In such cases, it is often because minorities are underrepresented in the training data.

Ultimately, if an AI system has less data about minorities, its results will likely be skewed and it will more often return incorrect predictions for those groups.
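
Here is a minimal sketch of that underrepresentation effect, again with scikit-learn and synthetic data rather than any real criminal-justice dataset: a model trained on far more examples from one group ends up less accurate for the group it rarely saw.

```python
# Toy sketch (synthetic data, assumed group differences): a model trained on
# far more data from one group performs worse on the underrepresented group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, weight):
    # Two features; the true decision rule differs slightly between groups,
    # so a single model cannot fit both equally well.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + weight * X[:, 1] > 0).astype(int)
    return X, y

# Majority group: 5,000 training examples. Minority group: only 100.
X_maj, y_maj = make_group(5000, weight=0.2)
X_min, y_min = make_group(100, weight=-1.0)

model = LogisticRegression()
model.fit(np.vstack([X_maj, X_min]), np.concatenate([y_maj, y_min]))

# Test on fresh samples from each group.
X_maj_test, y_maj_test = make_group(1000, weight=0.2)
X_min_test, y_min_test = make_group(1000, weight=-1.0)

print("Majority-group accuracy:", round(model.score(X_maj_test, y_maj_test), 3))
print("Minority-group accuracy:", round(model.score(X_min_test, y_min_test), 3))
# The minority-group accuracy comes out noticeably lower, because the learned
# parameters are dominated by the majority group's patterns.
```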

The black box problem

Finally, there is the concern of the “black box” problem, which refers to the fact that many AI systems operate in a way that is difficult for humans to understand.

In most cases, we know the input to an AI algorithm, such as a photo of a church, and we know the output, such as the label “Church” for that photo. But we have no clue how the system turned that input into that output.
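
As a rough illustration, here is a tiny sketch using scikit-learn's built-in digits dataset (an assumed stand-in, not a real-world system): we can inspect the input and the output, but the closest thing to an explanation is thousands of raw numbers.

```python
# Toy sketch (scikit-learn's built-in digits dataset, assumed for illustration):
# we can see the input and the output, but the "reasoning" in between is just
# thousands of learned numbers with no human-readable meaning.
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

digits = load_digits()
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(digits.data[:-10], digits.target[:-10])

sample = digits.data[-1:]                             # the input: an 8x8 image of a digit
print("Predicted label:", model.predict(sample)[0])   # the output: a single digit

# The closest thing to an "explanation" is the raw weight matrices.
total_weights = sum(w.size for w in model.coefs_)
print("Learned weights:", total_weights)              # several thousand floats
print("First few of them:", model.coefs_[0].flatten()[:5])
```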

This lack of transparency can make it nearly impossible to hold these systems accountable for their actions and can undermine trust in the technology.

Imagine a world where the machines that are increasingly shaping our lives are making decisions that we do not understand and cannot challenge. This is a world where we have lost control, where we are at the mercy of the machines that are supposed to serve us.

This is a world where anything could happen, and where the consequences of those actions could be catastrophic.

If you want to view this article in a more visual format, you can check out my website.

Final thoughts

Overall, the ethical concerns of AI are complex, and addressing them will require a concerted effort from researchers, policymakers, and society as a whole. Thanks for reading.

Want to know why AI will make society rich? Click here to read my previous article.