    The Bite Back: Five reasons you should limit your AI use

    We live in a world where people turn to AI for everything. Need a pithy headline? AI. Need to write a wedding toast? AI. Need to diagnose a health issue, design a logo, or make your child’s science project? AI again.

    But in our rush to automate life, many of us are ignoring the fact that this technology is silently reshaping how we think, create, work, and manage our well-being. And not always for the better. In fact, I’d argue that it’s often for the worse.

    Look, AI has its place. Especially traditional AI and Machine Learning (ML) technologies. It can sift through massive data sets to spot security threats. It can transcribe interviews flawlessly, identifying speaker names and pulling soundbites aligned to a specified theme. It can help people with disabilities communicate more easily. But handing over every last cognitive task to a machine comes with consequences that reach far beyond ‘enhanced productivity.’

    Here are five reasons we need to limit AI use. Not because we should hate and eradicate all AI technologies, but because we should still value what it means to be human.

    1. AI is biased, just like humans are biased.

    AI is trained to act like a human, but humans are biased. We always have been. When that bias sinks into training data, AI learns it too. It replicates it, then reinforces it. And because we’ve spent the last decade relying on technology, particularly our smartphones, for nearly all the information we consume, we often treat whatever comes up in a Google search as the truth without much fact-checking (unless you’re a trained journalist or a huge research nerd like me).

    That means AI is fully capable of validating harmful stereotypes, spreading misinformation, and amplifying racism, antisemitism, misogyny, and other hate. It does not understand the context behind the prompts and information we feed it. It does not understand morality or ethics. It understands probability. So, when AI is treated like an all-knowing authority instead of a tool, it becomes dangerous.

    When we tell our friends or coworkers to ‘just use AI for that,’ or validate claims by saying ‘well, AI told me this,’ we’re treating AI as the be-all and end-all. In reality, it is only predicting the next most likely response that will satisfy the user, mirroring our collective noise and consciousness. It has no grasp of logical facts or emotional intelligence, let alone a moral compass.

    Read the full blog on the Bite Back Substack.