It’s really hard to open email, social media, or really anything today without being inundated with AI. “AI” is, of course, the shorthand of the moment for Large Language Models (LLMs). Despite widespread reporting on their impact on everything from media to the human brain, many people are still blissfully unaware of a lot of what is happening in the AI sphere and don’t know how detrimental this tech, in its current form, actually is.
It’s hard to imagine that just a few years ago this technology was in limited use; we had seen dribs and drabs of what would eventually become full-fledged LLMs like Gemini or ChatGPT in forms like Google Labs’ Between the Lines or even Midjourney. These tools were either not open for public use or had specific use cases, rather than the all-out AI integration we see today. As someone who never shies away from at least trying a new technology, I tested DALL-E early on to see how well it could generate things like my artwork from my alternative text (essentially, testing my alternative text), as well as the reverse: how well it could generate alt text from an image. At the time, the results were disastrous and laughable at best.
Nowadays, the technology itself is improving. Where once we could immediately tell that AI was used to generate an image by the number of fingers and teeth a human being had in it, changes to the algorithms have made a lot of AI art and writing nearly indistinguishable from the work of an amateur artist or a clumsy writer. Many of us can still tell, but I don’t imagine that will last for long.
Many of you already know what I’m about to say, but this post is for the people who don’t – who have somehow skirted past all of the red flags about AI and its usage, and the way the technology is being force-fed to us en masse. So, let’s get into it, and I’ll link as many sources as I can so you can see exactly what I mean. Here’s why, long form, you should think twice about utilizing AI:
#1 Non-consensual data training (without any compensation)
One annoyingly selfish billionaire is known for a specific quote: “Move fast and break things.” Such is the case with LLM/AI data training. These chatbots, assistants, and image generators did not come out of nothing. They are not simply cleverly programmed applications with a spiraling conversational map, like the chatbots of yore. Instead, Large Language Models – the AI of today – rely upon training data to complete requests the way they do now. Consider it like this: if you want to write a horror book, you may have some general idea, after reading 100 of them, of how to craft one. Some people compare this to human learning, but that is an incorrect comparison. LLMs break the training content down into data and recognize patterns in it. If I write 20 books and constantly like to use the phrase “a darkness as black as pitch,” the LLM may come to recognize that I write phrases like that. By recognizing these patterns, it can create “in the style of” likenesses for plenty of things, as the toy sketch below illustrates.
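Here’s that pattern-counting idea as a toy sketch in Python. To be clear, this is a deliberate oversimplification – real LLMs learn statistical weights over tokens rather than tallying literal phrases – and every string in it is invented for illustration:

```python
from collections import Counter

def ngram_counts(text: str, n: int = 5) -> Counter:
    """Count every run of n consecutive words in the text."""
    words = text.lower().split()
    return Counter(
        " ".join(words[i : i + n]) for i in range(len(words) - n + 1)
    )

# Pretend these are passages scraped from three books by the same author.
corpus = " ".join([
    "the hallway held a darkness as black as pitch and no sound",
    "outside the window lay a darkness as black as pitch tonight",
    "she stepped into a darkness as black as pitch once more",
])

for phrase, count in ngram_counts(corpus).most_common(3):
    print(f"{count}x  {phrase}")
# "a darkness as black as pitch" surfaces as the author's signature pattern,
# which is exactly the kind of regularity a model can then imitate.
```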
This sounds really interesting and fun, but… No one gave consent to use their stuff for training.
The LAION-5B data set, scraped from across the entire internet and widely used for training, contained an unheard-of multitude of “content”: copyrighted movies, artworks and books, photos (including family photos you may have posted on Flickr, Imgur, Facebook, or elsewhere!), websites, internet posts on forums, and everything else in between – basically, all of the internet – all without permission or consent.
Googlebot, the search engine crawler that indexes web pages so they can be found when someone searches on Google, exhibits much the same behavior, yet the view of it is largely positive: the data Googlebot collects builds an index that links searchers back to your long-form content. We can see why this would be useful for many people, professional or not – “getting found on Google” is a huge business. The problem with AI training is folding all of that data into the service itself and repackaging it as something created uniquely by the service.
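For site owners wondering what that distinction looks like in practice: crawlers are conventionally asked to behave via a robots.txt file. Here’s a minimal sketch – GPTBot, Google-Extended, and CCBot are publicly documented AI-related user agents, but honoring this file is entirely voluntary on the crawler’s part:

```
# robots.txt – allow traditional search indexing, refuse AI training crawlers.
# Compliance is voluntary: this file is a request, not an enforcement mechanism.

User-agent: Googlebot
Allow: /

# OpenAI's training-data crawler
User-agent: GPTBot
Disallow: /

# Google's opt-out token for AI (Gemini) training, separate from Search
User-agent: Google-Extended
Disallow: /

# Common Crawl, a major source of LLM training corpora
User-agent: CCBot
Disallow: /
```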
It is not incorrect for me to say that millions of working artists and writers have had their entire body of work non-consensually added to the LAION-5B database to then later be used to displace them in their own industries. Artists and writers are in fact losing jobs to companies whose goal is to cut costs. Where once you had to hire an artist to produce artwork for your new cool TTRPG, you can now have ChatGPT make all your art for free* instead, and the average person not in the know won’t even pause long enough to know the difference.
On a personal note: over my years of seeking out an audience on social media and the web as a whole, hundreds of my artworks have been scraped. The same goes for all of my artist friends, and I am watching my professional peers grapple with contract and job loss as their work in gaming, trading card games, mural design, book illustration, book and ghost writing, editing, and so much more ends up in the hands of AI. This is not a hypothetical – it is happening to me and to people I know firsthand.
The short of it is: we didn’t opt into training these models, and they can use whatever data they collect from our assets to create non-copyrightable works forever and ever. AI companies like OpenAI can charge subscription fees for services whose training sets include our entire life’s work. The average person can then charge someone else for output that utilizes our works in some capacity.
Here are some things to know: There is a massive class-action suit against Midjourney, Stability AI, and DeviantArt from artists like Sarah Andersen, Kelly McKernan, Karla Ortiz, Jingna Zhang, Adam Ellis, and more. Disney and Universal have also launched a lawsuit against Midjourney. A judge recently ruled against authors in a class action against Meta over its unauthorized training on 13 authors’ books (though this could be appealed). New data protection tools like Glaze (style protection) and Nightshade (data poisoning) aim to protect artists and their works against unauthorized use in LLMs. Cloudflare recently released firewall protection aiming to block AI crawlers by default. All of these are band-aids on a selfish “move fast and break things” mentality – really just “ask for forgiveness instead of permission,” except no forgiveness is being asked, as you’ll see as we head deeper into this post.
#2 Privacy Concerns
Despite everything, Meta has still enjoyed cramming AI into everything on its platforms. It is worth taking a moment to note that none of the information being fed to its AI is private. Two weeks ago it came out that Meta will allow its AI to train on private images from your camera roll… Meaning, no, not just images uploaded to its services – private images and files you may have on your phone.
This comes on the heels of the sudden revelation that “private” conversations with Meta’s AI can be discovered by just about anyone. So, if you asked about a weird wart and sent Meta AI a photo of it, guess where that might end up?
Basically, as TechCrunch puts it, the Meta AI app is a privacy disaster. But it’s not the only one with privacy concerns. It is easy to guess that any query to ChatGPT or any other service may become part of a future training set, as conversations are “remembered” by services like ChatGPT. Whether or not OpenAI chooses to disclose your queries in some form is entirely up to them – barring a data breach, of course.
That means that anything you “say” or upload to AI could end up anywhere at all, including personally identifying information, photos, and videos of you and all of your family members.
#3 The “dead internet” theory sprung to life
There is a theory that the internet is actually not populated by people, but at some point tipped into being populated more by bots than by actual human beings. We have seen this increase significantly in the past few years as AI bots are unleashed across social media and used as “digital assistants.”
Zuck said recently that there is a “loneliness epidemic” and that the solution for this was to… make people speak to bots? He said that the average American has about three actual friends but the capacity for about fifteen, and that he can solve this by filling that hole with fake people.
Meta was recently in hot water after being outed for creating AI-generated profiles that attempted to meaningfully interact with real people on the site. We’re going to see a lot more of this. We’re already seeing job applicants having their resumes fed to AI (and applicants making their resumes with AI all the same), being forced to interview with AI (wow, how dystopian), AI being used to monitor Zoom meetings, and employees having to deal with a boss’ AI “assistant” instead of the boss themselves… There’s a point where we will have bots talking to bots exclusively… What’s the human for, then?
We see this a lot in the writing space when “writers” come in and ask if it’s okay to use AI to write a book: “if you didn’t take the time to write it, why should I take the time to read it?” The internet is already full of AI-generated art and writing slop, social media accounts and posts where a human is not involved in the slightest, and chatbot agents that answer every conceivable question instead of a live human being working at that company. That last point might sound great, but it actually leads us into our next point…
#4 Data “hallucination”
Despite the AI moniker, AI is not true Artificial Intelligence. It is more akin to an auto-complete service, meaning: it tells you what you want to hear. One byproduct of this is that AI will absolutely – without anyone (even the programmers) knowing why – “hallucinate” factually incorrect information at any point in time, and if you have no way of fact-checking it, you will never know.
AI has hallucinated facts about the contents of articles, made up entirely fake people, come up with completely fabricated books, and a lot more than that. It presents information so confidently, even when it is outright wrong, that anyone would be inclined to believe it.
Take the case of a person asking ChatGPT how many “R”s are in the word “strawberry.” We know there are three Rs. ChatGPT? It insisted there were two. Perhaps that’s been patched out, perhaps not. Who knows for sure?
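For contrast, here’s the boring, deterministic way to answer that question, plus a hint at why LLMs stumble on it: they see text as chunked “tokens,” not individual letters. (The token split shown is illustrative only – actual splits vary by model.)

```python
word = "strawberry"
print(word.count("r"))  # 3 – a one-line, always-correct answer

# One plausible reason LLMs flub this: they don't see letters at all.
# A tokenizer might split the word into chunks like these (illustrative
# only; real splits differ from model to model):
tokens = ["str", "aw", "berry"]
# The model predicts text from token patterns, so "how many r's?" asks
# about letters it never directly observed.
```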
How about Gemini’s AI Overviews telling people to put glue on pizza and eat rocks? An isolated incident, says Google. But how many more could there be?
Or maybe the Chicago Sun-Times rolling out an AI-generated summer reading list where only 5 of the 15 titles on the list actually existed?
What this means is that AI sends users information that may or may not be real, with no cited sources for the user to determine for themselves whether or not to believe it. These AI assistants are now everywhere – including when you search on Meta or Google – aiming to be the first to provide you an answer, whether that answer is correct or not. Users turn to ChatGPT for guidance in all things, as is suggested to them, which causes even more of an issue, as you will see in my next point.
P.S. AI hallucinations have been part of LLMs since the beginning, and it’s not getting patched out. In fact, it’s getting worse.
P.P.S. Sci-fi enthusiasts are extremely irate at this off-brand use of the term “AI” – we were promised something smart like Roy Batty, David, Skynet, or Connor the Negotiator, not… Whatever this is.
#5 Exclusive AI support results in mental collapse
Users are turning to places like Meta AI and ChatGPT for advice and friendship instead of real, live people. AI is always on, always available, and always ready to answer you – what isn’t there to like? But AI does not have emotions and cannot determine when an individual speaking to it is suffering a mental break. An advanced auto-completion algorithm is quick to eagerly urge you on and tell you what you want to hear. Thus comes “ChatGPT-induced psychosis,” a new term coined by Redditors. Humans relying on ChatGPT to guide them are suffering delusions and destroying real-life relationships.
Here are a few shocking articles about this:
- People Are Becoming Obsessed with ChatGPT and Spiraling Into Severe Delusions
- He Had a Mental Breakdown Talking to ChatGPT. Then Police Killed Him
- ChatGPT Linked to Delusions, Self-Harm, and Escalating Mental Health Crises
- Generative AI runs on gambling addiction — just one more prompt, bro!
On top of this, users who constantly use AI to complete tasks are losing the ability to complete those tasks for themselves. As we have been told all our lives, the brain is a use-it-or-lose-it organ – a muscle, so to speak. You must flex it to keep it working and producing reliably. If you outsource all of your thinking – including recreational tasks like creative work – to AI, you may lose the ability to complete those tasks on your own. Asking AI to do all your coding for you sounds great in theory, but relying on it solely means you may find coding quite hazy when you go to do it yourself.
This is so pervasive that AI use has been found to damage your professional reputation – basically, your coworkers might think you’re not up to snuff, whether they think you’re lazy, inept, or both. Meanwhile, businesses get comfortable with the speed of production (enjoying a quick turnaround), which means expectations are high for workers to produce quickly, while the quality of the work being completed (by AI) is sloppy at best. Developers struggle to maintain code that was never authored by an actual human being. The intersection of “good” and “fast” is no more; instead we’ve gone fully into “fast” and “cheap” territory.

Lastly, this has extremely detrimental effects on children, who have yet to learn these skills to begin with. Schoolchildren are using ChatGPT to complete research, generate entire essays, and “read”/dissect books (forget SparkNotes!), and teachers are left to deal with all of it. The purpose of classroom and homework assignments is to build critical thinking and real-world skills in adolescents, including how to consume media, how to separate fact from fiction, and other necessary skills. If children can use ChatGPT (or any other AI) to bypass all of this, what are teachers able to teach them? The answer is nothing.
Teachers are also now at a critical point where they have to determine (which increases work for our already underpaid and overworked teachers) whether or not a child used AI to complete a piece of classwork. Then, if they determine that the child did, they need the backing of their administration when it comes to failing that assignment, given that some parents may choose to sue.
Of course, there’s an AI “solution” to this – AI companies are quick to produce AI tools to solve it. So, the teacher puts the child’s AI assignment into an AI to grade it, and… no one has learned anything. Wonderful!
#6 Exacerbation of Fake News
One thing AI does exceedingly well is create many variations of something, very quickly. AI can certainly create 200 social media posts from the same general idea for you to post – which means it can also create all kinds of false, generated things. From heartwarming AI-generated videos about abandoned animals finding homes (you may ask: what’s the harm?) all the way to deepfake celebrity endorsements and nudes, this creates a real issue:
Telling fact from fiction.
While some images simply dupe people who should know better – No, Auntie Mallie, baby peacocks do not look like cartoon characters or tiny sculptures – it is getting harder and harder to tell real photos and videos from fake ones.
AI-generated content is like a hydra: chop off one head and three take its place, simply because it’s so easy to request more. AI has taken over visual, ideas-based platforms like Pinterest and Google Image Search, making a lot of those places pretty near unusable. You can no longer use Pinterest to source anatomically correct pose reference photos for artwork. Hairdressers and dressmakers are being presented with images of things that cannot actually be done in real life. Etsy is overrun with AI-generated crochet and knitting patterns that cannot be completed.
We saw this in full force when people shared a fictional image of a little girl clutching a water-soaked puppy during Hurricane Helene, stirring up controversy and touching the hearts of many. You may be thinking “so what?”, but this is a real issue. Many people cannot see the hallmark, tell-tale markers of AI (or don’t pause long enough on images before hitting “share”), and these images, as they continue to gain lifelike qualities, can be used to stoke outrage, fear, and other emotions that can do things like… sway elections. We saw deepfakes emerge throughout recent elections, but they are about to get a lot worse as the technology improves to the point where no content on the internet can be 100% trusted, no matter who posted it.
This problem is twofold: fake AI-generated content also causes real news to be cast into doubt. Legitimate news outlets with years of experience reporting on actual real-world events have already been “got” by AI forgeries. It’s really hard to keep up, even for professionals.
#7 Environmental destruction
One big problem with AI is the computing power necessary to keep producing results. As with NFTs and crypto mining, AI seems to be the next “big thing” in computing, and many of those same data centers can easily be shifted to processing AI requests instead. What happens is AI slurping up water from areas that desperately need it (to cool the hot-running computers, usually), data centers opening up and belching pollutants into small towns, and energy reserves being stretched thin in places like Texas, whose grid is pretty consistently on the verge of collapse (though, so far, ERCOT has mercifully been able to keep up). While residents are asked to be conservative with their air conditioners (to keep demand low), AI data centers owned by billionaires are free to use as much energy from those same grids as they want.
It gets worse: poorer communities (usually poorer Black communities) are disproportionately the ones most targeted by the building of new data centers, making those already struggling communities struggle even harder.
The Washington Post compares each AI query to emptying a bottle of water out onto the ground, though others say the true water-usage footprint is harder to nail down. The truth of the matter is that even if you never use AI yourself, things like Google’s AI Overviews (which you can’t opt out of) and Meta AI (which you can’t opt out of) mean energy is now being burned on queries you never asked for and probably aren’t even utilizing. The luster has faded on many AI tools, including AI art, and the average person – despite Google trying to press you into queries about lying to your friends and family about how much you know about football – probably isn’t finding the usefulness in these tools that companies are banking on. The facts don’t lie, though. According to MIT Technology Review, the carbon intensity of electricity used by data centers was 48% higher than the US average in 2024. 4.4% of all the energy in the US goes to data centers, and at the rate AI is being utilized, we could see that number grow to as much as 22%.
If you haven’t clicked that last link and thumbed through, you should. It gives extrapolations for general estimates for chat queries, LLM-generated images, LLM-generated videos, and also how much energy was used to train said LLMs – it’s really in depth and fascinating, to say the least.
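To put those percentages in rough perspective, here’s some back-of-the-envelope arithmetic – and note that the roughly 4,000 TWh/year figure for total US electricity is my own assumption for scale, not a number from the article:

```python
# Back-of-the-envelope scale check. ASSUMPTION: total US electricity
# consumption of roughly 4,000 TWh/year; the percentages are the ones
# cited above from MIT Technology Review.
us_electricity_twh = 4_000  # assumed round figure, for illustration only

for share in (0.044, 0.22):  # today's ~4.4% vs. the projected up-to-22%
    print(f"{share:.1%} of US electricity ≈ {us_electricity_twh * share:,.0f} TWh/year")
# Roughly 176 TWh today vs. roughly 880 TWh in the projected worst case.
```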
#8 Too big to fail?
This brings us to a new issue: billion-dollar corporations investing billions (or perhaps trillions, collectively) of dollars into a technology that is ruining the internet, our shared reality, our creative pursuits, our money-making abilities, and our professional accomplishments, and permanently damaging our minds. AI was hastily implemented into search engines like Bing and Google at the first sign of ChatGPT’s release, and now those implementations and investments need to generate value.
Google has inserted its AI tool Gemini, without clear opt-outs, into Google Workspace accounts, Gmail, Search, and many other places – sometimes raising product prices for features you don’t want. Meta has replaced its search bar entirely with Meta AI, even for basic requests like trying to find a friend’s profile. Microsoft’s Copilot shows up in Windows, lurking over your shoulder like the ghost of Clippy. These tech giants have made a sizable investment in AI, and you are going to use it, whether you want to or not.
Sadly, it seems, only about 3% of AI users are actually willing to pay for it.
As this article states: “OpenAI is unable to make money on $200 subscriptions to ChatGPT. Goldman Sachs cannot see any justification for its level of investment. Sam Altman is subject to allegations of sexually abusing his sister. ‘Slop’ was very nearly word of the year. And then, to top it all off, the open-source DeepSeek project, developed in China, wiped $1 trillion off the US stock market overnight.
In other words, the AI industry now finds that it needs all the allies it can get. And it can’t afford to be picky…The thinking seems to be that, if it can hang on long enough in the public consciousness, then, like cryptocurrency before it, AI will become ‘too big to fail’.”
#9 An Unending Appetite
On top of the non-consensual use of data, which is growing with striking and terrifying quickness, AI has a constant need for new content. AI-generated content, when fed back into an LLM, actually causes model collapse, which means new content for the LLM must be genuine and human-created. It is unknown whether the rate of human-generated content is keeping up with the demands of LLMs hoovering it all up, but if it isn’t, and AI-generated content becomes more prevalent in training sets, we will see even more inaccuracies – what some call Garbage In/Garbage Out (GIGO), a term for flawed data producing flawed results.
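Here’s a crude toy sketch of why recursive training degrades a model. Real model collapse concerns distributions over text, not numbers, and the tail-trimming below is my stand-in for a model under-representing rare data, but the shrinking-variety effect is the same idea:

```python
import random
import statistics

random.seed(42)

# Generation 0: "human" data with real variety.
data = [random.gauss(0.0, 10.0) for _ in range(10_000)]

for generation in range(1, 7):
    # "Train" a model: here it only learns the data's mean and spread.
    mu, sigma = statistics.mean(data), statistics.stdev(data)
    # The model generates synthetic data, but (like real models) it
    # under-represents rare cases: we trim 5% off each extreme tail.
    sample = sorted(random.gauss(mu, sigma) for _ in range(10_000))
    cut = len(sample) // 20
    data = sample[cut:-cut]
    print(f"generation {generation}: spread = {statistics.stdev(data):.2f}")
# The spread shrinks every round – variety drains out of the "model",
# the toy analogue of model collapse.
```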
You can see clearly now why Meta and Google are so quick to want access to all of your private content. What is posted publicly on the internet may not be enough for these massive algorithms to keep up what little quality and usefulness they have for long.
#10 Bias & Cruelty
Any AI is only as good as the data you train it on, and man, human beings sure are biased. People post things all the time on social media, and a lot of it is biased and emotionally charged. AI can’t tell the difference. It can’t infer full intent, or tell if someone has made a joke account and is posting sarcastically. We, as human beings, can determine through context clues, recent events, and more when this is the case; AI cannot.
The fact is: AI can’t tell the difference between a lie and a truth, and if lied to enough times, it will assume the lie to be the actual truth. And AI is only as good as the data delivered to it.
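Here’s a minimal sketch of that “repeat a lie often enough” failure mode – a crude stand-in, since real models learn soft probabilities rather than doing lookups, but majority patterns in training data dominate in much the same way:

```python
from collections import Counter

# Training "facts" scraped from the internet – the lie outnumbers the truth.
scraped = ["the earth is flat"] * 700 + ["the earth is round"] * 300

def toy_model_answer(prompt_prefix: str, corpus: list[str]) -> str:
    """Return the most common completion seen in training – no notion
    of truth, only of frequency."""
    completions = [s for s in corpus if s.startswith(prompt_prefix)]
    return Counter(completions).most_common(1)[0][0]

print(toy_model_answer("the earth is", scraped))
# -> "the earth is flat": repeated often enough, the lie becomes the
#    statistically "correct" answer.
```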
The following articles are about general programming bias and biases in AI-based algorithms. Imagine this spreading across everything from physical health care to mental health care, schooling, social programs, financial opportunities, employment opportunities, sustenance rationing, and more. If you’re familiar with any bit of dystopian sci-fi, it’s easy to make the logical leap to a Master Control Program deciding whether or not you’re going to the games.
- Google apologises for Photos app’s racist blunder – Google Photos’ identification algorithm couldn’t tell the difference between gorillas and Black people and began labeling Black folks as gorillas – a mistake that still isn’t fixed.
- The “inconvenient truth” about AI in healthcare
- AI Algorithms Used in Healthcare Can Perpetuate Bias
- Algorithmic Bias in Health Care Exacerbates Social Inequities—How to Prevent It
- Cedars-Sinai Study Shows Racial Bias in AI-Generated Treatment Regimens for Psychiatric Patients
In addition to the bias that comes when imperfect, biased human beings code these technologies, there’s also a certain level of cruelty made possible by a roving mixture of bias, opinion, and the average user who wants to do bad things for the sake of it (some call it “trolling”). Celebrities discover deepfake nudes and prompts in which they are being sexually assaulted, and it’s not just celebrities who have to deal with this technology “stripping” them. CSAM is being created at an alarming rate via LLMs. AI/LLMs are being used to disguise horribly violent videos (“Minion Gore”) so they can be posted to YouTube, Instagram, and TikTok. No, really – it’s incredibly messed up that some users are replacing real humans in real violent videos with Minions to evade content-matching algorithms.
Okay, maybe I can understand getting so angry at an ex that you put their photos through AI to create indecent or questionable images of them (no, I really can’t), but using AI-powered voice cloning to scam elderly family members out of their hard-earned money? There’s a level of cruelty here that can’t be overstated. AI facilitates that cruelty by giving those most likely to be cruel more avenues to act on it.
#11 Fascism
This leads right into our next point. When someone else owns the means of production and can modify the output, what does that mean for you?
That means that everything that comes may be tinged in ways you may not even know. You may be posting propaganda. No one is immune to propaganda.
The soul-destroying, mind-stealing, creativity-sucking, environment-burning AI/LLM machine is wrought with problems from end to end, but the capstone problem is this: fascism thrives on controlling the message and destroying the means and ability to create any other message. So-called “AI art” is not a tool to “democratize art” – it’s a way to remove the desire or will to create altogether. Why go the hard route of learning pesky things like shading and anatomy when you can simply type a few words into an LLM and get something far beyond your skill level? “Starving artists” are already common; no one cares about art anymore, and few people even pay for it, never mind a fair wage – so why would anyone willingly take up that mantle of hardship and difficult work? It almost makes picking up a pencil seem not even worthwhile anymore.
It definitely devalues artwork. Why deal with lowly artists and actually paying an artist for a skilled task when AI can do it “for free”?
There’s just so much of it, and the slop can be created by the hundreds if not thousands, quickly. This allows a scattershot of bad takes, misogyny, racism, fascism, xenophobia, and more to spread across social media, news sites, and the internet as a whole. Like that old saying about spaghetti being thrown at a wall to see what sticks. And there sure is a lot of spaghetti flying through the air right now.
As this article puts it: “For relatively small groups like Britain First, hiring a full-time graphic designer to keep up with its insatiable lust for images of crying soldiers and leering foreigners would clearly be an unjustifiable expense. But surely world leaders, capable of marshalling vast state resources, could afford at the very least to get someone from Fiverr? Then again, why would they do even that, when they could simply use AI, and thus signal to their base their utter contempt for labour?
For its right wing adherents, the absence of humans is a feature, not a bug, of AI art. Where mechanically-produced art used to draw attention to its artificiality – think the mass-produced modernism of the Bauhaus (which the Nazis repressed and the AfD have condemned), or the music of Kraftwerk – AI art pretends to realism. It can produce art the way right wingers like it: Thomas Kinkade paintings, soulless Dreamworks 3D cartoons, depthless imagery that yields only the reading that its creator intended. And, vitally, it can do so without the need for artists.”
Not only does the fascism machine attempt to control you by removing your desire and ability to create – seizing the means of creation – it also allows the owners of said machine to tweak the output to whatever they desire. Such is the case with AIs like Grok, the “free speech,” “unbiased” LLM chatbot haunting Twitter. You may recall in May when Musk “tweaked” Grok’s answers and the bot started inserting claims about “white genocide” into every answer, whether the question was about the topic or not. Garbage in, garbage out; fascism in, fascism out. At the end of June, Musk publicly lambasted his own bot for delivering points he personally believed to be incorrect (which were actually factually correct, just considered “left-leaning”) and vowed yet again to retool Grok to deliver only “correct answers” (as in, answers aligned with Musk’s views). Edit for July 8th, 2025: Musk adjusted Grok again, and it is currently praising Hitler and promoting antisemitic views on Twitter.
It is interesting how quickly the far right has come to embrace this technology as its mouthpiece, isn’t it? Often, art is used as a tool to unify people, spread a message, and develop human connection. You may have heard “beware of artists; they mix with all classes of society and are therefore the most dangerous” – the actual quote being from a letter to Queen Victoria in 1845: “The dealings with artists, for instance, require great prudence; they are acquainted with all classes of society, and for that very reason dangerous” – and it still rings true. It stands to reason that squashing creation would be important to any regime. Producing only approved artworks, and not the avant-garde creations that might make someone think to break ranks, would be a necessity if you were someone like Hitler (the Nazis destroyed countless artworks and burned thousands of books, setting us back decades in research and combined human knowledge) or, if we want to think fictionally, someone like Emperor Palpatine.
Conclusion
We seem to be living in a world that is dead set on making us question our own reality. Whether you choose to use AI as a tool, in full, or not at all, you will need to make some decisions about your intake of the things around you. It is difficult to live entirely AI-free in 2025. It is OK if you can’t help it when Copilot or Gemini invades your device in the next software update and you still want to use your Windows machine or Android phone. It’s fine if Google keeps making AI Overviews bigger and more prevalent at the top of every search query and you still use Google. It is hard to avoid AI outright (and even if you do, some people may call you a “luddite” for it), but it’s important to determine where your line is. Many of us are holding the line by simply not using the tools wherever possible, not putting our hard-earned money into products developed with AI, closing accounts on platforms that prioritize AI content above all else, and turning away from services that raise subscription fees for unwanted, unneeded AI tools. We unify and demand action from our governments to regulate these destructive technologies, and we urge you to stand beside us in the fray.
People might tell you “AI is here to stay – get used to it!” but ignore them; where you draw your own personal line on AI and its usage is ultimately up to you.
