-AI (Minus AI)

A few years ago, a man went on a trip, faithfully following the navigation system’s instructions. After a few curves and recalculations, the device happily announced: “You have reached your destination.” The problem? He stared at a huge expanse of water – as if the navigation system wanted to say: “Come on, dive in!”

The actual destination was miles away. In the end, he gave up on the technology and found his way with a good old paper map.

AI behaves similarly when it makes a mistake. It does not hesitate. It doesn’t say, “Hmm, I’m not sure about that.” It gives you an answer with absolute certainty, as if it could not possibly be wrong, even when it is.

That’s the hard part: the tone, the formulation, the polish, everything seems trustworthy. But just because something sounds right doesn’t mean it is right. Sounding trustworthy is not the same as being right, and with AI, we have to keep that in mind.

So why does AI still make mistakes despite its computing power?

Usually, it boils down to one of three culprits. Think of them as the “usual suspects” in any AI failure.

1. Data bias

AI is trained on large amounts of data. This helps to identify patterns, make predictions, and provide answers based on what it has learned. The quality, variety, and balance of this data shape the lens through which AI sees the world. It’s like an old saying from my family: “Until a child visits the farmland of his friend’s father, he will always think his father has the largest farmland in the city.”

The situation is similar with AI: if it is not trained on diverse and balanced data, it sees the world in a limited or even distorted way. It may sound intelligent and confident, but it might miss the bigger picture, which can lead to unfair or inaccurate answers.

Imagine an AI trained to screen applicants that has seen predominantly male applicants in the past. There is a danger that it will unconsciously prefer male candidates. (Oh, goodness, that’s like in real life, my bad.)

2. Poor or incomplete data

AI is only as intelligent as the data it is fed. If the data is outdated, incorrect, or incomplete, the results will be inaccurate too. It’s the classic “garbage in, garbage out” problem.

A few months ago, someone developed an AI-powered global alert system that sends updates to a Telegram channel. Originally, the model referred to Donald Trump as a “former US president”. That would have been true some time ago, but not today. Either the model was trained on data that was once correct but is now outdated, or the knowledge base used to improve its responses was outdated. When I switched to another model with updated training data and an updated knowledge base, the problem was resolved.

The same is happening in other areas. A travel chatbot might suggest a great little café to you—only to find out that it closed three years ago. Again, the AI is not to blame. It simply works with old information.

3. Misinterpretation (or “hallucination”)

If the AI doesn’t have the right information, it doesn’t say “I don’t know” – it just makes something up and says it confidently. (A lot like social media?)

If you are preparing for a speech and ask ChatGPT for some inspiring, well-known quotes on the subject, you will most likely get a handful… and they will sound great. The only problem? Some of them will be fictitious.

Fortunately, I have learned to always double-check everything. But imagine if someone stood on stage and quoted someone who never said those words. The AI does not warn you of such slip-ups. The same goes for the internet in all its glory; quotes in particular are so often mistakenly (or purposely) credited to the wrong person.

In 2023, a lawyer used ChatGPT for legal research. The chatbot invented fake case citations, and I imagine they were convincing enough for the lawyer to submit them in court. I’m sure you already know how the whole thing ended.

I used AI to modify pictures I had taken, mostly of furniture projects I had completed for customers and felt proud of. I have a small display area in my workroom, and even a backdrop for taking the perfect picture, but setting it all up is too time-consuming. When I took a photo of a chair I had restored and hoped to just ‘clean up’ the messy background, AI did just that. Voilà, the result was the perfect photograph for my portfolio. It was my work, my workroom, just a different background.

When I wanted to share a picture of myself, but didn’t want to show the certificate with my full name in the background, I also altered the wall. In both instances, I asked a software program if AI had been used, and both times I got the answer that this was an AI picture. See, that’s where it’s wrong. It doesn’t know what has been altered, so it assumes it’s all artificial and not real. According to AI, I am not real, nor is the chair.

Then I got more curious and gave my book manuscript to AI. I wanted to know the verdict. Goodness, it answered all my questions. AI gave me two pages of answers about my writing, the tone, the people, the feel, and the emotions. It was an interesting read. Funny enough, according to AI, there were no mistakes in the manuscript, but that’s where it’s wrong. Two punctuation errors were overlooked and can be found, guess where? In my book! 🙂 AI could not predict the outcome, nor could it explain a few paragraphs when asked about the relationships.

AI thinks my book is flawless, bless its little artificial … (heart?)

When you type in a few pages of a famous literary classic, for example, “A Tale of Two Cities,” especially the beginning of the book, which includes the famous line “It was the best of times, it was the worst of times,” you will learn that, according to artificial intelligence, the first paragraphs are simply not written correctly. There are too many mistakes: grammar, punctuation, and the length of the sentences. AI would butcher this famous book. AI can’t feel. AI can’t tell you how much is ‘altered’ or what part, so it just assumes it’s all wrong, or all … right.

Why do people need to understand that AI makes mistakes?

AI is not magic, but mathematics. If we don’t understand how and why AI makes mistakes, we give it more authority than it deserves and stop questioning it. This is risky. The way AI is being shoveled down our throats is dangerous. It’s downright frightening how many people DO NOT question AI.

Imagine how people use AI in research and are left with false facts. Think about the impact if an applicant is rejected by an algorithm because the training data was wrong or incomplete.

Imagine the consequences if patients ask a chatbot for medical insights and get wrong advice.

AI doesn’t know if it’s telling the truth. When we trust it blindly, it tacitly influences our decisions, beliefs, and even our rights. But if we understand why it makes mistakes, we remain in control—not the algorithm.

How to protect yourself:

  • Ask for sources
    If you’re using AI to perform specific tasks, especially research-based tasks, ask for sources.
  • Cross-check results
    Compare with a trusted, human-verified source, or ask a second person to review and confirm.
  • Beware of misplaced confidence
    Polished is not the same as correct.
  • Use AI to design, not to make decisions
    Let it suggest, you confirm. You are the boss!

So, what should we think when we increasingly use AI? Well, it’s simple: “AI isn’t there to replace human intelligence; it is there to complement it.” The responsibility for oversight and review remains with us, and I hope that it will remain so.

AI will put websites like Wikipedia out of business if we let it happen. Why would I go to the Mayo Clinic or other trusted sources if AI gives me the same answers and I don’t have to scroll down? Well, that’s for you to decide. I know what I need to do. I keep on scrolling.

To be honest, I search online and add -AI (Minus AI) behind all my search phrases, because I still want to see the original sources, and it works like a charm.
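For anyone who wants to try the same trick: the leading minus is the standard exclusion operator that major search engines support, which tells the engine to leave out results containing that term. The queries below are my own illustrations, not taken from the post:

```
best sourdough bread recipe -AI
symptoms of vitamin D deficiency -AI
```

In practice, excluding the term “AI” tends to push AI-summarized and AI-labeled pages out of the results, which is why original sources surface more readily.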

For the moment, I am cured of AI, but I will continue using it to change a background in a picture and to get logical, analytical answers to questions.

AI will not take my job, nor can it compete with my crazy design choices. It’s a tool! It can help me find what I want, but it can’t see my imagination because sometimes even I don’t see the result before it’s actually finished. Do I make sense?

Yes! But don’t ask AI.

13 Comments

  1. AI fïed said:

    I’ve seen this firsthand. The issue isn’t AI making mistakes, it’s how confidently those mistakes are delivered.

    January 8, 2026
  2. leigha66 said:

    This is a very interesting post about AI. It is growing so fast into every aspect of life. There should be rules or laws about its use. I am guilty of relying too blindly on information I have received through AI. I will think twice about it now. I also will use the ‐AI trick after searches. Thanks for sharing this eye-opening piece.

    January 7, 2026
  3. Carolyn Page said:

    My partner, a lover of Earth’s minerals, geology, etc., tested AI (along with his peers) by asking it about a certain ‘fictitious’ mineral. AI went into a lengthy focus upon said mineral. What?..

    January 3, 2026
  4. lisaapaul said:

    A wonderful, informative post.

    January 3, 2026
  5. Debra said:

    You’ve compiled a very interesting overview of the AI world we are all being asked to join! I am certainly not running to embrace this technology. I am curious about it when I hear of the possibilities in medicine and science, but curious and cautious aren’t the same. AI imagery has ruined my relationship with any social media, simply because I can’t tell what is “real” and what isn’t. It’s sometimes hard recognizing that all the dangers I was warned about in books I read growing up, which I assumed were fantasy, are starting to emerge as the future. Do you have driverless cars where you live? We do…Waymo…all over the place, but not for me!

    January 2, 2026
    • I feel the same way. I don’t even want to watch videos anymore because so many of them are fake. It’s a complete turnoff for me. The other day I listened to music on a blog, and when I asked, the blogger confessed that AI had played it, though the lyrics were his own. Again, a complete turnoff. So far, and I know this because I just published, all publishing houses (including KDP/Amazon and Barnes and Noble) ask if AI was used to complete the project. All literary agents ask the same questions. I hate how quickly AI came along and how willingly so many just accept it. The internet, or the missing laws, have destroyed society (in my humble opinion). I needed to test AI myself, and I am not crazy about it. Too much can go wrong/is going wrong.

      January 3, 2026
  6. Eha Carr said:

    I am not a paranoid person . . . not by a long shot . . . but have been caught a few times this year ! I now have come to mistrust and look twice at a lot I did not before . . . not happy but, I guess, I have to go ‘with the flow’ but ‘my’ and very guarded way!

    January 2, 2026
    • I am not paranoid either, but like you, I take everything with a ‘grain of salt’ and I am cautious. I am accepting AI to the point of being a tool, but don’t like how it’s taking over. I don’t want to watch YouTube anymore, because of all the fake videos. It’s a complete turn-off.
      I long for reality and real things and hate what this all is doing to society.

      January 3, 2026
  7. What a wonderful post, both interesting and informative. Your point about searching with a proviso of minus AI is a useful habit to adopt. The increasing problem for everyone is that we no longer know when AI is involved in what we are reading/hearing/viewing, and as a result, many people accept what is presented as fact. It is particularly worrying when considering what young minds are subjected to before they have learned reasoning power. I can see no solution to this monster we have created, and that worries me immensely.

    January 2, 2026
    • I am so pleased that you like my post.
      You are spot on, with AI we created a monster and the rich and powerful will make sure that it will be used, mainly against us, because many of us will be replaced by AI. Why make a movie if AI can do it? Why bother to write a book for months and years, if AI can do it in a day? Why even think and research, if AI gives us all the answers? Why waste my time and take 30 photos, if I can just dictate what I want to see?
      I think we made a dramatic mistake when we forgot to put laws and regulations in place when the world wide web started. The internet has changed our behavior, demolished morality, and sadly changed the way we interact. AI will change us, if we let it.
      I use -AI on every one of my searches because I only accept AI as a tool. I am bossy. 🙂

      January 3, 2026
  8. dawnkinster said:

    That’s very interesting. So far I haven’t deliberately used AI, but it’s possible it has had some effect on my writing or reading without my knowing. I’m not even curious about it. But I fear I will have to get used to it being out there and become more skeptical about things I see.

    January 2, 2026
    • I have to admit, I am curious about AI. I hoped for a slow introduction, but got alarmed when I saw the amount of money that is pumped into it. Who will gain and how?
      I wanted to test it before I came to a conclusion and now I have. 🙂

      January 3, 2026
