Does AI Really “Know” Things?
Welcome back to our little corner of the internet!
You’ve probably heard about AI doing fascinating things like driving cars and chatting with you, but does it actually “know” things the way we do? Let’s take a closer look to figure out if AI is as smart as it seems.
You may have come across sceptical claims like this one:
“Artificial Intelligence is not based on natural laws of intelligence. As a consequence, AI is not intelligent by itself, by which AI is limited to perform repetitive tasks, and machine learning is limited to pattern recognition. So, AI is unable to know anything.”
Sceptics like this do have a point. But what is the definition of true knowledge? Let’s dive deeper.
Epistemology
There’s been a philosophical debate about the theory of knowledge, or Epistemology, going on for centuries. A common working definition of knowledge is “awareness of facts or practical skills, and may also mean familiarity with objects or situations”.
How Does AI Know Things?
Think of AI like an incredibly clever parrot. It learns from a bunch of examples and figures out how to do things based on those examples. But, here’s the catch — does the parrot really understand what it’s saying, or is it just copying sounds? AI is a bit like that parrot. It’s fantastic at copying human-like tasks, but understanding is a whole different story.
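To make the parrot analogy concrete, here’s a deliberately tiny, made-up sketch (nothing like a real chatbot’s internals, just an illustration): a program that learns which word tends to follow which in some text, then “parrots” new sentences from those patterns without any grasp of meaning.

```python
import random
from collections import defaultdict

# A minimal "parrot": a word-level Markov chain. It records which word
# tends to follow which, then mimics that pattern with zero understanding.
def train(text):
    words = text.split()
    follows = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)
    return follows

def parrot(follows, start, length=10):
    word = start
    output = [word]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break  # the parrot has never "heard" anything after this word
        word = random.choice(options)
        output.append(word)
    return " ".join(output)

# Toy training text -- the "sounds" our parrot has been exposed to.
corpus = "the parrot repeats what the parrot hears and the parrot sounds clever"
model = train(corpus)
print(parrot(model, "the"))
```

The output can look surprisingly fluent, yet the program has no idea what a parrot is. That gap between fluent mimicry and understanding is the whole question.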
The Illusion of Understanding
Imagine somebody tells you that there’s a new fruit around called a “mangoberry.” Now, if you see that fruit again, you’d most likely call it a “mangoberry”, right? But do you really know what a mangoberry is?
AI works in a similar way. It looks at lots of pictures and information, then labels things based on what it’s seen before. But it’s not the same as truly understanding what those things are. For example, it might be great at chess, but that doesn’t mean it knows the whole story of chess like a human does.
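Here’s another made-up sketch of that idea (a toy nearest-neighbour labeller, not any real vision system): the program calls something a “mangoberry” purely because it resembles examples it has seen before.

```python
# Toy "fruits" described by two invented features, plus their labels.
examples = [
    # (colour_score, size_score, label) -- made-up numbers for illustration
    (0.9, 0.3, "mangoberry"),
    (0.8, 0.4, "mangoberry"),
    (0.2, 0.9, "watermelon"),
    (0.3, 0.8, "watermelon"),
]

def label(colour, size):
    """Return the label of the most similar known example."""
    def distance(example):
        known_colour, known_size, _ = example
        return (colour - known_colour) ** 2 + (size - known_size) ** 2
    return min(examples, key=distance)[2]

print(label(0.85, 0.35))  # prints "mangoberry" -- matching, not knowing
```

The label comes out right, but nothing in the program knows what a mangoberry tastes like, where it grows, or that it’s a fruit at all. It’s pattern matching all the way down.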
In fact, a good example was a match between a rules-based AI (one that had been explicitly programmed with the rules of chess) and a machine-learning AI. The machine-learning AI was never given the rules on paper; instead, it picked up the underlying concepts by looking at a huge number of example chess games. Would this still be a true understanding?
A Weakness?
Have you ever tried to tell a robot a joke? It probably wouldn’t have got it without a lot of prompting, because it lacks human experience. AI struggles to catch hints, for example when you say something but mean the opposite. It’s not good at understanding content outside the context it was trained on. So this brings us back to our first point on Epistemology: AI is not self-aware, nor truly aware of the situation it’s in.
It’s important to remember that AI’s “knowledge” is impressive, but it’s not the same as human understanding. It’s good at copying patterns and doing specific jobs, but it doesn’t have our deep understanding, creativity, or common sense. So, next time you see AI doing something smart, remember it’s a bit like a parrot — clever, but not quite the same as human intelligence.
Remember that this is solely my interpretation. Feel free to comment if you feel differently!
I do realise this is quite a bit shorter than usual, but I wanted to get straight to the point. If you liked this one, leave a few claps, a comment and a follow behind. See you in the next one!