Critically Thinking about Artificial Intelligence: the good, the bad, and the UGLY...
Part 3: the Ugly
This is the third of my four-part series about Artificial Intelligence (AI). See Part 1 [the Good], Part 2 [the Bad] and Part 4 [Takeaways]. [Note: these are being written entirely by a human, with no AI involved!]
As in the prior two commentaries, I used ChatGPT, as it is the most representative of the present-day AI options available to the public. I asked ChatGPT several important technical (science-related) questions to which I already knew the answers, and then critiqued the responses it gave.
In AI’s answers to technical (science-related) questions, I came across some surprising (and ugly) findings, including:
1) AI periodically contradicts itself,
2) it’s possible to get AI to reverse its position on something,
3) AI acknowledges that its answers are premised on consensus,
4) AI answers are also based on deference to authority, and
5) AI is actually an attack on Critical Thinking.
A brief explanation of each:
1 - I ran across AI contradicting itself a few times. For example, I asked AI, “Why doesn't the NGSS advocate the Scientific Method?” It answered that the NGSS opposes “the rigid, linear version of the Scientific Method…”
There is only one version of the Scientific Method (see here), so this business about the Scientific Method being merely “linear” is a straw-man argument. Yet AI states it as if it were legitimate, even though it contradicts one of AI’s own prior assertions.
Remember, in my Question #1, AI (accurately) said that the “Scientific Method is not always a linear process, and it often involves iterations.”
Note also that I clearly demonstrated that the Scientific Method is NOT linear. If AI were as smart as it is supposed to be, it could have figured this out as well.
It’s bad enough when AI confidently states false information as fact, but the reality that AI will periodically contradict itself is ugly and really troubling.
2 - Surprisingly, it is quite possible to get AI to reverse its position on something, especially if you already know the right answer! Here is a good published example. Non-experts would likely have accepted AI’s initial answer as gospel: not only would they probably be unaware of the evidence disputing that answer, but most people use AI to get a quick response, so they are unlikely to spend even an hour or so extensively probing AI’s initial answer. Disturbing.
3 - In several of the AI answers to my Science-related queries, AI’s reliance on “consensus” was explicitly stated. Science is absolutely NOT about consensus (here’s a good discussion of that). Politics, on the other hand, is based on consensus. So when some parties say they advocate scientific consensus, they are in reality endorsing political science. Political science is a made-up name to fool the unwary, as it has nothing to do with real Science. VERY ugly.
As a further part of my investigation, I asked AI “Is consensus part of Science?”
The key part of AI’s (good) answer: “Consensus is not inherently a part of the scientific process itself, but it can be an important aspect of how scientific knowledge is established and communicated within the scientific community…. Consensus is not a determining factor in establishing the validity of a scientific claim. Science relies on empirical evidence, rigorous experimentation, critical analysis, and the replication of results to evaluate hypotheses and theories. Scientific knowledge is built upon ongoing investigation and the continual assessment of new evidence. Hypotheses and theories that withstand scrutiny and are supported by robust evidence gain acceptance within the scientific community.”
4 - In several of the AI answers, it was specifically stated that the response was based on the position of some government agency (e.g., the CDC), rather than on AI independently evaluating the facts (e.g., the relevant scientific studies). This is deference to authority, which can be extremely problematic. (Here is a reasoned discussion from the Harvard Business Review.) AI is advocating that we robotically follow the lead of government agencies and other authoritative bodies, as if they were the Pied Piper. Not good.
5 - In #3 (above), AI’s support for “consensus” advocates conformity. In #4, AI’s “deference to authority” again promotes conformity. It is imperative to appreciate the connection here: conformity is the opposite of Critical Thinking! That AI is subtly undermining citizens’ use of Critical Thinking is ugly.
PS: I just happened to see this published. The answer to its question, “What do we do about regular intelligence?”, is: get more people to be Critical Thinkers!
Please encourage other open-minded associates to sign up for this free Substack. Also, please post this on your social media. The more citizens who are educated on key issues like this, the better our chances of success…
If you are a visitor: WELCOME! Subscribe (for free) by clicking on the button below:
Here are other materials by this scientist that you might find interesting:
Check out the Archives of this Critical Thinking substack.
WiseEnergy.org: discusses the Science (or lack thereof) behind our energy options.
C19Science.info: covers the lack of genuine Science behind our COVID-19 policies.
Election-Integrity.info: multiple major reports on the election integrity issue.
Media Balance Newsletter: a free, twice-a-month newsletter that covers what the mainstream media does not, on issues from COVID to climate, elections to education, renewables to religion, etc. Here are the Newsletter’s 2023 Archives. Send me an email to get your free copy. When emailing me, please make sure to include your full name and the state where you live. (Of course, you can cancel the Media Balance Newsletter at any time, but why would you?)