Critically Thinking about Artificial Intelligence: the GOOD, the BAD, and the UGLY.
Part 4: Some Takeaways
This is the fourth of my four-part series about Artificial Intelligence (AI). See Part 1 [the Good], Part 2 [the Bad], and Part 3 [the Ugly]. [Note: all of these commentaries were written entirely by a human, not AI!]
In each of these, I utilized ChatGPT as it is the most representative of present-day AI options available to the public. I asked it a few current technical (science-related) questions I already knew the answers to — and then critiqued the responses given.
I expected to get much better results than doing a typical Internet search. For example, when I queried the Internet: “Is wind energy a net societal benefit?” I was given 182 million hits. Clearly, I cannot look at (or process) that number of references. Here are some of my basic expectations when I asked AI current technical (science-related) questions:
a) AI will go through what is on the Internet and accurately separate the wheat from the chaff. There is a lot of garbage out there, and for AI to be useful with current science-related queries, it needs to be able to cull out duplicates, marketing BS, inaccurate or poorly done material, etc.
b) AI will digest and report on relatively current Internet material (e.g., no more than a month out of date).
c) AI’s summary will be objective. In other words, it should convey a realistic summary of quality data, not emphasize what is politically correct.
d) My phrasing of a question should not be critically important, as AI should “understand” what I’m asking.
With this in mind, and after applying some Critical Thinking to the AI technical responses I received, some of my observations are:
1 - AI is most useful when you already know the answer! This may seem counter-intuitive, but the real benefit of AI is to help you refine your thinking on something. For example, did you forget a good reason, or can AI explain a reason better?
2 - Conversely, AI is most dangerous when you have little knowledge of a topic. You will be given what seems like a detailed “factual” answer. With limited knowledge, it will be very hard to grasp when you are being sold a bill of goods.
3 - The exact wording of the question asked can result in MAJOR differences in the answer. AI should be smart enough so that there are no such variations.
4 - AI is NOT like using the GPS in your car.
The GPS may not always take you on the optimum route, but 99+% of the time it will get you there. In other words, if the objective is to get from A to B, it will essentially always get the answer right. On the other hand, the accuracy of AI ranges from 0% to 99+% — and without warning! Consider the usefulness of your vehicle’s GPS if its accuracy ranged from 0% to 99+%, without any indication about the accuracy of what it is currently telling you… It would be useless…
5 - It seems that when the user asks that the AI answer be regenerated, AI does NOT start over from a clean slate. Instead, it appears that the initial AI answer is massaged in some (unknown) way. This is less useful than starting over.
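For the technically inclined, here is a minimal sketch of the difference between truly starting over and regenerating within the same conversation, assuming access to the public OpenAI API (the Python library in its 0.x form); the model name, question, and follow-up wording are placeholders for illustration, and this may or may not be what ChatGPT’s Regenerate button actually does behind the scenes.

```python
# Minimal sketch (assumed: OpenAI Python library, v0.x era; placeholder model/prompt).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder
question = "Is wind energy a net societal benefit?"

# A genuinely clean slate: only the question is sent, with no prior context.
fresh = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": question}],
)
first_answer = fresh["choices"][0]["message"]["content"]

# A "regenerate" within the same conversation: the earlier answer stays in the
# context, so the new response is conditioned on it rather than starting fresh.
regenerated = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": question},
        {"role": "assistant", "content": first_answer},
        {"role": "user", "content": "Please regenerate your answer."},
    ],
)
print(regenerated["choices"][0]["message"]["content"])
```

The practical workaround in the ChatGPT window is simply to open a brand-new chat and re-ask the question, which forces a clean slate.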
6 - The famous adage of “garbage in, garbage out” applies here. Since the AI answer is dependent on a database of information, if that database is incomplete, inaccurate, distorted, etc., then the response will be likewise flawed.
7 - AI will not be truly useful until it can ensure that its database is complete, accurate, objective, etc. The likelihood of that happening in the near future is extremely remote.
8 - Regarding the “complete” part, I repeatedly got a notice that the AI research data ended in September 2021.
For something as purportedly sophisticated as AI, that does not cut it. For example, there have been tons of COVID developments since then. It would seem that new data should be continuously inputted and processed, so that AI answers are no more than one month out of date. For many uses of AI, this could be a MAJOR flaw.
9 - Also regarding the “complete” part, multiple times I could see that important information prior to September 2021 was also missing. I’ve seen no explanation as to why such key data would not have been incorporated into AI’s answers.
10 - Regarding the “objective” part, I repeatedly saw very biased adjectives and phrasing subtly incorporated into AI responses. Most people would not pick up on these.
11 - Also regarding the “objective” part, AI acknowledges reliance on consensus and authority when formulating its technical answers.
Neither of these has anything to do with real Science. Therefore, its answers on ANYTHING potentially controversial (Climate to COVID, Elections to Education, Renewables to Religion) should be taken with tons of salt, i.e., considered as coming from a Left-wing source.
We have come to expect Left-leaning bias in information regarding national issues, but why isn’t there any conservative bias in AI?
12 - AI seems light on specific references. When I asked it technical questions, I expected some citations (e.g., to pertinent studies) to support its answers. I saw none!
Suggestion: there should be large-print disclosures that AI answers on controversial technical topics have a built-in, undeclared bias, and that they absolutely do NOT reflect all (or even most) relevant studies prior to September 2021, etc.
13 - It does not seem that AI subscribes to the objective of keeping its answers short and on point. Rather, it seems (in an apparent effort to impress users) that AI can be verbose and repetitive.
Suggestion: maybe AI should give us an option when we ask a question: do we want a condensed answer or an expanded one?
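In the meantime, users can roughly approximate this themselves: in the ChatGPT window, appending an instruction such as “Answer in no more than three sentences” usually produces a condensed reply. For API users, here is a minimal sketch (again assuming the public OpenAI API in its Python 0.x form; the wording and model name are placeholders, not an official “condensed answer” feature):

```python
# Sketch: requesting a condensed answer via a system instruction
# (assumed: OpenAI Python library, v0.x era; placeholder model/prompt).
import openai

condensed = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Answer in no more than three sentences."},
        {"role": "user", "content": "Is wind energy a net societal benefit?"},
    ],
)
print(condensed["choices"][0]["message"]["content"])
```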
14 - AI is an extreme threat to the goal of getting more citizens to become critical thinkers. We all live busy lives and have been taught to cut to the chase. Just as we already have a propensity to boil things down to soundbites, AI will take that superficiality to the next level.
Consider how good the younger generations are at understanding basic arithmetic. Ask one of them to divide 222 by 6 without a calculator! Ask them to do it on paper, so that you see their methodology. I can do it in my head (and on paper), but 99% of our younger generations would likely not be able to. The reason is that their reliance on a calculator has taken away their ability to understand and process math. The exact same thing will almost certainly happen with AI…
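(For the record, the arithmetic is straightforward: 222 = 180 + 42, and 180 ÷ 6 = 30 while 42 ÷ 6 = 7, so 222 ÷ 6 = 37. On paper, long division gives the same result: 6 goes into 22 three times with 4 left over, then into 42 seven times, yielding 37.)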
15 - For similar reasons, AI is profoundly dangerous to students, who are likely to be major users. For example, when given a paper to write, they will face an almost irresistible enticement to have AI do it for them. (They would then massage the AI response to make it look like their own work.) The net effect of students 1) not doing their own research and 2) not applying critical thinking is extraordinarily bad.
Suggestion: all AI-generated documents should be stamped with a “Draft” watermark to minimize inappropriate copying and pasting.
…………………………
A synopsis of my AI investigation as it relates to getting quality answers to technical science-related questions is: AI is no panacea, but is more like a Trojan horse. Again, regarding technical topics, unless you are an expert you are a babe in the woods.
The overall score I’m giving ChatGPT for my technical questions is 50%: good effort on select things, but not ready for primetime.
…………………………
PS — If you want some good examples of why I gave AI some credit, see this short video about ten things ChatGPT can do. Just keep in mind that although AI will answer interesting questions, unless you are an expert you will have no idea whether its response is right, which could be a very big deal.
PPS — I did some limited comparisons of ChatGPT to over a dozen of the AI alternatives available to the public. Most scored poorly on the sample technical questions I asked them, probably because they share underlying technology with ChatGPT.
However, some did markedly better in my limited test. These are GPT-4, Claude+, Claude-Instant-100k, and Claude-Instant. Here is a sample answer to the question I asked in Part 2 — and it should be clear that it’s MUCH better than ChatGPT’s.
Unfortunately, it is a bit difficult to access these alternative AIs directly. One convenient solution is to use Poe, which connects you with these and more. However, after a seven-day free trial, Poe costs $200 a year. Stay tuned for new developments, but hone your critical thinking skills, as they will be more important than ever!
Please encourage other open-minded associates to sign up for this free substack. Also please post this on your social media. The more citizens that are educated on key issues like this, the better our chances of success…
Here are other materials by this scientist that you might find interesting:
Check out the Archives of this Critical Thinking substack.
WiseEnergy.org: discusses the Science (or lack thereof) behind our energy options.
C19Science.info: covers the lack of genuine Science behind our COVID-19 policies.
Election-Integrity.info: multiple major reports on the election integrity issue.
Media Balance Newsletter: a free, twice-a-month newsletter that covers what the mainstream media does not, on issues from COVID to climate, elections to education, renewables to religion, etc. Here are the Newsletter’s 2023 Archives. Send me an email to get your free copy. When emailing me, please make sure to include your full name and the state where you live. (Of course, you can cancel the Media Balance Newsletter at any time - but why would you?)
John... good overall assessment... useful in many situations, but the user needs to be aware that it doesn’t think, though it can help one’s thinking. It will get better rapidly and access more recent info. I hope someone invents an alternative right-leaning AI, so that someone else can then create an unbiased AI by combining the left-biased and right-biased AIs.
Thanks for the awesome 10-minute video on ten good things it can do.
To keep an AI up to date in the future would require complete unrestricted real-time access to the internet...
...essentially all data
Probably wise to wait...