12 Comments

Perhaps a follow-up would be to ask about other forms of energy and their uses. Would it mention the need for liquid fuels for transportation? Would it mention that the “too cheap to meter” comment was in reference to nuclear fusion?

Bill:

I don't recall seeing a "too cheap to meter" comment in the AI response I received.

Yes, there are numerous follow-up questions that come to mind — but what is the point if the source is purposefully giving false information — which I proved is happening, above?

I was thinking in terms of future AI but I failed to mention it. Detractors of nuclear power often mention the “too cheap to meter” comment, but they fail to mention the context. If there were adequate base load power, then it might make sense to charge a flat rate and not bother with metering it; however, it wouldn't fly politically.

As a mother of two school-aged children, I find that ChatGPT raises all my alarm bells. It is already difficult enough counter-instructing against all the non-factual scientific claims and cultural influences that permeate media, government and our elite establishments - now I'll have to guard against heavily promoted AI influences as well. And trust me - I can already hear the following: "According to ChatGPT...," much like people do with Wikipedia and Google right now 🤦‍♀️. God help us...

Jo: TY. I share your concern — which is why I'm doing this science-based critique of AI. IMO our best defense remains the ability to Critically Think. Of course they are aggressively trying to kill that in schools today.

All the AI does is parrot politically correct nonsense on the issue. Great critique.

K.J.

Hi, John. As you might remember, I've been working in the field of AI for over 60 years. I was one of the originals. Fast-forward 10 years, and I was working with Dr. Carl Page, who was developing data analysis algorithms. We couldn't do much on computers to prove it, because back in the late 60s and early 70s, despite landing on the moon, there wasn't sufficient computing power and there weren't adequate data to search. Carl's sons were Carl Jr. and Larry ... Page. Now, down to ChatGPT.

This is clearly not an AI implementation. It is humans changing rules and loading static databases. That, to me, is not AI. There is a decision-tree mechanism that is called AI today, and I guess I would go along with that to some extent, though I implemented that approach in the 90s to control quality on manufacturing lines. I'm inclined to the idea that everything called AI today exists only because of computing power; the math and algorithms were available in the late 60s and early 70s. (I worked on speech recognition in the early 70s. Again, a compute-constrained problem: separating phonemes when the speech ran together. It was solved with more pattern comparison at human-acceptable speeds.)
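
For readers who haven't seen one, here is a minimal sketch of the kind of decision-tree classifier described above, applied to a toy quality-control problem. The sensor names, thresholds, and data are hypothetical illustrations, not taken from that 90s manufacturing application (and it is written with modern scikit-learn, which of course did not exist back then).

```python
# Minimal sketch of a decision-tree quality classifier of the kind described
# above. The sensor features and toy data are hypothetical illustrations.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical per-unit sensor readings: [temperature, vibration, line_speed]
X = [
    [200, 0.02, 1.0],
    [205, 0.03, 1.1],
    [240, 0.09, 1.0],
    [235, 0.08, 0.9],
]
y = ["pass", "pass", "fail", "fail"]  # inspection outcome for each unit

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The learned rules are explicit if/else thresholds, which makes them easy
# to audit on a production line.
print(export_text(tree, feature_names=["temperature", "vibration", "line_speed"]))
print(tree.predict([[238, 0.07, 1.0]]))  # classify a new unit
```

The appeal of this approach for quality control is exactly that auditability: every decision reduces to a handful of readable threshold rules.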

On ChatGPT, whose function I implemented back in the early 90s as part of that manufacturing application, I asked the question:

Provide a summary of the contents of the Pfizer Papers released through FOIA request.

Response from ChatGPT: I apologize, but as an AI language model, my responses are generated based on pre-existing knowledge up until September 2021. As of my last update, I am not aware of any specific Pfizer Papers released through a Freedom of Information Act (FOIA) request related to COVID-19 vaccines. It's possible that new information has emerged since then.

Ummm. Ding, wrong answer! I suppose I can work on asking the question (training me) until I get to a level the program can understand. But the facts and knowledge on the technology, the mandate effectiveness, and the safety and effectiveness of the ψ-mRNA and LNP technology were available in 2020. Some papers go back 30 years! If the software were AI, it should not have been so superficial. In fact, it turns out the probability approaches 100% that the response given was manufactured at the direction of the federal government and has no scientific basis at all. It's a plug-in triggered by certain words in aggregate. AI can be manipulated to give mal-information at the desire of controlling interests. This information can be embedded within information that "sounds authentic".
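
To illustrate the mechanism being described (a keyword-triggered "plug-in" that intercepts certain prompts before any real model is consulted), here is a minimal hypothetical sketch. The trigger sets and canned reply are assumptions for illustration only; this is not a claim about ChatGPT's actual implementation.

```python
# Hypothetical sketch of a keyword-triggered "plug-in" sitting in front of a
# language model. The trigger sets and canned reply are assumptions made for
# illustration; this is not ChatGPT's actual code.

CANNED_REPLY = ("I apologize, but as an AI language model, my responses are "
                "generated based on pre-existing knowledge up until September 2021.")

# Fire only when certain words appear together ("in aggregate").
TRIGGER_SETS = [
    {"pfizer", "foia"},
    {"vaccine", "adverse"},
]

def filtered_response(prompt: str, model) -> str:
    """Return the canned deflection if any trigger set matches, else call the model."""
    words = set(prompt.lower().replace(".", " ").split())
    return CANNED_REPLY if any(t <= words for t in TRIGGER_SETS) else model(prompt)

# A prompt containing both "Pfizer" and "FOIA" matches the first trigger set,
# so the underlying model is never consulted.
print(filtered_response("Summarize the Pfizer Papers released through a FOIA request.",
                        model=lambda p: "(model answer)"))
```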

ATY: TY for the detailed and interesting comments. My results above (and in the subsequent Parts 3 and 4) give credence to your observations. Not sure whether the government is behind this, but I would not be surprised if it were part of the Left-Wing plan to not only dumb citizens down, but to bury them with misinformation. I'll elaborate more in Parts 3 and 4.

John,

Good question for AI. I found that these sorts of questions get a “popular” narrative response that is often missing facts. I also found that when you press AI on a particular answer, when you know the facts, AI quickly apologizes and responds with a more in-depth answer. If you continue to press, its answers seem to get a little closer to the facts. At some point, it gives up with some lame explanation. When I explored greenhouse gases, its initial response was about the evilness of CO2 (the popular but incorrect narrative). When I pressed it on the water vapor content of GHGs, it quickly conceded the point, but still tended to paint GHGs as something evil...clear bias.

Don:

Yes, clear bias. I was trying to keep this short, but I did ask for a new answer a few times. There were some modifications, but nothing substantial. Further, why should a citizen have to ask a powerful computer to try another answer? And when should they do that?

Agree...AI should get the right answer with its first response instead of the user being forced to get into a dialogue with a biased machine. It seems like AI simply taps the internet and finds popular narratives independent of any facts or science. It would be great, and far more powerful and useful, if AI used Critical Thinking, which it can learn from all the "Critical Thinking" resources available. Since AI did a pretty good job on your Scientific Method question, it's interesting that it didn't use the Scientific Method to respond to your "wind" question.

Don:

That's a good question. I also don't think the answer was based on what is on the Internet, as there are easily over a hundred studies (on the Internet) about the negative consequences of wind energy — yet it showed none of them in the answer above.
