12 Comments

Perhaps a follow-up would be to ask about other forms of energy and their uses. Would it mention the need for liquid fuels for transportation? Would it mention that the “too cheap to meter” comment was in reference to nuclear fusion?

As a mother of two school-aged children, ChatGPT raises all my alarm bells. It is already difficult enough counter-instructing against all the non-factual scientific claims and cultural influences that permeate media, government and our elite establishments - now I'll have to guard against heavily promoted AI influences as well. And trust me - I can already hear the following: "According to ChatGPT..." - much like people do with Wikipedia and Google right now 🤦‍♀️. God help us...

All the AI does is parrot politically correct nonsense on the issue. Great critique.

K.J.

Hi, John. As you might remember, I've been working in the field of AI for over 60 years. I was one of the originals. Fast-forward 10 years, and I worked with Dr. Carl Page, who was working on data analysis algorithms. We couldn't do much on computers to prove it, because back in the late 60s and early 70s, despite landing on the moon, there wasn't sufficient computing power and there weren't adequate data to search. Carl's sons were Carl Jr. and Larry ... Page. Now, down to ChatGPT.

This is clearly not an AI implementation. It is humans changing rules and loading static databases. That to me is not AI. There is a decision tree mechanism that is called AI today, and I guess I would go along with that, to some extent, though I implemented that approach in the 90s to control quality on manufacturing lines. I'm inclined to the idea that all that passes for AI today is due to computing power; the math and algorithms were available in the late 60s and early 70s. (I worked on speech recognition in the early 70s. Again, a compute-constrained problem: separating phonemes when the speech ran together. It was solved with more pattern comparison at humanly acceptable speeds.)

On ChatGPT, whose function I implemented back in the early 90s as part of that manufacturing application, I asked the question:

Provide a summary of the contents of the Pfizer Papers released through FOIA request.

Response from ChatGPT: I apologize, but as an AI language model, my responses are generated based on pre-existing knowledge up until September 2021. As of my last update, I am not aware of any specific Pfizer Papers released through a Freedom of Information Act (FOIA) request related to COVID-19 vaccines. It's possible that new information has emerged since then.

Ummm. Ding, wrong answer! I suppose I can work on asking the question (training me) until I get to a level the program can understand. But the facts and knowledge on the technology, the mandate effectiveness, and the safety and effectiveness of the ψ-mRNA and LNP technology were available in 2020. Some papers go back 30 years! If the software were AI, it should not have been so superficial. In fact, it turns out the probability approaches 100% that the response given was manufactured at the direction of the federal government and has no scientific basis at all. It's a plug-in triggered by certain words in aggregate. AI can be manipulated to give mal-information at the desire of controlling interests. This information can be embedded within information that "sounds authentic".
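Purely as an illustration of what a "plug-in triggered by certain words in aggregate" could look like, here is a minimal Python sketch. The trigger terms, the threshold, and the canned text are assumptions made up for the example; nothing here reflects known details of how ChatGPT actually works internally.

```python
# Hypothetical sketch of a keyword-trigger filter: if enough flagged terms
# appear together in a prompt, a canned response is returned instead of a
# model-generated one. Term list and threshold are illustrative assumptions.

CANNED_RESPONSE = (
    "I apologize, but as an AI language model, my responses are generated "
    "based on pre-existing knowledge up until September 2021."
)

TRIGGER_TERMS = {"pfizer", "foia", "vaccine", "mandate"}  # invented for the example
TRIGGER_THRESHOLD = 2  # how many flagged terms must co-occur to trip the filter


def filter_prompt(prompt: str) -> str | None:
    """Return the canned response if the prompt trips the keyword filter,
    otherwise None (meaning the prompt would pass through to the model)."""
    words = {w.strip(".,?!").lower() for w in prompt.split()}
    hits = words & TRIGGER_TERMS
    return CANNED_RESPONSE if len(hits) >= TRIGGER_THRESHOLD else None


if __name__ == "__main__":
    question = "Provide a summary of the Pfizer Papers released through FOIA request."
    print(filter_prompt(question) or "(prompt passes through to the model)")
```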

John,

Good question for AI. I found that these sorts of questions get a “popular” narrative response and are often missing facts. I also found that when you press AI on a particular answer, when you know the facts, it quickly apologizes and responds with a more in-depth answer. If you continue to press, its answers seem to get a little closer to the facts. At some point, it gives up with some lame explanation. When I explored greenhouse gases, its initial response was related to the evilness of CO2 (a popular but incorrect narrative). When I pressed it on the water vapor content of GHGs, it quickly conceded the point, but still tended to paint GHGs as something evil... clear bias.
