ChatGPT comments on CHD’s transgender coverage
Some of you might find VERY interesting ChatGPT’s response to investigative reporter Jon Rappoport’s question about The Defender’s transgender coverage:
Posted on February 18, 2025 at 9:01 pm EST
Quite frightening, not just because of the avoidance of the question asked, but because of the hostile, judgmental characterization of CHD. It is worse than Wikipedia, which censors information on a number of issues with similar negative commentary. This feels like censorship: not just the removal of information, but a direct attack on a source of information sought out. The most frightening part of this is that people are being directed to use these AI sources for information, and they are doing it.
As this response demonstrates, people need to be very wary of anything that comes out of AI. Any ‘conclusions’ it draws are directly related to the data fed to it; since the dawn of computers, IT people have appreciated the term “GIGO”: “garbage in, garbage out”.

An increasing number of people since Covid have realized that most of our reality has been deliberately falsified by MSM, directed to do so by its wealthy elite owners, via what Benjamin Franklin described as “the most diabolically heinous of all lies”: half-truths. (And much of our so-called “medical care” as well, with many trusted “doctors” being as badly gaslighted by their own sources, and, sadly, others practicing procedures and protocols they KNOW to be harmful!)

AI conclusions depend on similar sources of “informed history”, treating them as “fact”. Sincere misinformation abounds, and deliberately creating phony “facts” (always endemic) has become a more and more rampant practice in business, in governments, and even in science (“Trust the science”!). See where this leads?

The most dangerous part of AI will be the humans who abdicate decisions to it, and the magnitude of the harm that will occur as a result. The best it can ever do is make a suggestion, which a responsible user will still need to vet fully. But all too often, it won’t be.
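To make the GIGO point concrete, here is a toy Python sketch (every label below is invented purely for illustration, not taken from any real training set): a system trained only on one-sided labels can do nothing but echo that slant back, whatever you ask it.

```python
# A toy illustration of GIGO: a "model" trained only on one-sided labels
# can only ever reproduce that slant. All data here is hypothetical.
from collections import Counter

# Hypothetical training set: every example about one source carries the
# same pre-assigned label, so no other conclusion is even representable.
training = [
    ("CHD article on vaccines", "unreliable"),
    ("CHD article on 5G", "unreliable"),
    ("CHD article on transgender coverage", "unreliable"),
]

label_counts = Counter(label for _text, label in training)

def classify(text: str) -> str:
    """Majority-vote 'classifier': with no counter-examples anywhere in
    the training data, the majority label wins for every input."""
    return label_counts.most_common(1)[0][0]

print(classify("CHD article on anything"))  # -> unreliable
```

Real language models are vastly more complex, but the dependence on the text and labels they were fed is exactly the same.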
Thank you for posting; I shared it with the team at IMA.
If people haven’t tried Perplexity, I would recommend doing so. You can adjust the model to find one that works for your use case. Perplexity cites all of its sources and prints out its steps and its “thinking” along the way, so you can vet and scrutinize the results yourself.
Here are some examples doing the same prompt:
Perplexity + Grok2:
https://www.perplexity.ai/search/looking-for-articles-published-9RKQadkvQHGvfohJR1dViQ
Perplexity + Deep Research:
https://www.perplexity.ai/search/looking-for-articles-published-1mvy2ccTQ7.Ug9VXpJBGEA
All of the examples still attach some level of “misinformation” disclaimer, but they get you the information without giving a big speech, and with much greater depth.
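If you would rather script this kind of check than use the web interface, here is a minimal sketch against Perplexity’s public, OpenAI-compatible API. The endpoint, model name, and the “citations” field are assumptions based on their published documentation and may have changed; treat it as a starting point, not the workflow used above.

```python
# A minimal sketch: query an AI search API that returns citations
# alongside its answer, so every source can be vetted by hand.
import os

import requests

API_URL = "https://api.perplexity.ai/chat/completions"  # assumed endpoint
API_KEY = os.environ["PERPLEXITY_API_KEY"]  # supply your own key

payload = {
    "model": "sonar",  # assumed model name; swap in one suited to your use case
    "messages": [
        {
            "role": "user",
            "content": (
                "Looking for articles published by The Defender "
                "on transgender coverage."
            ),
        }
    ],
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
data = response.json()

# Print the model's answer, then each cited source so you can check them.
print(data["choices"][0]["message"]["content"])
for url in data.get("citations", []):
    print("source:", url)
```

Changing the “model” field lets you compare answers across models for the same prompt, just as the linked examples do in the web UI.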
Grok3 + DeepSearch came out with the most impressive results so far: