ChatGPT comments on CHD’s transgender coverage

  • Tanya Marquette

    Member
    February 18, 2025 at 9:51 pm EST

    Quite frightening, not just because of the avoidance of the question asked, but because of the hostile, judgmental characterization of CHD. It is worse than Wikipedia, which censors information on a number of issues with similar negative commentary. This feels like censorship: not just the removal of information but a direct attack on the very source of information being sought. The most frightening part is that people are being directed to use these AI sources for information, and they are doing it.

  • John Kelch

    Member
    February 19, 2025 at 11:15 am EST

    As this response demonstrates, people need to be very wary of anything that comes out of AI; any “conclusions” it draws are directly related to the data fed to it. Since the dawn of computers, IT people have appreciated the term “GIGO”: “garbage in, garbage out.” An increasing number of people since Covid have realized that much of our reality has been deliberately falsified by MSM, directed to do so by its wealthy elite owners, via what Benjamin Franklin described as “the most diabolically heinous of all lies”: half-truths. (And much of our so-called “medical care” as well, with many trusted “doctors” being as badly gaslighted by their own sources, and sadly, others practicing procedures and protocols they KNOW to be harmful!) AI conclusions depend on similar sources of “informed history,” treating it as “fact”; sincere misinformation abounds, and the deliberate creation of phony “facts,” always endemic, has become a more and more rampant practice in business, in government, and even in science (“Trust the science”!). See where this leads? The most dangerous part of AI will be the humans who abdicate decisions to it, and the magnitude of the harm that will occur as a result. The best it can ever do is make a suggestion that, responsibly, will still need to be fully vetted. But all too often, it won’t be.

  • IMA-HelenT

    Organizer
    February 20, 2025 at 12:03 am EST

    Thank you for posting; I shared it with the team at IMA.

  • ima-eric

    Member
    February 20, 2025 at 11:06 am EST

    If people haven’t tried Perplexity, I would recommend doing so; you can adjust the model to find one that works for your use case. Perplexity cites all sources and prints out its steps and its “thinking” along the way, so you can vet and scrutinize the results yourself.

    Here are some examples doing the same prompt:

    Perplexity + Grok2:

    https://www.perplexity.ai/search/looking-for-articles-published-9RKQadkvQHGvfohJR1dViQ

    Perplexity + Deep Research:

    https://www.perplexity.ai/search/looking-for-articles-published-1mvy2ccTQ7.Ug9VXpJBGEA

    All examples still include some level of “misinformation” disclaimer, but they get you the information without a big speech and with much greater depth.

    • IMA-GregT

      Member
      February 24, 2025 at 11:28 am EST

      👍

  • ima-eric

    Member
    February 20, 2025 at 11:23 am EST

    Grok3 + DeepSearch came out with the most impressive results so far:

    https://x.com/i/grok/share/YaivWO8gEXQdeLIodMfPx3gvb

    • IMA-GregT

      Member
      February 24, 2025 at 11:28 am EST

      👍
