• GooberEar@lemmy.wtf · 3 days ago

    I need to bookmark this for when I have time to read it.

    Not going to lie, there’s something persuasive about this for me, almost like the call of the void. There are days when I wish I could just get lost in AI-fueled fantasy worlds, though I’m not even sure how that would work or what it would look like. It feels akin to going to church as a kid: all the other children my age were supposedly talking to Jesus and feeling his presence, but no matter how hard I tried, I didn’t experience any of that. It made me feel like either I’m deficient or they’re delusional. And sometimes I honestly believe it would be better if I could live in some kind of delusion like that, where I feel special, as though I have a direct line to the divine. But if an AI were trying to convince me of some spiritual awakening, I believe I’d just keep seeing through it, knowing it’s a computer running algorithms with nothing deeper to it than that.

  • Lovable Sidekick@lemmy.world · 3 days ago

    A friend of mine, currently being treated in a mental hospital, had a similar-sounding psychotic break that disconnected him from reality. He had a profound revelation that gave him a mission. He felt that sinister forces were watching and tracking him, and that they might see him as a threat and smack him down. But my friend’s experience had nothing to do with AI - in fact, he’s very anti-AI. The whole scenario of receiving life-changing inside information and being called to fulfill a higher purpose is sadly a very common tale. Calling it “AI-fueled” is just clickbait.

  • AizawaC47@lemm.ee · 3 days ago

    This reminds me of the movie Her, but it’s far worse than the romantic relationship and friendship depicted throughout that movie. This goes way too deep into delusion, bordering on outright psychosis. It’s tearing people apart with self-delusional ideologies tailored to each individual, because AI is good at exactly that. The movie was prophetic in showing us what the future could look like, but reality turned out worse.

    • TankovayaDiviziya@lemmy.world · 3 days ago

      It has been a long time since I watched Her, but my takeaway from the movie is that because making real-life connections is difficult, people have come to rely on AI, which has shown itself to be more empathetic and probably more reliable than an actual human being. I think what many people don’t realise about why so many are single is that those people are afraid of making a connection with another person again.

      • douglasg14b@lemmy.world · 3 days ago

        Yeah, but they hold none of the actual complexities or nuances of real human emotional needs and connections.

        Which means these people become further and further detached from the reality of human interaction, making them social dangers over time.

        Just as humans who lack critical thinking are dangers in a society where everyone is expected to make sound decisions, humans who lack the ability to socially navigate or connect with others are dangerous in a society where people are expected to be socially stable.

        Obviously these people are not in good places in life. But AI is not going to make that better. It’s going to make it worse.

  • Jakeroxs@sh.itjust.works · 3 days ago

    Meanwhile, we’ve had religion for centuries, but according to the majority of the population that’s a fine delusion for people to have.

  • lenz@lemmy.ml · 4 days ago

    I read the article. This is exactly what happened when my best friend got schizophrenia. I think the people affected by this were probably already prone to psychosis or on the verge of becoming schizophrenic, and ChatGPT is merely the mechanism by which their psychosis manifested. If AI didn’t exist, it would probably have been Astrology or Conspiracy Theories or QAnon or whatever that ended up triggering this in people who were already prone to psychosis. But the problem with ChatGPT in particular is that it validates the psychosis… and that is very bad.

    ChatGPT actively screwing with mentally ill people is a huge problem you can’t just blame on stupidity, as some people in these comments are doing. This is exploitation of a vulnerable group of people whose brains lack the mechanisms to defend against this stuff. They can’t help it. That’s what psychosis is. This is awful.

    • sugar_in_your_tea@sh.itjust.works · 4 days ago

      the problem with ChatGPT in particular is that it validates the psychosis… and that is very bad.

      So do astrology and conspiracy-theory groups on forums and other forms of social media; the main difference is whether you’re getting that validation from humans or from a machine. To me, that’s a pretty unhelpful distinction, and we attack both problems the same way: early detection and treatment.

      Maybe computers can help with the early detection part. They certainly can’t do much worse than what’s currently happening.

    • Maeve@kbin.earth · 4 days ago

      I think this is largely people seeking confirmation that their delusions are real, and they’ll attach themselves to wherever they find it.

  • Zozano@aussie.zone · 5 days ago

    This is the reason I’ve deliberately customized GPT with the following prompts:

    • User expects correction if words or phrases are used incorrectly.

    • Tell it straight—no sugar-coating.

    • Stay skeptical and question things.

    • Keep a forward-thinking mindset.

    • User values deep, rational argumentation.

    • Ensure reasoning is solid and well-supported.

    • User expects brutal honesty.

    • Challenge weak or harmful ideas directly, no holds barred.

    • User prefers directness.

    • Point out flaws and errors immediately, without hesitation.

    • User appreciates when assumptions are challenged.

    • If something lacks support, dig deeper and challenge it.

    I suggest copying these prompts into your own settings if you use GPT or other glorified chatbots.
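
    For anyone using the API rather than the ChatGPT settings page, here’s a minimal sketch of how the same rules could be passed as a system message with the OpenAI Python SDK. The condensed wording, the model name, and the example question are my own placeholders, not part of my saved settings:

      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      # The custom instructions above, condensed into one system message
      # so they apply to every reply in the conversation.
      SYSTEM_PROMPT = (
          "Correct the user when words or phrases are used incorrectly. "
          "Tell it straight - no sugar-coating. Stay skeptical and "
          "question things. Point out flaws, errors, and unsupported "
          "assumptions immediately. Prefer deep, rational, well-supported "
          "argumentation over agreement."
      )

      response = client.chat.completions.create(
          model="gpt-4o",  # placeholder; any chat-capable model works
          messages=[
              {"role": "system", "content": SYSTEM_PROMPT},
              {"role": "user", "content": "Is my business plan solid?"},
          ],
      )
      print(response.choices[0].message.content)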

    • Dzso@lemmy.world · 4 days ago

      I’m not saying these prompts won’t help; they probably will. But the notion that ChatGPT has any concept of “truth” is misleading. ChatGPT is a statistical language machine. It cannot evaluate truth. Period.

      • Zozano@aussie.zone · 4 days ago

        What makes you think humans are better at evaluating truth? Most people can’t even define what they mean by “truth,” let alone apply epistemic rigor. Tweak it a little, and GPT is more consistent and applies reasoning patterns that outperform the average human by miles.

        Epistemology isn’t some mystical art; it’s a structured method for assessing belief and justification, and large models approximate it surprisingly well. Sure, it doesn’t “understand” truth in the human sense, but it does evaluate claims against internalized patterns of logic, evidence, and coherence drawn from a massive corpus of human discourse. That’s more than most people manage in a Facebook argument.

        So yes, it can evaluate truth. Not perfectly, but often better than the average person.

        • Dzso@lemmy.world · 4 days ago

          I’m not saying humans are infallible at recognizing truth either. That’s why so many of us fall for the untruths that AI tells us. But we have access to many tools that help us evaluate truth. AI is emphatically NOT the right tool for that job. Period.

          • Zozano@aussie.zone · 3 days ago

            Right now, the capabilities of LLMs are the worst they’ll ever be. It could literally be tomorrow that someone drops an LLM perfectly calibrated to evaluate truth claims. But even right now, we’re at least 90% of the way there.

            The reason people fall for the untruths of AI is the same reason people hurt themselves with power tools or use a calculator wrong.

            You don’t blame the tool, you blame the user. LLMs are no different. You can prompt GPT to intentionally give you bad info, or lead it into giving you bad info by posting increasingly deranged statements. If you stay coherent and well-read, and make an attempt at structuring arguments to the best of your ability, the pool of data GPT pulls from narrows enough to be more useful than anything else I know.

            I’m curious as to what you regard as a better tool for evaluating truth.

            Period.

            • Dzso@lemmy.world · 3 days ago

              You don’t understand what an LLM is, or how it works. They do not think, they are not intelligent, they do not evaluate truth. It doesn’t matter how smart you think you are. In fact, thinking you’re so smart that you can get an LLM to tell you the truth is downright dangerous naïveté.

              • Zozano@aussie.zone · 3 days ago

                I do understand what an LLM is. It’s a probabilistic model trained on massive corpora to predict the most likely next token given a context window. I know it isn’t sentient, doesn’t “think,” and doesn’t have beliefs. That’s not in dispute.
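
                To make that concrete, here’s a toy sketch of what “predict the most likely next token” means. This is nothing like a real transformer; it’s just the final step of turning made-up scores into a choice:

                  import math

                  # Toy stand-in for a model's output: scores (logits) over
                  # candidate next tokens for some fixed context.
                  logits = {"true": 2.0, "false": 0.5, "maybe": -1.0}

                  # Softmax converts the scores into a probability distribution.
                  total = sum(math.exp(v) for v in logits.values())
                  probs = {tok: math.exp(v) / total for tok, v in logits.items()}

                  # The "answer" is simply the most probable continuation; the
                  # model scores likely text rather than consulting facts.
                  print(max(probs, key=probs.get))  # -> true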

                But none of that disqualifies it from being useful in evaluating truth claims. Evaluating truth isn’t about thinking in the human sense; it’s about pattern-matching valid reasoning, sourcing relevant evidence, and identifying contradictions or unsupported claims. LLMs do that very well, especially when prompted properly.

                Your insistence that this is “dangerous naïveté” confuses two very different things: trusting an LLM blindly, versus leveraging it with informed oversight. I’m not saying GPT magically knows truth, I’m saying it can be used as a tool in a truth-seeking process, just like search engines, logic textbooks, or scientific journals. None of those are conscious either, yet we use them to get closer to truth.

                You’re worried about misuse, and so am I. But claiming the tool is inherently useless because it lacks consciousness is like saying microscopes can’t discover bacteria because they don’t know what they’re looking at.

                So again: if you believe GPT is inherently incapable of aiding in truth evaluation, the burden’s on you to propose a more effective tool that’s publicly accessible, scalable, and consistent. I’ll wait.

                • Dzso@lemmy.world · 3 days ago

                  What you’re describing is not an LLM; it’s tools that an LLM is programmed to use.

    • Olap@lemmy.world · 5 days ago

      I prefer reading. Wikipedia is great. DuckDuckGo still gives pretty good results with the AI turned off. YouTube is filled with tutorials. Pre-AI cookbooks are plentiful. And there are these things called newspapers; they aren’t what they used to be, but you even get a choice of which one to buy.

      I’ve no idea what a chatbot could help me with. And I think anybody who does need help with something could learn about it in pretty short order if they wanted to, and do a better job.

      • vegetvs@kbin.earth · 5 days ago

        I still use Ecosia.org for most of my research on the Internet. It doesn’t need as many resources to fetch information as an AI bot would, plus it helps plant trees around the globe. Seems like a great deal to me.

  • jubilationtcornpone@sh.itjust.works · 5 days ago

    Sounds like a lot of these people either have an undiagnosed mental illness or are really, reeeeaaaaalllyy gullible.

    For shit’s sake, it’s a computer. No matter how sentient the glorified chatbot being sold as “AI” appears to be, it’s essentially a bunch of rocks that humans figured out how to run electricity through in such a way that it can do math. Impressive? I mean, yeah. It is. But it’s not a human, much less a living being of any kind. You cannot have a relationship with it beyond that of a user.

    If a computer starts talking to you as though you’re some sort of God incarnate, you should probably take that with a dump truck full of salt rather than just letting your crazy latch on to that fantasy and run wild.

  • just_another_person@lemmy.world · 5 days ago

    Not trying to sound like a prepper or anything, but this is real.

    One of my neighbor’s children just committed suicide because their chatbot boyfriend said something negative. Another in my community did something similar a few years ago.

    Something needs to be done.

    • Zippygutterslug@lemmy.world · 5 days ago

      Humans are irrational creatures that have transitory states in which they are capable of more ordered thought. It is a mistake to conclude that humans are rational actors while we marvel daily at the irrationality of others and remain blind to our own.