When German journalist Martin Bernklau typed his name and location into Microsoft’s Copilot to see how his articles would be picked up by the chatbot, the answers horrified him. Copilot’s results asserted that Bernklau was an escapee from a psychiatric institution, a convicted child abuser, and a conman preying on widowers. Bernklau had served for years as a courts reporter, and the AI chatbot had falsely blamed him for the crimes whose trials he had covered.

The accusations against Bernklau weren’t true, of course, and are examples of generative AI’s “hallucinations.” These are inaccurate or nonsensical responses to a user’s prompt, and they’re alarmingly common. Anyone using AI should proceed with great caution, because information from such systems needs human validation and verification before it can be trusted.

But why did Copilot hallucinate these terrible and false accusations?

    • @wintermute@discuss.tchncs.de

      Exactly. LLMs don’t understand semantically what the data means; they just model how often some words appear close to others.

      Of course this is oversimplified, but that’s the main idea.
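
      For illustration, a toy sketch of that “how often words appear near each other” idea, using literal next-word counts on a made-up corpus (real LLMs learn far richer statistics with neural networks; this only conveys the intuition):

      ```python
      # Toy illustration only: count which word follows which in a tiny corpus,
      # then "generate" by picking the most frequent continuation.
      # Real LLMs use learned neural representations, not literal bigram counts.
      from collections import Counter, defaultdict

      corpus = "the reporter covered the trial and the reporter wrote about the verdict".split()

      following = defaultdict(Counter)
      for current_word, next_word in zip(corpus, corpus[1:]):
          following[current_word][next_word] += 1

      # "the" is most often followed by "reporter" here, so that is what a purely
      # frequency-based generator would pick -- with no idea what "reporter" means.
      print(following["the"].most_common())
      ```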

      • @vrighter@discuss.tchncs.de

        No need for that subjective stuff. The objective explanation is very simple: the output of the LLM is sampled using a random process, a loaded die weighted according to the LLM’s output probabilities. It’s as simple as that. There is literally a random element that is not part of the LLM itself, yet is required for its output to be of any use whatsoever.
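
        A minimal sketch of that loaded die (the tokens and probabilities below are made up purely for illustration):

        ```python
        # The model emits a probability distribution over possible next tokens;
        # a separate random draw (the "loaded die") then picks one of them.
        import random

        next_token_probs = {      # hypothetical distribution from an LLM
            "reporter": 0.55,
            "criminal": 0.25,     # plausible but false continuations still get weight
            "teacher": 0.15,
            "astronaut": 0.05,
        }

        tokens = list(next_token_probs)
        weights = list(next_token_probs.values())

        # Same distribution in, different output each run -- the randomness sits
        # outside the model itself.
        for _ in range(5):
            print(random.choices(tokens, weights=weights, k=1)[0])
        ```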

    • @Rivalarrival@lemmy.today

      It’s a solvable problem. AI is currently at a stage of development equivalent to a 2-year-old, just with better grammar. Everything it is doing now is mimicry and babbling.

      It needs to feed its own interactions right back into its training data to become a better and better mimic. Eventually, the mechanism it uses to select the appropriate data to form a response will become more and more sophisticated, and it will hallucinate less and less. Eventually, its hallucinations will be seen as “insightful” rather than wild-ass guesses.

      • @vrighter@discuss.tchncs.de

        Also, what you described has already been studied: training an LLM on its own output completely destroys it; it doesn’t make it better.
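
        A toy numerical sketch of that failure mode (often called model collapse). This is not an LLM experiment; it only shows how repeatedly fitting a distribution to its own samples tends to lose diversity:

        ```python
        # Fit a simple "model" (a Gaussian) to data, sample from it, refit to the
        # samples, repeat. The fitted spread random-walks with a downward bias, an
        # analogue of the degradation seen when models train on their own output.
        import numpy as np

        rng = np.random.default_rng(42)
        mean, std = 0.0, 1.0  # the original "real" data distribution

        for generation in range(1, 51):
            samples = rng.normal(mean, std, size=20)   # "train" on the model's own output
            mean, std = samples.mean(), samples.std()  # refit the model to those samples
            if generation % 10 == 0:
                print(f"generation {generation:2d}: std = {std:.3f}")
        ```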

      • @vrighter@discuss.tchncs.de

        The outputs of the neural net are sampled using a random process. The probability distribution is decided by the LLM; the loaded die comes after the LLM. No, it’s not solvable. Not with LLMs. Not now, not ever.

  • Queen HawlSera

    It’s a fucking Chinese Room. Real AI is not possible. We don’t know what makes humans think, so of course we can’t make machines do it.

    • @KairuByte@lemmy.dbzer0.com

      You forgot the ever important asterisk of “yet”.

      Artificial General Intelligence (“Real AI”) is all but guaranteed to be possible, because that’s what humans are. Get a deep enough understanding of humans, and you will be able to replicate what makes us think.

      Barring that, there are other avenues for AGI. LLMs aren’t one of them, to be clear.

      • @PhlubbaDubba@lemm.ee

        I actually don’t think a fully artificial human-like mind will ever be built outside of novelty, purely because we ventured down the path of binary computing.

        Great for mass calculation, but horrible for the kinds of complex pattern recognition that the human mind excels at.

        The singularity point isn’t going to be the Matrix or Skynet or AM; it’s going to be the first quantum device successfully implanted and integrated into a human mind as a high-speed calculation sidegrade, a “third hemisphere.”

        Someone capable of seamlessly balancing human pattern-recognition abilities and emotional intelligence while also performing near-instant multiplication of matrices 100 entries long in 15 dimensions.

  • @Ilovethebomb@lemm.ee

    I’d love to see more AI providers getting sued for the blatantly wrong information their models spit out.

    • @catloaf@lemm.ee

      I don’t think they should be liable for what their text generator generates. I think people should stop treating it like gospel. At most, they should be liable for misrepresenting what it can do.

      • @RvTV95XBeo@sh.itjust.works

        If these companies are marketing their AI as being able to provide “answers” to your questions, they should be liable for any libel they produce.

        If they market it as “come have our letter generator give you statistically associated collections of letters to your prompt” then I guess they’re in the clear.

      • @TheFriar@lemm.ee

        So you don’t think these massive megacompanies should be held responsible for making disinformation machines? Why not?

      • @Ilovethebomb@lemm.ee

        I want them to have more warnings and disclaimers than a pack of cigarettes. Make sure the users are very much aware they can’t trust anything it says.

      • Stopthatgirl7 (OP)

        If they aren’t liable for what their product does, who is? And do you think they’ll be incentivized to fix their glorified chat boxes if they know they won’t be held responsible for it?

        • @lunarul@lemmy.world

          Their product doesn’t claim to be a source of facts. It’s a generator of human-sounding text. It’s great for that purpose and they’re not liable for people misusing it or not understanding what it does.

          • Stopthatgirl7 (OP)

            So you think these companies should have no liability for the misinformation they spit out. Awesome. That’s gonna end well. Welcome to digital snake oil, y’all.

            • @lunarul@lemmy.world

              I did not say companies should have no liability for publishing misinformation. Of course if someone uses AI to generate misinformation and tries to pass it off as factual information they should be held accountable. But it doesn’t seem like anyone did that in this case. Just a journalist putting his name in the AI to see what it generates. Nobody actually spread those results as fact.

      • @kibiz0r@midwest.social

        If we’ve learned any lesson from the internet, it’s that once something exists it never goes away.

        Sure, people shouldn’t believe the output of their prompt. But if you’re generating that output, a site can use the API to generate a similar output for a similar request. A bot can generate it and post it to social media.

        Yeah, don’t trust the first source you see. But if the search results are slowly being colonized by AI slop, it gets to a point where the signal-to-noise ratio is so poor it stops making sense to only blame the poor discernment of those trying to find the signal.

  • @deegeese@sopuli.xyz

    It’s frustrating that the article treats the problem as if the mistake was including Martin’s name in the data set, and muses that that part isn’t fixable.

    Martin’s name is a natural feature of the data set, but when they should be talking about fixing the AI model to stop hallucinations, or allowing humans to correct them, it seems the only fix is to censor the incorrect AI response, which implies it was saying something true but salacious.

    Most of these problems would go away if AI vendors exposed the reasoning chain instead of treating their bugs as trade secrets.

    • 100

      Just shows that these “AIs” are completely useless at what they are trained for.

      • @catloaf@lemm.ee

        They’re trained for generating text, not factual accuracy. And they’re very good at it.

  • @tiramichu@lemm.ee

    The worrying truth is that we are all going to be subject to these sorts of false correlations and biases, and there will be very little we can do about it.

    You go to buy car insurance and find that your premium has gone up 200% for no reason. Why? Because the AI said so. Maybe someone with your name was in a crash. Maybe you parked overnight at the same GPS location where an accident happened. Who knows what data actually underlies that decision or how it was made, but it was made, and even the insurance company itself doesn’t know how it ended up that way.

    • @catloaf@lemm.ee

      We’re already there, no AI needed. Rates are all generated by computer. Ask your agent why your rate went up and they’ll say “idk computer said so”.

  • @AbouBenAdhem@lemmy.world

    Or… maybe he really is a criminal mastermind, desperately trying to do damage control after AI blew his cover. /s

  • sunzu2

    These are not hallucinations, whatever that is supposed to mean lol.

    The tool is working as intended and getting wrong answers due to how it works. His name frequently had these words around it online, so the AI told the story it was trained on. It doesn’t understand context. I am sure you can also ask it clarifying questions and it will admit it is wrong and correct itself…

    AI🤡

        • chiisana

          The models are not wrong. The models are nothing but statistical models that are really good at predicting the next word likely to follow, based on the prior information given. They don’t have an understanding of the context of the words, just that statistically they’re likely to follow. As such, all LLM outputs are correct to their design.

          The users’ assumption/expectation of the output being factual is what is wrong. “Hallucination” is a fancy word used in an attempt to make users not feel as upset when the output passage doesn’t match their assumptions/expectations.
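
          For the curious, this is roughly what “predicting the next word” looks like in code, using a small open model (GPT-2 via Hugging Face transformers, chosen only as an accessible example; the proprietary models discussed here are far larger but are still next-token predictors):

          ```python
          # The model assigns probabilities to candidate next tokens; nothing in
          # this step checks whether a continuation is factually true.
          import torch
          from transformers import AutoModelForCausalLM, AutoTokenizer

          tokenizer = AutoTokenizer.from_pretrained("gpt2")
          model = AutoModelForCausalLM.from_pretrained("gpt2")

          inputs = tokenizer("The journalist was accused of", return_tensors="pt")
          with torch.no_grad():
              logits = model(**inputs).logits          # (1, seq_len, vocab_size)

          probs = torch.softmax(logits[0, -1], dim=-1) # distribution over next token
          top = torch.topk(probs, k=5)
          for p, token_id in zip(top.values, top.indices):
              print(f"{tokenizer.decode(token_id)!r}: {p:.3f}")
          ```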

          • snooggums

            The users’ assumption/expectation of the output being factual is what is wrong.

            So randomly spewing out bullshit is the actual design goal of AI models? Why does it exist at all?

            • @ApexHunter@lemmy.ml

              They’re supposed to be good at transformation tasks: language translation, creating x in the style of y, replicating a pattern, etc. LLMs are outstandingly good at language transformation tasks.

              Using an LLM as a fact-generating chatbot is actually a misuse. But they were trained on such a large dataset and have such a large number of parameters (175 billion!?) that they perform passably in that role, which is, at its core, to fill in a call-and-response pattern in a conversation.

              At a fundamental level, it will never ever generate factually correct answers 100% of the time. That it generates correct answers more than 50% of the time is actually quite a marvel.

              • chiisana

                If memory serves, 175B parameters is for the GPT-3 model, not even the 3.5 model that caught the world by surprise; and they have not disclosed the parameter counts for GPT-4, 4o, or o1 yet. If memory also serves, GPT-3 was primarily English and considered only a relatively small set of words (I think 50K or something to that effect) as next-token candidates. Now that it can work in multiple languages and is multimodal, the parameter space must be much, much larger.

                The amount of things it can do now is incredible, but our perceived incremental improvements in LLMs will probably slow down (since the pace is fitting to the predicted lines in log space)… until the next big thing (neural nets > expert systems > deep learning > LLMs > ???). Such an exciting time we’re in!

                Edit: found it. Roughly 50K tokens for the input/output embedding in GPT-3. 3Blue1Brown has a really good explanation here for anyone interested: https://youtu.be/wjZofJX0v4M