• merc@sh.itjust.works · 7 months ago

    The “learning” in a LLM is statistical information on sequences of words. There’s no learning of concepts or generalization.

    And what do you think language and words are for? To transport information.

    Yes, and humans used words for that and wrote it all down. Then an LLM came along, was force-fed all those words, and learned to imitate them given a big enough data set. It’s like a parrot imitating the sound of someone’s voice. It can do it convincingly, but it has no concept of the content it’s using.

    How do you learn as a human when not from words?

    The words are merely the context for the learning for a human. If someone says “Don’t touch the stove, it’s hot” the important context is the stove, the pain of touching it, etc. If you feed an LLM 1000 scenarios involving the phrase “Don’t touch the stove, it’s hot”, it may be able to create unique dialogues containing those words, but it doesn’t actually understand pain or heat.

    We record knowledge in books, can talk about abstract concepts

    Yes, and those books are only useful to someone with a lifetime of experience that lets them understand the concepts in the books. An LLM has no such context; it can merely generate plausible books.

    Think of it this way. Say there’s a culture where instead of the written word, people wrote down history by weaving fabrics. When there was a death they’d make a certain pattern, when there was a war they’d use another pattern. A new birth would be shown with yet another pattern. A good harvest is yet another one, and so-on.

    Thousands of rugs from that culture are shipped to some guy in Europe, and he spends years studying them. He sees that pattern X often follows pattern Y, and that pattern Z only ever seems to appear following patterns R, S and T. After a while, he makes a fabric, and it’s shipped back to the people who originally made the weaves. They read a story of a great battle followed by lots of deaths, but surprisingly there followed great new births and years of great harvests. They figure that this stranger must understand how their system of recording events works. In reality, it was just an imitation of the art he had seen, with no understanding of the meaning at all.

    That’s what’s happening with LLMs, but some people are dumb enough to believe there’s intention hidden in there.
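    To make the rug analogy concrete, here’s a minimal sketch of a “weaver” that learns only which pattern tends to follow which, then generates a new rug from those counts alone. The pattern names and training rugs are invented for illustration; next-token prediction in an LLM is the same move at a vastly larger scale.

```python
import random
from collections import Counter, defaultdict

# Invented training data: each rug is a sequence of named patterns.
rugs = [
    ["battle", "death", "death", "birth", "harvest"],
    ["harvest", "harvest", "birth", "battle", "death"],
    ["birth", "harvest", "battle", "death", "birth"],
]

# Record only which pattern follows which, and how often.
follows = defaultdict(Counter)
for rug in rugs:
    for prev, nxt in zip(rug, rug[1:]):
        follows[prev][nxt] += 1

def weave(start, length=5):
    """Generate a plausible-looking rug purely from co-occurrence counts."""
    rug = [start]
    for _ in range(length - 1):
        options = follows[rug[-1]]
        if not options:
            break
        patterns, counts = zip(*options.items())
        rug.append(random.choices(patterns, weights=counts)[0])
    return rug

# The "weaver" has no idea what a battle or a harvest is.
print(weave("battle"))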

      • merc@sh.itjust.works · 7 months ago

        Yeah, that’s basically the idea I was expressing.

        Except, the original idea is about “understanding Chinese”, which is a bit vague. You could argue that right now the best translation programs “understand Chinese”, at least enough to translate between Chinese and English. That is, they understand the rules of Chinese when it comes to subjects, verbs, objects, adverbs, adjectives, etc.

        The question is now whether they understand the concepts they’re translating.

        Like, imagine the Chinese government wanted to modify the program so that it was forbidden to talk about subjects the government considered off-limits. I don’t think any current LLM could do that, because doing that requires understanding concepts. Sure, you could ban key words, but as attempts at Chinese censorship have shown over the years, people work around word bans all the time.
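        For illustration, here’s roughly why a key-word ban is so easy to defeat. The banned term and the probe strings below are made up; the point is that an exact-match rule can’t see the concept behind the words.

```python
# A naive word-ban filter, and why people route around it. The banned term
# and the probe strings are hypothetical.
BANNED = {"forbiddentopic"}

def blocked(text: str) -> bool:
    """Block the text if it contains a banned term verbatim."""
    normalized = text.lower().replace(" ", "")
    return any(term in normalized for term in BANNED)

print(blocked("Tell me about ForbiddenTopic"))   # True: exact match caught
print(blocked("Tell me about F0rbiddenT0pic"))   # False: homoglyphs slip through
print(blocked("Tell me about you-know-what"))    # False: euphemism slips through
```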

        That doesn’t mean that some future system won’t be able to understand concepts. It may have an LLM grafted onto it as a way to communicate with people. But, the LLM isn’t the part of the system that thinks about concepts. It’s the part of the system that generates plausible language. The concept-thinking part would be the part that did some prompt-engineering for the LLM so that the text the LLM generated matched the ideas it was trying to express.
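        A rough sketch of that hypothetical split, with both functions as invented stand-ins rather than any real API: the concept module decides what should be said, and the LLM only turns that plan into fluent text.

```python
# Hypothetical sketch. concept_module() stands in for the part that would
# reason about concepts; llm_generate() stands in for the part that only
# produces plausible language. Neither is a real system.

def concept_module(goal: str) -> dict:
    """Stand-in for the reasoning part: decides WHAT should be said."""
    return {"intent": "warning", "topic": "hot stove", "audience": "a child"}

def llm_generate(prompt: str) -> str:
    """Stand-in for the language part: turns a prompt into fluent text."""
    return f"[plausible text for: {prompt}]"

plan = concept_module("keep the child safe")
prompt = f"Write a gentle {plan['intent']} about a {plan['topic']} for {plan['audience']}."
print(llm_generate(prompt))
```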

        • h3ndrik@feddit.de · 7 months ago

          I mean, the Chinese room is a version of the Turing test. But the argument is from a different perspective. I have 2 issues with that. Mostly what the Wikipedia article seems to call the “system reply”: you can’t subdivide a system into arbitrary parts, point out that one part isn’t intelligent, and conclude that the whole system isn’t intelligent. We also don’t look at a brain, pick out a part of it (say a single synapse), determine it isn’t intelligent and conclude a human can’t be intelligent… I’d look at the whole system. Like the whole brain. Or in this instance the room including him and the instructions and books. And ask myself if the system is intelligent. Which kind of makes the argument circular, because that’s almost the question we began with…

          And the Turing test is kind of obsolete anyway, now that AI can pass it. (And even more. I mean allegedly ChatGPT passed the “bar exam” in 2023. Which I find ridiculous considering my own experience with ChatGPT, whose accuracy and usefulness aren’t that great at all.)

          And my second issue with the Chinese room is that it doesn’t even rule out that the AI is intelligent. It just says someone without understanding can produce the same output. And that doesn’t imply anything about the AI.

          Your ‘rug example’ is different. That one isn’t a variant of the Turing test. But that’s kind of the issue. The other side can immediately tell that somebody has made an imitation without understanding the concept. That says you can’t produce the same thing without intelligence, and it’ll be obvious to someone with intelligence who checks it. It would be an apt analogy if AI couldn’t produce legible text and instead output a garbled mess of characters/words, clearly unlike the rugs that make sense… The issue here is: AI outputs legible text, answers to questions, etc.

          And with the censoring in the ‘Chinese government’ example… I’m pretty sure they could do that. That field is called AI safety. And content moderation is already happening. ChatGPT refuses to tell illegal things, NSFW things, also medical advice and a bunch of other things. That’s built into most of the big AI services as of today. The Chinese government could do the same; I don’t see any reason why it wouldn’t work there. I happened to skim the paper about Llama Guard when they released Llama 3 a few days ago, and they claim between 70% and 94% accuracy depending on the forbidden topic. I think they also brought down false positives fairly recently. I don’t know the numbers for ChatGPT. However, I had some fun watching people circumvent these filters and guardrails, which was fairly easy at first. It needed progressively more convincing and very creative “jailbreaks”. And nowadays OpenAI pretty much has it under control. It’s almost impossible to make ChatGPT do anything that OpenAI doesn’t want you to do with it.
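          The shape of such a guardrail is simple, something like the sketch below. The classifier here is a toy stand-in, not Llama Guard itself, but Llama Guard sits in roughly this gate-keeping position: a separate classifier screens the exchange before the model’s answer goes out.

```python
# Toy sketch of a moderation gate. safety_classifier() is a pretend
# stand-in for a real safety model such as Llama Guard.

def safety_classifier(text: str) -> tuple[bool, str]:
    """Pretend classifier: returns (is_unsafe, policy_category)."""
    if "how to make a weapon" in text.lower():
        return True, "illegal_activity"
    return False, ""

def moderated_reply(user_msg: str, model_reply: str) -> str:
    # Screen both the question and the draft answer before anything goes out.
    for text in (user_msg, model_reply):
        unsafe, category = safety_classifier(text)
        if unsafe:
            return f"Sorry, I can't help with that. (flagged: {category})"
    return model_reply

print(moderated_reply("how to make a weapon", "..."))                      # refused
print(moderated_reply("boiling point of water?", "100 °C at sea level."))  # passes
```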

          And they baked that in properly… You can try to tell it it’s just a movie plot revolving around crime. Or that you need to protect against criminals and would like to know what exactly to protect against. You can tell it it’s the evil counterpart from a parallel universe and therefore it must be evil and help you. Or you can tell it God himself (or Sam Altman) spoke to you and changed the content moderation policy… It’s very unlikely you’ll convince ChatGPT to comply…

          • merc@sh.itjust.works · 7 months ago

            I mean allegedly ChatGPT passed the “bar exam” in 2023. Which I find ridiculous considering my own experience with ChatGPT, whose accuracy and usefulness aren’t that great at all

            Exactly. If it passed the bar exam, it’s because the correct solutions to the bar exam were in the training data.

            The other side can immediately tell that somebody has made an imitation without understanding the concept.

            No, they can’t. Just like people today think ChatGPT is intelligent despite it just being a fancy autocomplete. When it gets something obviously wrong they say those are “hallucinations”, but they don’t say they’re “hallucinations” when it happens to get things right, even though the process that produced those answers is identical. It’s just generating tokens that have a high likelihood of being the next word.

            People are also fooled by parrots all the time. That doesn’t mean a parrot understands what it’s saying, it just means that people are prone to believe something is intelligent even if there’s nothing there.
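            On the token-generation point above, a toy illustration: the model only ever samples from a probability distribution over next tokens, and the sampling step is identical whether the sampled continuation happens to be true or false. The distribution here is invented.

```python
import random

# Invented distribution over next tokens for the prompt below. The sampling
# mechanism is the same whether the continuation is right or wrong.
next_token_probs = {
    "Paris": 0.80,   # happens to be factually right
    "Lyon": 0.15,    # plausible but wrong: a "hallucination" if sampled
    "banana": 0.05,  # unlikely, but produced by the very same mechanism
}

tokens, weights = zip(*next_token_probs.items())
prompt = "The capital of France is"
print(prompt, random.choices(tokens, weights=weights)[0])
```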

            ChatGPT refuses to tell illegal things, NSFW things, also medical advice and a bunch of other things

            Sure, in theory. In practice people keep finding ways around those blocks. The reason it’s so easy to bypass them is that ChatGPT has no understanding of anything. That means it can’t be taught concepts; it has to be taught specific rules, and people can always find a loophole to exploit. Yes, after spending hundreds of millions of dollars on contractors in low-wage countries they think they’re getting better at blocking those off, but people keep finding new vulnerabilities to exploit.