• MotoAsh@lemmy.world
    11 days ago

    “…leans too heavily on its training data…” No, it IS its training data. Full stop. It doesn’t know the documentation as a separate entity. It doesn’t reason whatsoever about where to get its data from. It just shits out the closest approximation of an “acceptable” answer from the training data. Period. It doesn’t think. It doesn’t reason. It doesn’t decide where to pull an answer from. It just shits it out verbatim.

    I swear… so many people anthropomorphize “AI” it’s ridiculous. It does not think and it does not reason. Ever. Thinking it does is projecting human attributes onto it, which is anthropomorphizing it, which is lying to yourself about it.

    • okwhateverdude@lemmy.world
      11 days ago

      Ackually 🤓, Gemini Pro and other similar models are basically a loop over some metaprompts with tool usage, including search. It will actually reference/cite documentation if given explicit instructions. You’re right, the anthropomorphization is troubling. That said, the simulacrum presented DOES follow directions, and its behavior (meaning the complete system of LLM + looped prompts) can be interpreted as having some kind of agency. We’re on the same side, but you’re sorely misinformed, friend.
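
      To make that concrete, here’s a minimal sketch of the kind of loop I mean: the model itself only maps text to text, and a wrapper program decides when to run a tool like search and feeds the result back into the next prompt. This is not any vendor’s actual implementation; call_llm() and web_search() are hypothetical placeholders, not a real Gemini API.

      ```python
      # Toy agent loop: the model proposes either a search or a final answer;
      # the surrounding code executes the tool and loops. Whatever "agency"
      # there is lives in this wrapper, not in the model weights.
      # call_llm() and web_search() are hypothetical stubs, not a real API.

      def call_llm(prompt: str) -> str:
          """Placeholder for a completion call to some hosted model."""
          raise NotImplementedError

      def web_search(query: str) -> str:
          """Placeholder for a search tool returning result snippets."""
          raise NotImplementedError

      def answer(question: str, max_steps: int = 5) -> str:
          notes = ""
          for _ in range(max_steps):
              reply = call_llm(
                  "Answer the question. If you need more information, reply with\n"
                  "'SEARCH: <query>'. Otherwise reply with 'FINAL: <answer>'.\n\n"
                  f"Question: {question}\n\nNotes so far:\n{notes}"
              )
              if reply.startswith("SEARCH:"):
                  # The wrapper, not the model, actually runs the search.
                  query = reply[len("SEARCH:"):].strip()
                  notes += f"\n[search: {query}]\n{web_search(query)}\n"
              else:
                  return reply.removeprefix("FINAL:").strip()
          return "No final answer after max_steps tool calls."
      ```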

      • MotoAsh@lemmy.world
        11 days ago

        I’m not misinformed. You’re still trying to call a groomed LLM something that reasons when it literally is not doing that in any meaningful capacity.