Tea@programming.dev to Technology@lemmy.world · English · 5 months ago

Reasoning models don't always say what they think. (external link: www.anthropic.com)

6 comments · cross-posted to: technology@lemmy.zip
Gibibit@lemmy.world · English · 5 months ago

So chain of thought is an awful experiment that doesn't let you know how an AI reasons. Instead of admitting this, AI researchers anthropomorphize yet another test result and turn it into the model hiding its thought process from you. Whatever.