

I’m sceptical about AI programming Pacman, but it’s fairly obvious that the New York Times is leaning into AI journalism …
Yeah that’s fair
Hoo boy, that’s pretty light on details about scale, and there are a few buzzwords in there too. I hope they can develop it enough to make it viable at large capacities.
“The days of stupidity are over!” says the man defunding the education system …
“If you feel bad about the last holocaust, maybe repeating it will help!”
I like that they gloss over the whole “transmitted to earth” portion of the system.
It’s literally a guess machine …
So dragon slayer is a job now?
I didn’t think I’d ever have a favourite map but here we are.
I felt like this guy was doing the right thing but the amount of visual pollution he must have enabled in his career is wild. I hope it keeps him up at night.
Especially given the suspicious actions around the last US election …
Australia does too
Hot damn Woody Allen is a fucking moron.
What kind of a maniac wants to grow palm trees?!
Weird, ICP has always been slop of one form or another, seems odd to get mad about it now …
Err haven’t they already killed 400k people?
Nah, their definition is the classical one: “how confident are you that you got the answer right?”. If you read the article, they asked a bunch of people and 4 LLMs a bunch of random questions, then asked each respondent how confident they were that their answer was correct, and then checked the answer. The LLMs initially lined up with people (overconfident), but when they iterated, shared results, and asked further questions, the LLMs’ confidence increased while people’s tended to decrease to mitigate the overconfidence.
But the study still assumes enough intelligence to review past results and adjust accordingly, while disregarding the fact that an AI isn’t intelligent. It’s a word-prediction model based on a data set of written text tending to infinity. It’s not assessing the validity of its results; it’s predicting what the answer is based on all previous inputs. The whole study is irrelevant.
I guess, but it’s like proving your phone’s predictive text has confidence in its suggestions regardless of accuracy. Confidence is not an attribute of a math function; they’re attributing intelligence to a predictive model.
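To make that concrete, here’s a toy sketch (my own example, nothing to do with the study) of where a model’s “confidence” actually comes from. It’s just the softmax of some raw scores: pure arithmetic that produces a high number whether or not the top-scored answer is right.

```python
import math

def softmax(logits):
    """Turn raw scores into probabilities -- pure arithmetic, no judgment of correctness."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# A hypothetical predictor's scores for three candidate answers.
# It "strongly prefers" answer 0, whether or not answer 0 is actually correct.
logits = [5.0, 1.0, 0.5]
probs = softmax(logits)
confidence = max(probs)

print(f"predicted answer: {probs.index(confidence)}, confidence: {confidence:.2f}")
```

The ~97% “confidence” here is a property of the score gap, not of any check against reality, which is the point being made above.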
People here are gonna be real mad if they can’t say “but it’s a dry heat” when talking about how hot it is …