

Only Bayes Can Judge Me
Ah yes Basilisk’s Roko, the thought experiment where we simulate infinite AIs so that we can hurl insults at them
Author works on ML for DeepMind but doesn’t seem to be an out-and-out promptfondler.
Quote from this post:
I found myself in a prolonged discussion with Mark Bishop, who was quite pessimistic about the capabilities of large language models. Drawing on his expertise in theory of mind, he adamantly claimed that LLMs do not understand anything – at least not according to a proper interpretation of the word “understand”. While Mark has clearly spent much more time thinking about this issue than I have, I found his remarks overly dismissive, and we did not see eye-to-eye.
Based on this I’d say the author is LLM-pilled at least.
However, a fruitful outcome of our discussion was his suggestion that I read John Searle’s original Chinese Room argument paper. Though I was familiar with the argument from its prominence in scientific and philosophical circles, I had never read the paper myself. I’m glad to have now done so, and I can report that it has profoundly influenced my thinking – but the details of that will be for another debate or blog post.
Best case scenario is that the author comes around to the stochastic parrot model of LLMs.
E: also from that post, rearranged slightly for readability here. (the […]* parts are swapped in the original)
My debate panel this year was a fiery one, a stark contrast to the tame one I had in 2023. I was joined by Jane Teller and Yanis Varoufakis to discuss the role of technology in autonomy and privacy. [[I was] the lone voice from a large tech company.]* I was interrupted by Yanis in my opening remarks, with claps from the audience raining down to reinforce his dissenting message. It was a largely tech-fearful gathering, with the other panelists and audience members concerned about the data harvesting performed by Big Tech and their ability to influence our decision-making. […]* I was perpetually in defense mode and received none of the applause that the others did.
So the author is also tech-brained rather than “tech-fearful”.
It’s all true, I made millions of dollars by using ChatGPT to place bets on the ponies. I started with billions
Ah yes let’s use AI to get rid of the drudgery and toil so humanity can do the most enjoyable activity of writing OKRs
Announcing my sneerclub follow up to MAPLE: “Man, All These Losers Are Bonkers” aka MATLAB
sick reference. I don’t even know how I knew this.
Ah thanks! On mobile the main page gets redirected to spam, but the site is navigable from the archive.
has only a few minor credits[…], he always came off as a bit of a weirdly combative nerd who thought he was right and the smartest in the room and didn’t get that people didn’t agree with his definitions/assumptions. He is a big idea guy for example.
gosh i’m sure glad that these kinds of people disappeared from the internet /s
Never read AMD (and shan’t). The author’s site appears to be live.
8BF’s site has been taken over by bots, and I can’t be bothered to find an alternate source. Dead internet go brrrrr. Otherwise, the creator, Brian Clevinger, appears to have had a long career in comics, and has written many things for Marvel.
obligatory reminder that “dath ilan” is an anagram of “thailand” and I still don’t know why. Working theory is Yud wants to recolonise Thailand
also: the int-maxxing and overinflated ego of it all reminds me of Red Mage from 8-Bit Theater, a webcomic based on Final Fantasy about the LW (Light Warriors) that ran from 2001 to 2010
E: thinking back on it, reading this webcomic and seeing this character probably in some part inoculated me against people like yud without me knowing
calling this shit “AI”
I blame Paul Simon /s
ah no, they are in a totalitarian state ruled by the literal forces of hell, the kind of place that totally praises merit-based upward mobility.
Hey, write what you know
Anyone found with a non-cube platonic solid will be lockerized indefinitely
Skip the unobtainium I say, let’s just implement the human instrumentality project and tangify us all into a collective goo.
DAE remember when neel armstrong invented nasa and said his famous quote “That’s one small step for man, ok let’s fill this motherfucker up with GPUs”
Previous thread
E: we didn’t fucking know
We’ve definitely sneered at this before; I do not recall if it was known that KP was the co-writer in this weird forum RP fic
E: googling “lintamande kelsey piper” and looking at a reddit post digs up the AO3 account, inactive since 2018. A total of just shy of 130k words: a little Marvel stuff, most of it LOTR-based, and some of it tagged “Vladmir Putin/Sauron”. How fun!
No judgement from me, tbh. Fanfic be fanficking. I aint gonna read that shit tho.
Everyone agrees that the release of GPT-5 was botched. Everyone can also agree that the direct jump from GPT-4o and o3 to GPT-5 was not of similar size to the jump from GPT-3 to GPT-4, that it was not the direct quantum leap we were hoping for, and that the release was overhyped quite a bit.
a quantum leap might actually be accurate
Belligerents