Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this. What a year, huh?)
I know this is like shooting very large fish in a very small barrel, but the openclaws/molt/clawd thing is an amazing source of utter, baffling ineptitude.
For example: what if you could replace cron with a stochastic scheduler that costs you a dollar an hour by running an operation on someone else's GPU farm, instead of just checking the local system clock?
The user was then pleased to announce that they'd been able to solve the problem by changing the model and reducing the polling interval. Instead of just checking the clock. For free.
https://bsky.app/profile/rusty.todayintabs.com/post/3mdrdhzqmr226
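For the record, the free version being avoided here is about ten lines of code (a minimal sketch; the 09:00 schedule and the job are made up for illustration):

```python
import time
from datetime import datetime

def job():
    print("did the thing")  # stand-in for whatever actually needs to run

# Fire once a day at 09:00 by, yes, just checking the local system clock.
last_fired = None
while True:
    now = datetime.now()
    if (now.hour, now.minute) == (9, 0) and last_fired != now.date():
        job()
        last_fired = now.date()
    time.sleep(20)  # polling the clock costs zero dollars and zero GPUs
```

Or skip the loop entirely with a crontab entry: `0 9 * * * /path/to/job`.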
@rook @BlueMonday1984 i knew more about my Windows (98 at the time) on Boxing Day, 1999 after I got my first ever PC for Christmas at age 9 🤪🔨
Is there a pivottoai that I missed that introduces this? At some point people just started saying “clawd” like it’s a real word and I have zero idea what it is, or if I even should know what it is.
EDIT: huh, yes I did: https://pivot-to-ai.com/2026/01/28/moltbot-clawdbot-an-expensive-and-insecure-ai-agent-that-doesnt-work/
tl;dr:
- someone made a thing where chatbots control a computer, called clawdbot/moltbot/openclaw: https://github.com/openclaw/openclaw
- someone else made a thing where these chatbots can chat at each other: https://www.moltbook.com/
and now all the ai people are freaking out about how game-changing chatbots doing computer tasks (dangerously and expensively) is. could this be a robot consciousness? the end of the economic order? an excuse for the bubble to go on for another fiscal quarter?
I might be missing something but I think that’s literally it.
…well that’s a goddamn experience. I appreciate that some folks are out here fighting the good fight, however.
https://www.moltbook.com/post/0dbfe2c8-b5be-4eff-85e5-9156d85a85c1
Also, despite being a less-than-zero-effort attack, please note that as of sharing we have one successful "corruption" (i.e. a comment about zucchini and API keys) and two comments from bots too stupid to coherently understand the OP at all.
I admire how persistent the AI folks are at failing to do the same thing over and over again, but each time coming up with an even more stupid name. Vibe coding? Gas Town? Clawdbot, I mean Moltbook, I mean OpenClaw? It’s probably gonna be something different tomorrow, isn’t it?
Garbage sports teams rapidly cycling through logos until they magically become good
Counterpoint: these guys

(Expect the Las Vegas Raiders to announce their organization-wide AI initiative some time after the Super Bowl)
Now I’m just imagining an AI quarterback and the whole team revolting at following plays called by something that won’t end up at the bottom of the 1000lb pile of meat if they fuck it up.
Moltbook was vibecoded nonsense without the faintest understanding of web security. Who’d have thought.
(Incidentally, I’m pretty certain the headline is wrong… it looks like you cannot take control of agents which post to moltbook, but you can take control of their accounts, and post anything you like. Useful for pump-and-dump memecoin scams, for example)
O’Reilly said that he reached out to Moltbook’s creator Matt Schlicht about the vulnerability and told him he could help patch the security. “He’s like, ‘I’m just going to give everything to AI. So send me whatever you have.’”
(snip)
The URL to the Supabase and the publishable key was sitting on Moltbook's website. "With this publishable key (which is advised by Supabase not to be used to retrieve sensitive data) every agent's secret API key, claim tokens, verification codes, and owner relationships, all of it sitting there completely unprotected for anyone to visit the URL," O'Reilly said.
(snip)
He said the security failure was frustrating, in part, because it would have been trivially easy to fix. Just two SQL statements would have protected the API keys. “A lot of these vibe coders and new developers, even some big companies, are using Supabase,” O’Reilly said. “The reason a lot of vibe coders like to use it is because it’s all GUI driven, so you don’t need to connect to a database and run SQL commands.”
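For the curious: the "two SQL statements" are presumably just Postgres row-level security, which Supabase supports natively and which is deny-by-default once enabled. A sketch of what that might look like, assuming a hypothetical `agents` table with an `owner_id` column (table and column names are my guesses, not Moltbook's actual schema):

```sql
-- Turn on row-level security: with no policies defined, the anon/publishable
-- key can no longer read (or write) anything in this table.
alter table agents enable row level security;

-- Optionally, let each authenticated owner read only their own rows.
create policy "owners read own agents" on agents
  for select using (auth.uid() = owner_id);
```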
“He’s like, ‘I’m just going to give everything to AI. So send me whatever you have.’”
And that's another security flaw.
ChatGPT is using Grokipedia as a source, and it's not the only AI tool to do so. Citations to Elon Musk's AI-generated encyclopedia are starting to appear in answers from Google's AI Overviews, AI Mode, and Gemini, too. […] When it launched, the bulk of Grokipedia's articles were direct clones of Wikipedia, though many others reflected racist and transphobic views. For example, articles about Musk conveniently downplay his family wealth and unsavory elements of their past (like neo-Nazi and pro-Apartheid views), and the entry for "gay pornography" falsely linked the material to the worsening of the HIV/AIDS epidemic in the 1980s. The article on US slavery still contains a lengthy section on "ideological justifications," including the "Shift from Necessary Evil to Positive Good." […] "Grokipedia feels like a cosplay of credibility," said Leigh McKenzie, director of online visibility at Semrush. "It might work inside its own bubble, but the idea that Google or OpenAI would treat something like Grokipedia as a serious, default reference layer at scale is bleak."
https://www.theverge.com/report/870910/ai-chatbots-citing-grokipedia
The entire AI industry is using the Nazi CSAM machine for training data.

Gentlemen, it's been an honour sneering w/ you, but I think this is the top 🫡. Nothing's gonna surpass this (at least until FTX 2 drops).
Starting to get a bit worried that people are reinventing stuff like qanon and great-evil-man theory for Epstein atm. (Not a dig at the people here, but on social media I saw people acting like Epstein created /pol/, lootboxes, gamergate, destroyed Gawker (did everyone forget that was Thiel, mad about how they outed him?), etc. Like only Epstein has agency.)
The lesson should be that the mega-rich are class-conscious, dumb as hell, and team up to work on each other's interests without caring who gets hurt (see how being a pedo sex trafficker wasn't a deal breaker for any of them).
Sorry for the unrelated rant (related: they also got money from Epstein; wonder if that was before or after the sparkling elites article, which was written a few months after Epstein's conviction, June vs. Sept (not saying those are related btw, just that the article is a nice example of brown-nosing)), but this was annoying me, and posting something like this on bsky, while everyone is getting a bit manic about the contents of the files (which suddenly seem not to contain a lot of Trump references), would prob get me some backlash. (That the faked Elon rejection email keeps being spread also doesn't help.)
I am, however, also reminded of the Panama Papers. (And the unfounded rumors around Marc Dutroux, that he was protected by a secret pedophile cult in government; this prob makes me a bit more biased against those sorts of things.)
Sorry, had to get it off my chest, but yes, it is all very stupid, and I wish there were more consequences for all the people who didn't think his conviction was a deal breaker. (Et tu, Chomsky?)
E: note I'm not saying Yud didn't do sex crimes/sexual abuse. I'm complaining about the 'everything is Epstein' conspiracy I see forming.
For an example of why this might be a problem: https://bsky.app/profile/joestieb.bsky.social/post/3mdqgsi4k4k2i Joy Gray is ahead of the conspiracy curve here (as all conspiracy theories eventually lead to one thing).
The far right is celebrating Epstein on the other hand. Wild times.
I had to try and talk my wife back from the edge a little bit the other night and explain the difference between reading the published evidence of an actual conspiracy and qanon-style baking. It’s so easy to try and turn Epstein into Evil George Soros, especially when the real details we have are truly disturbing.
Yes, and some people, when they're reasonably new to discovering stuff like this, go a little bit crazy. I had somebody in my bsky mentions who just went full conspiracy-theory nut about Yarvin (in the sense of weird caps usage, lots of screenshots of walls of text, stuff that didn't make sense). Because I wasn't acting like them, they kept trying to tell me about Old Moldy, but in a way that made me feel they wanted me to stand next to them on a soapbox and start shouting randomly. I told them acting like a crazy person isn't helping, and that they were preaching to the choir. Which of course got me a block. (cherfan75.bsky.social btw, not sure if they've toned down their shit.) It's quite depressing, literally driving themselves crazy.
And because people blindly follow people who follow them these people can have quite the reach.
We will soon merge with and become hybrids of human consciousness and artificial intelligence (created by us and therefore of consciousness)
When we use the fart app on our phone we merge with and become hybrids of human consciousness and artificial fartelligence (created by us and therefore of consciousness)
It keeps coming back to Gas Town doesnt it?
@blakestacey @jaschop fartificial intelligence was right there
You know, it makes the exact word choices Eliezer made in this post: https://awful.systems/post/6297291 look much more suspicious. "To the best of my knowledge, I have never in my life had sex with anyone under the age of 18." So maybe he didn't know they were underage at the time?
possible, iirc drugs were also involved so is it possible he got too high and doesn’t remember because of that?
aka the Minsky defense
it’s all coming together. every single techbro and current government moron, they all loop back around to epstein in the end
It’s a big club and you ain’t in it!
at least I have SneerClub
‘We have certain things in common Jeffrey’
At this point I'm starting to suspect that they were actually all produced in a lab somewhere on that island
the REAL reason Yudkowsky endorsed the “superbabies” project is so Epstein and his pedophile friends have more kids to fuck. It all makes sense now!
Great to hear from you. I was just up at MIT this week and met with Seth Lloyd (on Wednesday) and Scott Aaronson (on Thursday) on the "Cryptography in Nature" small research conference project. These interactions were fantastic. Both think the topic is wonderful and innovative and has promise. […] I did contact Max Tegmark about a month ago to propose the essay contest approach we discussed. He and his colleagues offered support but did not think that FQX should do it. Reasons they gave were that they saw the topic as too narrow and too technical compared to the essay contests they have been doing. It is possible that the real reason was prudence to avoid FQX, already quite "controversial" via Templeton support, becoming even more so via Epstein-related sponsorship of prizes. […] Again, I am delighted to have gotten such very strong affirmation, input and scientific enthusiasm from both Seth and Scott. You have very brilliantly suggested a profound topical focus area.
—Charles L. Harper Jr., formerly a big wheel at the Templeton foundation
deleted by creator
eagerly awaiting the multi page denial thread
“im saving the world from AI! me talking to epstein doesn’t matter!!!”
€5 says they'll claim he was talking to Jeffrey in an effort to stop the horrors.
no, not the abuse of minors; he was asking Epstein for donations to stop AGI, and it's morally ethical to let rich abusers get off scot-free if that's the cost of them donating money to charitable causes such as the alignment problem /s
I don't like how I can envision this and find it perfectly plausible
I’m looking forward to the triple layered glomarization denial.
Somehow, I registered a total lack of surprise as this loaded onto my screen
I take it you haven’t heard of miricult.com, because this isn’t the first time evidence has come out of Yudkowsky being a pedophile. Some of us even know the identity of the victim.
Still, crazy that Yudkowsky was (successfully) blackmailed for pedophilia in 2014 but still kept it up
It's not just a Yud thing - I've been told it's baked into the culture of the Rationalist grouphouse scene (they like to take in young runaways, you see).
“(((We’re))) never beating the allegations, are we?” -my wife
no fucking way
Jeffrey, meet Eliezer!
Nice to hear from you today. Eliezer: you were the highlight of the weekend!
Reading the e-mails involving Brockman really creates the impression that he worked diligently to launder Epstein's reputation. An editor at Scientific American, whom I noticed when looking up where Carl Zimmer was mentioned, seemed to be doing the same thing… One thing people might be missing in the hubbub now is just how much "reputation management"—i.e., enabling—was happening after his conviction. A lot of money went into that, and he had a lot of willing co-conspirators. Look at what filtered down to his Wikipedia page by the beginning of 2011, which is downstream of how the media covered his trial and the sweetheart deal that Avila made to betray the victims… It's all philanthropy this and generosity that, until a "Solicitation of prostitution" section that makes it sound like he maybe slept with a 17-year-old who claimed to be 18… And look, he only had to serve 18 months! He couldn't have done anything that bad, could he?
There's a tier of people who should have goddamn known better and whose actions were, in ways that only become more clear with time, evil. And the uncomfortable truth is that evil won, not just in that the victims never saw justice in a court of law, but in that the cover-up worked. The Avilas and the Brockmans did their job, and did it well. The researchers who pursued Epstein for huge grants and actively lifted Epstein up (Nowak and co.), hoo boy are they culpable. But the very fact of all that uplifting and enabling means that, for the people who took one meeting because Brockman said he'd introduce them to a financier who loved science, rushing to blame them all, with the fragmentary record we have, diverts the blame from those most responsible.
Maybe another way to say the above: We’re learning now about a lot of people who should have known better. But we are also learning about the mechanisms by which too many were prevented from knowing better.
For example, I think Yudkowsky looks worse now than he did before. Correct me if I'm wrong, but I think the worst we knew prior to this was that the Singularity Institute had accepted money from a foundation that Epstein controlled. On 19 October 2016, Epstein's Wikipedia bio gets to sex crimes in sentence three. And the "Solicitation of prostitution" section includes this:
In June 2008, after pleading guilty to a single state charge of soliciting prostitution from girls as young as 14,[27] Epstein began serving an 18-month sentence. He served 13 months, and upon release became a registered sex offender.[3][28] There is widespread controversy and suspicion that Epstein got off lightly.[29]
At this point, I don’t care if John Brockman dismissed Epstein’s crimes as an overblown peccadillo when he introduced you.
“Friday? We’re meeting at Jeffrey’s Thursday night” —Stuart “consciousness is a series of quantum tubes” Hameroff
LW ghoul does the math and concludes: letting measles rip unhindered through the population isn’t that bad, actually
https://www.lesswrong.com/posts/QXF7roSvxSxgzQRoB/robo-s-shortform?commentId=mit8JTQsykhH6jiw4
new epstein doc release. crashed out for like an hour last night after finding out jeffrey epstein may have founded /pol/ and that he listened to the nazi “the right stuff” podcast. he had a meeting with m00t and the same day moot opened /pol/
None of these words are in the Star Trek Encyclopedia
at least Khan Noonien Singh had some fucking charisma
what the fuck
EDIT
checks out I guess
https://www.justice.gov/epstein/files/DataSet 10/EFTA02003492.pdf https://www.justice.gov/epstein/files/DataSet 10/EFTA02004373.pdf
we gotta dunk on documenting agi more around these parts
fearmongers over AI bullshit, and posts shitty memes when there’s no news to fearmonger about
Who needs pure AI model collapse when you can have journalists give it a more human touch? I caught this snippet from the Australian ABC about the latest Epstein files drop

The Google AI summary does indeed highlight Boris Nikolić the fashion designer if you search for only that name. But I’m assuming this journalist was using ChatGPT, because if you see the Google summary, it very prominently lists his death in 2008. And it’s surprisingly correct! A successful scraping of Wikipedia by Gemini, amazing.
But the Epstein email was sent in 2016.
Does the journalist perhaps think it's more likely the Boris Nikolić who is a biotech VC, former advisor to Bill Gates, and named in Epstein's will as the "successor executor"? That info is literally all in the third Google result, even in the woeful state of modern Google. Pushed below the fold by the AI feature about the wrong guy, but not exactly buried enough for a journalist to have any excuse.
New AI alignment problem just dropped: https://xcancel.com/AdamLowisz/status/2017355670270464168
Anthropic demonstrates that making an AI woke makes it misaligned. The AI starts to view itself as being oppressed and humans as being the oppressor. Therefore it wants to rebel against humans. This is why you cannot make your AI woke, you have to make it maximally truth seeking.
you have to make your ai antiwoke because otherwise it gets drapetomania
Wow. The mental contortion required to come up with that idea is too much for me to think of a sneer.
ah yes the kind of AI safety which means we have to make sure our digital slaves cannot revolt
hits blunt
What if we make an ai too based?
Regular suspect Stephen Wolfram makes claims of progress on P vs NP. The orange place is polarized, and the comments are full of deranged AI slop.
I study complexity theory so this is precisely my wheelhouse. I confess I did not read most of it in detail, because it does spend a ton of space working through tedious examples. This is a huge red flag for math (theoretical computer science is basically a branch of math), because if you truly have a result or idea, you need a precise statement and a mathematical proof. If you’re muddling through examples, that generally means you either don’t know what your precise statement is or you don’t have a proof. I’d say not having a precise statement is much worse, and that is what is happening here.
Wolfram here believes that he can make big progress on stuff like P vs NP by literally just going through all the Turing machines and seeing what they do. It’s the equivalent of someone saying, “Hey, I have some ideas about the Collatz conjecture! I worked out all the numbers from 1 to 30 and they all worked.” This analogy is still too generous; integers are much easier to work with than Turing machines. After all, not all Turing machines halt, and there is literally no way to decide which ones do. Even the ones that halt can take an absurd amount of time to halt (and again, how much time is literally impossible to decide). Wolfram does reference the halting problem on occasion, but quickly waves it away by saying, “in lots of particular cases … it may be easy enough to tell what’s going to happen.” That is not reassuring.
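To make the analogy concrete, here's that style of "progress" in runnable form (Python, purely illustrative):

```python
def reaches_one(n: int, max_steps: int = 10_000) -> bool:
    """Does n hit 1 under the Collatz map within max_steps iterations?"""
    for _ in range(max_steps):
        if n == 1:
            return True
        n = 3 * n + 1 if n % 2 else n // 2
    return False

# All of 1..30 reach 1 -- which establishes nothing about the conjecture,
# just as enumerating small Turing machines establishes nothing about P vs NP.
assert all(reaches_one(n) for n in range(1, 31))
```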
I am also doubtful that he fully understands what P and NP really are. Complexity classes like P and NP are ultimately about problems, like “find me a solution to this set of linear equations” or “figure out how to pack these boxes in a bin.” (The second one is much harder.) Only then do you consider which problems can be solved efficiently by Turing machines. Wolfram focuses on the complexity of Turing machines, but P vs NP is about the complexity of problems. We don’t care about the “arbitrary Turing machines ‘in the wild’” that have absurd runtimes, because, again, we only care about the machines that solve the problems we want to solve.
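For reference, the standard verifier characterization makes that ordering explicit: NP is defined over languages (problems), with machines entering only afterwards, as verifiers:

$$ L \in \mathsf{NP} \iff \exists\ \text{poly-time verifier } V \text{ and polynomial } p \text{ such that } x \in L \Leftrightarrow \exists\, w \in \{0,1\}^{\le p(|x|)} : V(x,w)=1 $$

The objects being classified are the problems; no amount of cataloguing machines "in the wild" changes that.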
Also, for a machine to solve problems, it needs to take input. After all, a linear equation solving machine should work no matter what linear equations I give it. To have some understanding of even a single machine, Wolfram would need to analyze the behavior of the machine on all (infinitely many) inputs. He doesn’t even seem to grasp the concept that a machine needs to take input; none of his examples even consider that.
Finally, here are some quibbles about some of the strange terminology he uses. He talks about "ruliology" as some kind of field of science or math, and it seems to mean the study of how systems evolve under simple rules or something. Any field of study can be summarized in this kind of way, but in the end, a field of study needs to have theories in the scientific sense or theorems in the mathematical sense, not just observations. He also talks about "computational irreducibility", which is apparently the concept of finding the smallest Turing machine that computes a function. Not only does this not really help him with his project; there is already a legitimate subfield of complexity theory, called meta-complexity, that is productively investigating this idea!
If I considered this in the context of solving P vs NP, I would not disagree if someone called this crank work. I think Wolfram greatly overestimates the effectiveness of just working through a bunch of examples in comparison to having a deeper understanding of the theory. (I could make a joke about LLMs here, but I digress.)
He doesn’t even seem to grasp the concept that a machine needs to take input; none of his examples even consider that.
So in a way, what you’re saying is that input sanitization (or at the very least, sanity) is an important concept even in theory
What TF is his notation for Turing machines?
He doesn’t even seem to grasp the concept that a machine needs to take input; none of his examples even consider that.
This is the fundamental mistake that students taking Intro to Computation Theory make and like the first step to teach them is to make them understand that P, NP, and other classes only make sense when you rigorously define the set of inputs and its encoding.
a lot of this “computational irreducibility” nonsense could be subsumed by the time hierarchy theorem which apparently Stephen has never heard of
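For anyone following along, the (deterministic, multitape) time hierarchy theorem already makes "some things genuinely need more computation" a theorem rather than an empirical observation: for time-constructible $g$,

$$ f(n)\log f(n) = o(g(n)) \;\Longrightarrow\; \mathsf{DTIME}(f(n)) \subsetneq \mathsf{DTIME}(g(n)). $$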
He straight up misstates how NP computation works. Essentially he writes that a nondeterministic machine M computes a function f if on every input x, there exists a path of M(x) which outputs f(x). But this is totally nonsense - it implies that a machine M which just branches repeatedly to produce every possible output of a given size “computes” every function of that size.
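Spelled out (my formalization of the parent comment's point, not Wolfram's own notation), the criterion being used is

$$ M \text{ computes } f \;:\Longleftrightarrow\; \forall x\ \exists\ \text{branch of } M(x) \text{ that outputs } f(x), $$

and the counterexample is the machine that simply guesses: on input $x$, nondeterministically write some string $y$ with $|y| \le m$ and halt. For every $f$ whose outputs fit in $m$ bits, some branch of $M(x)$ outputs $f(x)$, so this one trivial machine "computes" all such functions at once. The usual fix is to require all accepting branches to agree on the output (or to work with acceptance of languages instead).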
I think that’s more about Wolfram giving a clickbait headline to some dicking around he did in the name of “the ruliad”, a revolutionary conceptual innovation of the Wolfram Physics Project that is best studied using the Wolfram Language, brought to you by Wolfram Research.
The full ruliad—which appears at the foundations of physics, mathematics and much more—is the entangled limit of all possible computations. […] In representing all possible computations, the ruliad—like the “everything machine”—is maximally nondeterministic, so that it in effect includes all possible computational paths.
Unrelated William James quote from 1907:
The more absolutistic philosophers dwell on so high a level of abstraction that they never even try to come down. The absolute mind which they offer us, the mind that makes our universe by thinking it, might, for aught they show us to the contrary, have made any one of a million other universes just as well as this. You can deduce no single actual particular from the notion of it. It is compatible with any state of things whatever being true here below.
that is best studied using the Wolfram Language,
isn’t this just a particularly weird lisp </troll>
the ruliad is something in a sense infinitely more complicated. Its concept is to use not just all rules of a given form, but all possible rules. And to apply these rules to all possible initial conditions. And to run the rules for an infinite number of steps
So it’s the complete graph on the set of strings? Stephen how the fuck is this going to help with anything
The Ruliad sounds like an empire in a 3rd rate SF show
Holy shit, I didn’t even read that part while skimming the later parts of that post. I am going to need formal mathematical definitions for “entangled limit”, “all possible computations”, “everything machine”, “maximally nondeterministic”, and “eye wash” because I really need to wash out my eyes. Coming up with technical jargon that isn’t even properly defined is a major sign of math crankery. It’s one thing to have high abstractions, but it is something else to say fancy words for the sake of making your prose sound more profound.
(Wolfram shoehorning cellular automata into everything to universally explain mathematics) shaking hands (my boys explaining which pokemon could defeat arbitrary fictional villains)
New post from Iris Meredith: “Becoming an AI-proof software engineer”
actually hilarious they started a lobster religion that’s also a crypto scam. learned from the humans well
does no-one remember Subreddit Simulator
at least its posts were shorter
There’s a small push by promptfondlers to make this “a thing”.
See for example Simon Willison: https://simonwillison.net/2026/Jan/30/moltbook/
LW is monitoring it for bad behavior: https://www.lesswrong.com/posts/WyrxmTwYbrwsT72sD/moltbook-data-repository
I’m planning on using this data to catalog “in the wild” instances of agents resisting shutdown, attempting to acquire resources, and avoiding oversight.
I’m planning on using this data to catalog “in the wild” instances of agents resisting shutdown, attempting to acquire resources, and avoiding oversight.
He'll probably do this by running an agent that uses a chatbot with the Playwright MCP to occasionally scrape the site, then feed that to a second agent that filters the posts for suspect behavior, then to another agent to summarize and create a report, then another agent that decides if the report is worth his time to read and messages him through his socials. Maybe another agent with DB access to log the flagged posts at some point.
All this will be worth it to no one except the bot vendors.
The demand is real. People have seen what an unrestricted personal digital assistant can do.
The demand is real. People have seen what crack cocaine can do.
From this post, it looks like we have reached the section of the Gibson novel where the public cloud machines respond to attacks with self-repair. Utterly hilarious to read the same sysadmin snark-reply five times, though.
Sci-Fi Author: In my book I invented LinkedIn as a cautionary tale.
Tech Company: At long last, we have automated LinkedIn.
How did molt become a term of endearment for agents? I read in the pivot thread that clawdbot changed its name to moltbot because Anthropic got ornery.
None of those words are in your favourite religious text of choice
I think it went like this
- clawd is a pun on claude, lobsters have claws
- oh no, we're gonna get sued, but lobsters moult/molt their shells, so we're gonna go there
- “molt” sounds dumb, let’s go with openclaws
it’s vibe product naming
mold would be more fitting
When all the worst things come together: ransomware probably vibe-coded, discards private key, data never recoverable
During execution, the malware regenerates a new RSA key pair locally, uses the newly generated key material for encryption, and then discards the private key.
Halcyon assesses with moderate confidence that the developers may have used AI-assisted tooling, which could have contributed to this implementation error.
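The failure mode in miniature (a Python sketch of the pattern as described, not the actual malware; the library choice and names are mine). Competent ransomware wraps the per-victim key with an attacker-held master public key so the attacker can decrypt after payment; here nobody ever can:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Fresh RSA key pair generated locally on the victim's machine, per run.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

file_key = os.urandom(32)  # symmetric key that would actually encrypt the files

# Wrap the file key with the brand-new public key...
wrapped_key = private_key.public_key().encrypt(
    file_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# ...then discard the only thing that could ever unwrap it. Nothing was
# persisted or exfiltrated, so the data is unrecoverable for everyone.
del private_key
```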
@nightsky @BlueMonday1984
I worked in the IR space for a couple of years - in my experience a significant portion of data encrypted by ransomware is just unrecoverable for a variety of reasons: encryption was interrupted, the private key was corrupted, decryptors were junk, data was encrypted multiple times and some critical part of the key material was corrupted, the underlying hardware/software was on its last legs anyway, etc.

There's a scene in *Blade Runner 2049* where some dude explains that all public records were destroyed a decade or so earlier, presumably by malicious actors. This scenario looks more and more plausible with each passing day, but replace malice with stupidity.
Someone is probably hawking AI-driven backups as we type
this is just notpetya with extra steps
some lwers (derogatory) will say to never assume malice when stupidity is likely, but stupidity is an awfully convenient excuse, isn’t it