Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
LW stalwart discovers kids get sniffles from daycare, obviously this means women have to stay at home to take care of kids and not work:
https://www.lesswrong.com/posts/byiLDrbj8MNzoHZkL/daycare-illnesses
BTW almost every person born after 1970 in Sweden has been to daycare as a kid; if daycare illnesses had long-term consequences, they would be showing up here
Sweden is an interesting example because they pioneered the let-it-rip approach to COVID. That was less disastrous than it could have been but not great even in a country with a lot of detached housing and nuclear families. https://kevinmd.com/2025/01/swedens-controversial-covid-19-strategy-lessons-from-higher-mortality-rates.html I would not have recommended putting children in daycare without strict indoor-air-quality standards between 2020 and 2024.
Covid is an exception, and believe me, if the main victims of Covid had been kids instead of old people stuffed into elder-care facilities, forgotten by everyone, the dynamics around masking and vaccines and lockdowns would have been a lot different.
My point is that most kids in Sweden go to daycare, “daycare sickness” (where the whole family comes down with enteritis etc) is a common thing, and as far as I know the country doesn’t stand out in health stats.
You can argue that the loss of productivity from this is a factor, but as you mention in a parallel comment, the authorities can demand better hygiene and air quality in preschools and schools, and it would be cheaper than outfitting every single home.
They do have consequences! Crowded schools and preschools and daycares spread all kinds of dangerous infectious diseases, and there are consequences to getting them. With the arrival of COVID and the decline in vaccination, risks are rising.
‘Getting sick builds resistance’ is another of the folk medicine beliefs which we in the infection control community have been fighting since 2020. Some diseases are milder the second or third time, but generally you want to get as few infectious diseases as possible.
ok the takes on the attempted firebombing of sama’s mansion are coming in from the rats and those that watch them. Credit to letting stuff marinate, I guess, and/or not working on a weekend
no clue who this dude is, has a slobsuck with a .ai domain, but this makes sense:
https://www.campbellramble.ai/p/the-rational-conclusion
Weird MtG scarecrow Zvi plays moral philosopher, invokes multiple authorities on Xhitter:
https://thezvi.substack.com/p/political-violence-is-never-acceptable
the worst kind of violence, the sort against people like me
all those other deaths? those aren’t violence
we also need to care more about property
The Zvi post really pisses me off for continuing to normalize Eliezer’s comments (in a way that misrepresents the problems with them).
This happened quite a bit around Eliezer’s op-ed in Time in particular, usually in highly bad faith, and it continues even now: equating calls for government to enforce rules with threats of violence. There are a number of other past cases with similar sets of facts.
Eliezer called for the government to drone strike data centers, even of foreign governments not signatories to international agreements, and even if doing so risked starting nuclear war.
Pacifism is at least a consistent position, but instead rationalists like Zvi want to simultaneously disown the radical actions while legitimizing the US’s shitshow of a foreign policy.
Another thing that pisses me off is the ahistorical claim by rationalists that such actions are ineffective and unlikely to succeed. Asymmetric warfare and terrorist tactics have obtained success many times in history! The KKK successfully used terrorism to repress a population for a century. The Black Panthers got gun control passed in California and put pressure on political leaders to accept the more peaceful branch of the civil rights movement. The IRA got the Good Friday Agreement. The American Revolution! All the empires that have withdrawn from Afghanistan!
Overall though… I guess this is a case of two wrongs making a sorta right. They are dangerously wrong about AI doom, but at least they are also wrong about direct action and so usually won’t take the actions implied by their beliefs. (But they are still, completely predictably, inspiring stochastic terrorists).
Yeah, what the fuck is this passage
If you believe that If Anyone Builds It, Everyone Dies, then you should say that if anyone builds it, then everyone dies. Not moral blame. Cause and effect. Note that this is importantly different from ‘anyone who is trying to build it is a mass murderer.’
(note the rat-tic of qualifying everything with “importantly”)
This deftly evades the main question - how do we ensure that no-one builds it? There’s a host of options, and political violence is one of them. I guess categorically stating it’s off the table is a start, but Zvi has the moral gravitas of a dormouse. If I were of the political violence bent, I’d probably commit some just to spite him.
Excited for the labor and contract law disputes that this will spawn when the model makes promises that the person won’t keep https://www.theverge.com/tech/910990/meta-ceo-mark-zuckerberg-ai-clone
But as expected, this is another zuck project that doesn’t have the leg(itimation)s
[ai booster voice] if you last tried a head of AI more than six months ago, you need to try the new model. https://old.reddit.com/r/apple/comments/1ska7kn/apples_ai_chief_john_giannandrea_departs_this_week/
There is research on evangelicals in the USA who interpret Trump tweets like their heretical pastors teach them to interpret the Bible. Is there something similar on the vast contradictory off-the-cuff social media outputs of people like Yud and Islamic treatment of the Hadiths?
You’ve reminded me about the whole edifice of Qanon lore where they would try to combine 4chan (and later even sketchier sites like 8chan) hints with whatever Trump was posting at the moment to decode secret knowledge about stuff like when the military tribunals executing all the democrats would be.
Anyway, in Eliezer’s case, I kind of get the feeling the lesswrong rationalists have somewhat moved on from him? They are still excessively deferential to him, but the vibe I get from hate-browsing lesswrong is that the majority of the rationalists there put much lower odds on AI doom? (It’s hard to tell exactly because Eliezer has avoided committing to timelines or hard probabilities on AI doom despite all his talk about putting probabilities on everything in the sequences). Lesswrong occasionally references his tweets but not that often. Like I think sneerclub actually references them more often?
The places to find Yudkowsky stans are Substack, Twitter, and the meetups and foundations in the Bay Area. Some of them accept his teachings but reject him because he became a doomer and they want to build God and conquer Death tomorrow.
Around 2012 or 2013, Yudkowsky passed the forum off to CFAR and stopped posting much there. For a while he used it as one mouth of his recruitment funnel (the other mouth being Effective Altruism). That is a really common fate for mailing lists, forums, and comments sections, just like it’s really common that people take what they want from the Sacred Texts. People who post and post are not always helpful for raising money or creating offline events.
edit: The people who respond to Yudkowsky’s tweets and whom he retweets are TESCREAL figures.
Not really a sneer, just wondering what to make of it, if it doesn’t belong here please remove.
The Financial Times goes with a study which ostensibly demonstrates that ca. half a million potential coding jobs were directly eliminated by AI, and not by any other factors or a general industry slowdown. The idea is that it’s mainly junior positions, which aren’t tightly “bundled” with other domains or with years of programming experience and intuition that are harder for AI to replace. So is AI really fully replacing juniors in the hundreds of thousands, or is there more going on?
that looks like a heaping pile of correlation, without mentioning the general downturn
So I don’t have time to read the full paper and I probably don’t have the background to make an informed critique of the methodology once I do (not that that’s gonna stop me). But I feel like the challenge here is in mapping the distinction between junior and senior coding roles. To what extent do the senior coders get treated like a distinct job as opposed to being junior-but-seasoned?
Based on a quick amateur read of the abstract, it looks like they’re assuming the first option: that junior and senior developers are separate roles that can be largely disentangled. But if the other option is true, then in the event of a general industry downturn (say, after overhiring during recent periods of unsustainable growth) it might make sense to read the cuts to junior roles as simply removing the less efficient and effective people from the development role, rather than specifically cutting juniors because they’re uniquely exposed to AI replacement.
I don’t know which model is more accurate to how the industry treats these roles or whether it varies by organization or what, but that’s what seems like the most likely alternate explanation for the observed shift towards a very senior-heavy workforce.
And seniors obviously grow on senior trees (assuming that this take is actually true)
“All of those embodied agents are seat opportunities,” Jha said, envisioning organizations with more agents than humans — each effectively a user that must pay for a software license, or “seat” in industry lingo.
A company with 20 employees might buy 20 Microsoft 365 licenses today. If each employee gets five AI agents, and the workforce shrinks to 10 people, that could still mean 50 paid seats.
Also, it’s apparently enough for an LLM endpoint to be paired with an email inbox to be considered an “embodied agent”, words mean nothing.
It’s ludicrous to pay taxes on the wealth your robots make, but it’s savvy business to charge each software-delimited robot as a separate being - just like charging per-cpu-core was!
Ah right, I need to get a 365 license for word, which comes with a free copilot agent, who needs a 365 license for its copy of word, which comes with a free copilot agent, who needs a …
Now that we’ve got the concept of recursive per-seat licensing established, allow me to invite you to contemplate the possibility of the “licensing macro”
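For the avoidance of doubt, the recursive licensing scheme above does actually pencil out, at least in joke form. A minimal sketch (hypothetical numbers, and obviously not Microsoft’s real pricing model) of what happens when every seat spawns agents that each need their own seat:

```python
def total_seats(humans: int, agents_per_seat: int, depth: int) -> int:
    """Count paid seats when every seat spawns `agents_per_seat` agents,
    each needing its own seat, recursing `depth` levels deep.
    depth=0 means only the humans pay; unbounded depth never terminates."""
    if depth == 0:
        return humans
    # Humans pay for their own seats, plus the whole subtree of agent seats.
    return humans + total_seats(humans * agents_per_seat, agents_per_seat, depth - 1)

# The article's scenario: 10 humans, 5 agents each, one level of agents.
print(total_seats(10, 5, 1))   # 60: 10 human seats + 50 agent seats

# The "licensing macro": 1 human, each copilot comes with its own copilot...
print(total_seats(1, 1, 10))   # 11 seats so far, and counting
```

With one agent per seat the bill grows linearly forever; with more than one, it grows geometrically, which is presumably the point.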
Do you get a refund when an “agent” inevitably blows out its context window and starts emitting deranged output, or does that automatically get rolled over into starting up the next “agent”
JFC at least wait until you have a de-facto monopoly before musing about extracting the rents! This is capitalism 101.
Ahh sh*t, if all my rent-seeking employee-reducing dreams come true, I’ll lose money on my product subscription rents! Quick! I should come up with bullshit that will solve everything!
There’s a pretty wide divide between the speculations of the motives of the alleged arsonist of Sam Altman’s SF residence last week.
LW has handled the issue obliquely, but the main concern seems to be that they are pretty convinced the dude acted out of fear of AI-induced x-risk. The optics is that his actions would paint EA in a bad light.
HN (based on this big heap of comments https://news.ycombinator.com/item?id=47724921) is more focused on the idea that Altman and co. are fomenting class hatred and the attack is more akin to Luigi Mangione’s attack on a health insurance CEO. (Searching for “extinction” and “doom” in the threads doesn’t throw up much)
Neither forum links to the dude’s alleged slobslack.
My conclusion is that “AGI-driven X-risk” is a position too extreme at the moment for HN.
Also I believe the alleged attacker is not an avatar of a popular movement but a confused individual self-radicalized online.
Edit: It’s good to know that if you are a radicalized person thinking about committing violence against people or property, LW will be happy to provide you with a safe space to vent, with guarantees on your anonymity. Cf. habryka’s comment here https://www.lesswrong.com/posts/igEogGD9TAgAeAM7u/jimrandomh-s-shortform?commentId=zdMRHRqWDcjswhA3i
more akin to Luigi Mangione’s
Didn’t he also have some sort of links to the LW idea space?
His substack reads like LW.
The optics is that his actions would paint EA in a bad light.
I think SBF’s Scamfest Spectacular did a good enough job of that already :P
HN is more focused on the idea that Altman and co. are fomenting class hatred and the attack is more akin to Luigi Mangione’s attack on a health insurance CEO.
Altman and co’s antics have repeatedly shafted the working class to billionaires’ direct benefit, so I’ll give the orange site that on the “fomenting class hatred” part. Thinking his actions are in any way related to Luigi Mangione nailing the healthcare billionaire is fucking wild, though
I think SBF’s Scamfest Spectacular did a good enough job of that already :P
Still, being a possible hotbed for domestic terrorism is a whole different ballpark when it comes to having the authorities meddle in your day-to-day.
writing this up for today, will be mentioning Ziz
Looking forward to it!