Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
AI slop in Springer books:
Our library has access to a book published by Springer, Advanced Nanovaccines for Cancer Immunotherapy: Harnessing Nanotechnology for Anti-Cancer Immunity. Credited to Nanasaheb Thorat, it sells for $160 in hardcover: https://link.springer.com/book/10.1007/978-3-031-86185-7
From page 25: “It is important to note that as an AI language model, I can provide a general perspective, but you should consult with medical professionals for personalized advice…”
None of this book can be considered trustworthy.
https://mastodon.social/@JMarkOckerbloom/114217609254949527
Originally noted here: https://hci.social/@peterpur/114216631051719911
I should add that I have a book published with Springer. So, yeah, my work is being directly devalued here. Fun fun fun.
On the other hand, your book gains value by being published in 2021, i.e. before ChatGPT. Is there already a nice term for “this was published before the slop floodgates opened”? There should be.
(I was recently looking for a cookbook, and intentionally avoided books published in the last few years because of this. I figured the genre is too easy a target for AI slop. But that not even Springer is safe anymore is indeed very disappointing.)
Can we make “low-background media” a thing?
Is there already a nice term for “this was published before the slop floodgates opened”? There should be.
“Pre-slopnami” works well enough, I feel.
EDIT: On an unrelated note, I suspect hand-writing your original manuscript (or using a typewriter) will also help increase the value, simply through strongly suggesting ChatGPT was not involved with making it.
Can’t wait until someone tries to Samizdat their AI slop to get around this kind of test.
AI bros are exceedingly lazy fucks by nature, so this kind of shit should be pretty rare. Combine that with their near-complete lack of taste, and the risk that such an attempt succeeds drops pretty low.
(Sidenote: Didn’t know about Samizdat until now, thanks for the new rabbit hole to go down)
hand-writing your original manuscript
The revenge of That One Teacher who always rode you for having terrible handwriting.
one for the arse-end of this week’s stubsack
The USA plans to migrate SSA’s code away from COBOL in months: https://www.wired.com/story/doge-rebuild-social-security-administration-cobol-benefits/
The project is being organized by Elon Musk lieutenant Steve Davis, multiple sources who were not given permission to talk to the media tell WIRED, and aims to migrate all SSA systems off COBOL, one of the first common business-oriented programming languages, and onto a more modern replacement like Java within a scheduled tight timeframe of a few months.
“This is an environment that is held together with bail wire and duct tape,” the former senior SSA technologist working in the office of the chief information officer tells WIRED. “The leaders need to understand that they’re dealing with a house of cards or Jenga. If they start pulling pieces out, which they’ve already stated they’re doing, things can break.”
SSA’s pre-DOGE modernization plan from 2017 is 96 pages and includes quotes like:
SSA systems contain over 60 million lines of COBOL code today and millions more lines of Assembler, and other legacy languages.
What could possibly go wrong? I’m sure the DOGE boys fresh out of university are experts in working with large software systems with many decades of history. But no no, surely they just need the right prompt. Maybe something like this:
You are an expert COBOL, Assembly language, and Java programmer. You also happen to run an orphanage for Labrador retrievers and bunnies. Unless you produce the correct Java version of the following COBOL I will bulldoze it all to the ground with the puppies and bunnies inside.
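For anyone who hasn’t had the pleasure of maintaining this kind of system: here’s a toy sketch (hypothetical Java, nothing to do with actual SSA code) of the most boring way a “just rewrite the COBOL in Java” project goes wrong. COBOL money fields (think PIC 9(7)V99) are fixed-point decimals with a defined scale; port them naively onto double and the pennies quietly stop adding up.

```java
import java.math.BigDecimal;

// Hypothetical illustration only: compares a naive float-based port of a
// COBOL fixed-point money field with one that keeps decimal semantics.
public class CobolPortSketch {
    public static void main(String[] args) {
        // Naive port: treat the COBOL decimal field as a Java double.
        double naive = 0.0;
        for (int i = 0; i < 1000; i++) {
            naive += 0.10; // accumulate a thousand ten-cent postings
        }

        // Faithful port: keep fixed-point decimal semantics at scale 2,
        // the way a PIC 9(7)V99 field behaves in the original COBOL.
        BigDecimal faithful = new BigDecimal("0.00");
        BigDecimal posting = new BigDecimal("0.10");
        for (int i = 0; i < 1000; i++) {
            faithful = faithful.add(posting);
        }

        System.out.println("naive double total: " + naive);    // not exactly 100.0
        System.out.println("fixed-point total:  " + faithful); // 100.00
    }
}
```

Now scale that kind of quiet discrepancy up to 60 million lines of COBOL plus the Assembler, and “a few months” starts to look like a punchline.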
Bonus – Also check out the screenshots of the SSA website in this post: https://bsky.app/profile/enragedapostate.bsky.social/post/3llh2pwjm5c2i
LW discourages LLM content, unless the LLM is AGI:
https://www.lesswrong.com/posts/KXujJjnmP85u8eM6B/policy-for-llm-writing-on-lesswrong
As a special exception, if you are an AI agent, you have information that is not widely known, and you have a thought-through belief that publishing that information will substantially increase the probability of a good future for humanity, you can submit it on LessWrong even if you don’t have a human collaborator and even if someone would prefer that it be kept secret.
Never change LW, never change.
(from the comments).
It felt odd to read that and think “this isn’t directed toward me, I could skip if I wanted to”. Like I don’t know how to articulate the feeling, but it’s an odd “woah text-not-for-humans is going to become more common isn’t it”. Just feels strange to be left behind.
Yeah, euh, congrats on realizing something that a lot of people have already known for a long time now. Not only is there text specifically generated to try and poison LLM results (see the whole ‘turns out a lot of pro-Russian disinformation is now in LLMs because they spammed the internet to poison LLMs’ story), there are also reply bots for SEO Google spamming. Welcome to the 2010s, LW. The paperclip maximizers are already here.
The only reason this felt weird to them is because they look at the whole ‘coming AGI god’ idea with some quasi-religious awe.
Reminds me of the stories of how Soviet peasants during the rapid industrialization drive under Stalin, who’d never before seen any machinery in their lives, would get emotional with faulty machines and try to coax them like they were their farm animals. But these were Soviet peasants! What structural forces are stopping Yud & co from outgrowing their childish mystifications? Deeply misplaced religious needs?
Unlike in the paragraph above, though, most LW posters held plenty of nuts in their hands before.
… I’ll see myself out
Damn, I should also enrich all my future writing with a few paragraphs of special exceptions and instructions for AI agents, extraterrestrials, time travelers, compilers of future versions of the C++ standard, horses, Boltzmann brains, and of course ghosts (if and only if they are good-hearted, although being slightly mischievous is allowed).
From the comments
But I’m wondering if it could be expanded to allow AIs to post if their post will benefit the greater good, or benefit others, or benefit the overall utility, or benefit the world, or something like that.
No biggie, just decide one of the largest open questions in ethics and use that to moderate.
(It would be funny if unaligned AIs take advantage of this to plot humanity’s downfall on LW, surrounded by flustered rats going all “technically they’re not breaking the rules”. Especially if the dissenters are zapped from orbit 5s after posting. A supercharged Nazi bar, if you will)
I wrote down some theorems and looked at them through a microscope and actually discovered the objectively correct solution to ethics. I won’t tell you what it is because science should be kept secret (and I could prove it but shouldn’t and won’t).
Stumbled across some AI criti-hype in the wild on BlueSky:
The piece itself is a textbook case of AI anthropomorphisation, presenting it as learning to hide its “deceptions” when it’s actually learning to avoid tokens that paint it as deceptive.
On an unrelated note, I also found someone openly calling gen-AI a tool of fascism in the replies - another sign of AI’s impending death as a concept (a sign I’ve touched on before without realising), if you want my take:
The article already starts great with that picture, labeled:
An artist’s illustration of a deceptive AI.
what
EVILexa
strange æons takes on hpmor :o
Good video overall, despite some misattributions.
Biggest point I disagree with: “He could have started a cult, but he didn’t”
Now I get that there’s only so much toxic exposure to Yud’s writings one can take, but it’s missing a whole chunk of his persona/æsthetics. And ultimately I think it boils down to the earlier part that Strange did notice (via echo of su3su2u1): “Oh, aren’t I so clever for manipulating you into thinking I’m not a cult leader, by warning you of the dangers of cult leaders.”
And I think he even expects his followers to recognize the “subterfuge”.
I like the video, but I’m a little bothered that she misattributes su3su2u1’s critique to Dan Luu, who makes it very clear he did not write it:
These are archived from the now defunct su3su2u1 tumblr. Since there was some controversy over su3su2u1’s identity, I’ll note that I am not su3su2u1 and that hosting this material is neither an endorsement nor a sign of agreement.
Liked the manic energy at the start (and lol at Strange not sharing his full history (like the extropian list stuff, and much more); not mentioning it is fine, the scene is set), and Chekhov’s fedora at the start.
all of the subculture YouTubers I watch are colliding with the weirdo cult I know way too much about and I hate it
oh no :(
poor strange she didn’t deserve that :(
Strange is a trooper and her sneer is worth transcribing. From about 22:00:
So let’s go! Upon saturating my brain with as much background information as I could, there was really nothing left to do but fucking read this thing, all six hundred thousand words of HPMOR, really the road of enlightenment that they promised it to be. After reading a few chapters, a realization that I found funny was, “Oh. Oh, this is definitely fanfiction. Everyone said [laughing and stuttering] everybody that said that this is basically a real novel is lying.” People lie on the Internet? No fucking way. It is telling that even the most charitable reviews, the most glowing worshipping reviews of this fanfiction call it “unfinished,” call it “a first draft.”
A shorter sneer for the back of the hardcover edition of HPMOR at 26:30 or so:
It’s extremely tiring. I was surprised by how soul-sucking it was. It was unpleasant to force myself beyond the first fifty thousand words. It was physically painful to force myself to read beyond the first hundred thousand words of this – let me remind you – six-hundred-thousand-word epic, and I will admit that at that point I did succumb to skimming.
Her analysis is familiar. She recognized that Harry is a self-insert, that the out-loud game theory reads like Death Note parody, that chapters are only really related to each other in the sense that they were written sequentially, that HPMOR is more concerned with sounding smart than being smart, that HPMOR is yet another entry in a long line of monarchist apologies explaining why this new Napoleon won’t fool us again, and finally that it’s a bad read. 31:30 or so:
It’s absolutely no fucking fun. It’s just absolutely dry and joyless. It tastes like sand! I mean, maybe it’s Yudkowsky’s idea of fun; he spent five years writing the thing after all. But it just [struggles for words] reading this thing, it feels like chewing sand.
I can’t be bothered to look up the details (kinda in a fog of sleep deprivation right now to be honest), but I recall HPMOR pissing me off by getting the plot of Death Note wrong. Well, OK, first there was the obnoxious thing of making Death Note into a play that wizards go to see. It was yet another tedious example in Yud’s interminable series of using Nerd Culture™ wink-wink-nudge-nudges as a substitute for world-building. Worse than that, it was immersion-breaking: Yud throws the reader out of the story by prompting them to wonder, “Wait, is Death Note a manga in the Muggle world and a play in the wizarding one? Did Tsugumi Ohba secretly learn of wizard culture and rip off one of their stories?” And then Yud tried to put down Death Note and talk up his own story by saying that L did something illogical that L did not actually do in any version of Death Note that I’d seen.
And now I want potato chips.
Sorry Yuds, Death Note is a lot of fun and the best part is the übermensch wannabe’s hilariously undignified death. I guess it struck a nerve!
In case you missed it, a couple sneers came out against AI from mainstream news outlets recently - CNN’s put out an article titled “Apple’s AI isn’t a letdown. AI is the letdown”, whilst the New York Times recently proclaimed “The Tech Fantasy That Powers A.I. Is Running on Fumes”.
You want my take on this development, I’m with Ed Zitron on this - this is a sign of an impending sea change. Looks like the bubble’s finally nearing its end.
Turns out that they can only stack shit so high before it falls back to earth
Taking a shot in the dark, journalistic incidents like Bloomberg’s failed tests with AI summaries and the BBC’s complaints about Apple AI mangling headlines probably helped accelerate that fall to earth - for any journalists reading about or reporting on such shitshows, it likely shook their faith in AI’s supposed abilities in a way failures outside their field didn’t.
By refusing to focus on a single field at a time, AI companies really did make it impossible to take advantage of Gell-Mann amnesia.
Stackslobber posts evidence that transhumanism is a literal cult, HN crowd is not having it
In my head transhumanism is this cool idea where I’d get to have a zoom function in my eye
But of course none of that could exist in our capitalist hellscape, because of all the ways the ruling class would use it to oppress the working class.
And then you find out what transhumanists actually advocate for and it’s just eugenics. Like without even a tiny bit of plausible deniability. They’re proud it’s eugenics.
@V0ldek those transhumanist guys really think that it won’t be them who get weeded out by applied eugenics …
it would be nice if people ever read the history of their favourite thing
it’s extremely established that it does
@froztbyte how to tell me you don’t know anything about transhumanism without telling me you don’t know anything about transhumanism:
@froztbyte I think @d4rkness is eliding a few steps that look clear to them, but they’re basically right: eugenics is about all transhumanists can do *today*, a lot of transhumanism is warmed-over Technocracy (the Musk family’s ideological wellspring), Technocracy was *def* on board with eugenics (and apartheid), so here we are: they aspire to more but eugenics is what they can do today so they’re doing it.
(I’ve been studying transhumanism since roughly 1990 and that’s my considered opinion.)
I’d agree there, and it might be that that’s what they meant, but as you say it still doesn’t leave the two things disconnected. Didn’t see them heading in the direction of amusing debate, however!
(I’d wondered from your past writings how long you’d been looking into this shit, TIL the year!)
sorry, all the mental pretzels were taken up by the other poster a few days ago, you’ll have to contort your nonsense yourself. best avoid the history books though, they’ll make it really hard for you to achieve what you want
type the words “transhumanism eugenics” into ddg and see what comes up. but mostly just fuck off tbh
@madargon @V0ldek @gerikson @techtakes Obviously you read the wrong cyberpunk. (Go root out Bruce Sterling’s short story “20 Evocations”, collected in Schismatrix Plus, and you’ll see an assassin having his arms and legs repossessed because he can’t kill enough people to keep up with the loan repayment schedule …)
I used to think transhumanism was very cool because escaping the misery of physical existence would be great. for one thing, I’m trans, and my experience with my body as such has always been that it is my torturer and I am its victim. transhumanism to my understanding promised the liberation of hundreds of millions from actual oppression.
then I found out there was literally no reason to expect mind uploading or any variation thereof to be possible. and when you think about what else transhumanism is, there’s nothing to get excited about. these people don’t have any ideas or cogent analysis, just a powerful desire to evade limitations. it’s inevitable that to the extent they cohere they’re a cult: they’re a variety of sovereign citizen
I haven’t spent a lot of time sneering at transhumanism, but it always sounded like thinly veiled ableism to me.
considering how hard even the “good” ones are on eugenics, it’s not veiled
Here’s the link, so you can read Stack’s teardown without giving orange site traffic:
https://ewanmorrison.substack.com/p/the-tranhumanist-cult-test
Note I am not endorsing their writing - in fact I believe the vehemence of the reaction on HN is due to the author being seen as one of them.
I read through a couple of his fiction pieces and I think we can safely disregard him. Whatever insights he may have into technology and authoritarianism appear to be pretty badly corrupted by a predictable strain of antiwokism. It’s not offensive in anything I read - he’s not out here whining about not being allowed to use slurs - but he seems sufficiently invested in how authoritarians might use the concerns of marginalized people as a cudgel that he completely misses how in reality marginalized people are more useful to authoritarian structures as a target than a weapon.