

I would also like to understand under what definition ChatGPT can be classified as “connection technology”.
Can you imagine selling something like a firewall appliance with a setting called “Yolo Mode”, or tax software or a photo organizer or anything that handles any data, even if only of middling importance, and still expecting to be taken seriously at all?
Ok, maybe someone can help me here figure something out.
I’ve wondered for a long time about a strange adjacency which I sometimes observe between what I call (for lack of a better term) “unix conservatism” and fascism. It’s the strange phenomenon where ideas about “classic” and “pure” unix systems coincide with the worst politics. For example the “suckless” stuff. Or the ramblings of people like ESR. Criticism of systemd is sometimes infused with it (yes, there is plenty of valid criticism as well. But there’s this other kind of criticism I’ve often seen, which is icky and weirdly personal). And I’ve also seen traces of this in discussions of programming languages newer than C, especially when topics like memory safety come up.
This is distinguished from retro computing and nostalgia and such, those are unrelated. If someone e.g. just likes old unix stuff, that’s not what I mean.
As you may already notice, I struggle a bit to come up with a clear definition, and to decide whether there really is a connection or just a loose set of examples that don’t form a definable set. So, is there really something there, or am I seeing a connection that doesn’t exist?
I’ve also so far not figured out what might create the connection. Ideas I have come up with are: appeal to times that are gone (going back to an idealized computing past that never existed), elitism (computers must not become user friendly), ideas of purity (an imaginary pure “unix philosophy”).
Anyway, now with this new xlibre project, there’s another one that fits into it…
Yes, thank you, I’m also annoyed about this. Even classic “AI” approaches for simple pattern detection (what used to be called “ML” a few hype waves ago, although it’s much older than that even) are now conflated with capabilities of LLMs. People are led to believe that ChatGPT is the latest and best and greatest evolution of “AI” in general, with all capabilities that have ever been in anything. And it’s difficult to explain how wrong this is without getting too technical.
Related, this fun article: ChatGPT “Absolutely Wrecked” at Chess by Atari 2600 Console From 1977
- You will understand how to use AI tools for real-time employee engagement analysis
- You will create personalized employee development plans using AI-driven analytics
- You will learn to enhance employee well-being programs with AI-driven insights and recommendations
You will learn to create the torment nexus
- You will prepare your career for your future work in a world with robots and AI
You will learn to live in the torment nexus
- You will gain expertise in ethical considerations when implementing AI in HR practices
I assume it’s a single slide that says “LOL who cares”
Maybe someone has put it into their heads that they have to “go with the times”, because AI is “inevitable” and “here to stay”. And if they don’t adapt, AI would obsolete them. That Wikipedia would become irrelevant because their leadership was hostile to “progress” and rejected “emerging technology”, just like Wikipedia obsoleted most of the old print encyclopedia vendors. And one day they would be blamed for it, because they were stuck in the past at a crucial moment. But if they adopt AI now, they might imagine, one day they will be praised as the visionaries who carried Wikipedia over to the next golden age of technology.
Of course all of that is complete bullshit. But instilling those fears (“use it now, or you will be left behind!”) is a big part of the AI marketing messaging which is blasted everywhere non-stop. So I wouldn’t be surprised if those are the brainworms in their heads.
Also, happy Pride :3
Yes, happy pride month everyone!
I’ve decided that this year I’m going to be more open about this and wear a pride bracelet whenever I go in public this month. Including for (remote) work meetings where nobody knows… wonder if anyone will notice.
That’s it, I’m ordering a copy.
Ah, thanks, well my sarcasm detector isn’t that good.
160,000 organisations, sending 251 million messages! […] A message costs one cent. […] Microsoft is forecast to spend $80 billion on AI in 2025.
No problem. To break even, they can raise prices just a little bit, from one cent per message to, uuh, $318 per message. I don’t think that such a tiny price bump is going to reduce usage or scare away any customers, so they can just do that.
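For anyone who wants to double-check that sarcastic “tiny price bump”, a quick sanity check using only the figures quoted above (Microsoft’s forecast $80 billion AI spend vs. the 251 million messages sent):

```python
# Back-of-envelope check of the break-even price per message,
# based on the numbers quoted from the article above.
ai_spend = 80_000_000_000   # Microsoft's forecast 2025 AI spend, in dollars
messages = 251_000_000      # messages sent by the 160,000 organisations

break_even_price = ai_spend / messages
print(f"${break_even_price:.2f} per message")  # roughly $318.73
```

So yes, the quoted $318 figure checks out: about a 31,800x increase over the current one cent.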
From McCarthy’s reply:
My current answer to the question of when machines will reach human-level intelligence is that a precise calculation shows that we are between 1.7 and 3.1 Einsteins and .3 Manhattan Projects away from the goal.
omg this statement sounds 100% like something that could be posted today by Sam Altman on X. It’s hitting exactly the sweet spot between appearing precise and being super vague, like Altman’s “a few thousand days”.
Also great, right after that:
Nevertheless, it is common in literature for a good writer to show greater understanding of the experience of the opposite sex than a poorer writer of that sex.
Yeeeaah, sure. And to write that in the 1970s even.
If anything, this McCarthy reply makes me want to read the Weizenbaum book.
I didn’t know that uwu news influencer was a thing.
Same, and also I’m still trying to process that “uwu” breached out of furry spaces and became a widely understood term. (Although I’m not entirely sure what way it took, it’s also possible that it breached out of anime-related communities. Maybe some day cyber-archeologists can figure this out.)
In the collection of links about what Ive has done in recent years, there’s one to an article about a turntable redesign he worked on, and from that article:
The Sondek LP12 has always been entirely retrofittable and Linn has released 50 modular hardware upgrades to the machine, something that Ive said he appreciates. “I love the idea that after years of ownership you can enjoy a product that’s actually better than the one you bought years before,” said Ive.
I don’t know whether I should laugh or scream that it’s Ive, of all people, saying that.
I’m heckin’ moving to Switzerland next month holy hell.
Good luck!!
they posted these two videos to TikTok in response to the AI backlash
The cringey “hello, fellow kids” vibe is really unbearable… good that people are not falling for that.
Seeing a lot of talk about OpenAI acquiring a company with Jony Ive and he’s supposedly going to design them some AI gadget.
Calling it now: it will be a huge flop. Just like the Humane Pin and that Rabbit thing. Only the size of the marketing campaign, and maybe its endurance due to greater funding, will make it last a little longer.
It appears that many people think Jony Ive can perform some kind of magic that will make a product successful. I wonder if Sam Altman believes that too, or maybe he just wants the big name for marketing purposes.
Personally, I’ve not been impressed with Ive’s design work for many years now. Well, I’m sure the thing is going to look very nice, probably a really pleasingly shaped chunk of aluminium. (Will they do a video with Ive in a featureless white room where he can talk about how “unapologetically honest” the design is?) But IMO Ive lost touch long ago with designing things to be actually useful; at some point he went all in on heavily prioritizing form over function (or maybe he always did, I’m not so sure anymore). Combine that with the overall loss of connection to reality from the AI true believers, and I think the resulting product could turn out to be actually hilarious.
The open question is: will the tech press react with ridicule, like it did for the Humane Pin? Or will we have to endure excruciating months of critihype?
I guess Apple can breathe a sigh of relief though. One day there will be listicles for “the biggest gadget flops of the 2020s”, and that upcoming OpenAI device might push Vision Pro to second place.
If the companies wanted to produce an LLM that didn’t output toxic waste, they could just not put toxic waste into it.
The article title and that part remind me of this quote from Charles Babbage in 1864:
On two occasions I have been asked, — “Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?” In one case a member of the Upper, and in the other a member of the Lower, House put this question. I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.
It feels as if Babbage had already interacted with today’s AI pushers.
Re the GitLab marketing: what does it mean, what toolchains are they referring to, and what is “native AI”? Does that even mean anything, or is it just marketing gibberish to impress executives?
*scrolls down*
GitLab Duo named a Leader in the Gartner® Magic Quadrant™ for AI Code Assistants.
[eternal screaming]
Oh god, so many horror quotes in there.
With a community of 116 million users a month, Duolingo has amassed loads of data about how people learn
…and that’s why I try to avoid using smartphone apps as much as possible.
“Ultimately, I’m not sure that there’s anything computers can’t really teach you,”
How about common sense…
“it’s just a lot more scalable to teach with AI than with teachers.”
Ugh. So terrible. Tech’s obsession with “scaling” is one of the worst things about tech.
If “it’s one teacher and like 30 students, each teacher cannot give individualized attention to each student,” he said. “But the computer can.”
No, it cannot. It’s a statistical model, it cannot give attention to anything or anyone, what are you talking about.
Duolingo’s CFO made similar comments last year, saying, “AI helps us replicate what a good teacher does”
Did this person ever have a good teacher in their life
the company has essentially run 16,000 A/B tests over its existence
Aaaarrgh. Tech’s obsession with A/B testing is another one of the worst things about tech.
Ok I stop here now, there’s more, almost every paragraph contains something horrible.
I’m in therapy and much better than I used to be, but from my past before that, I am unfortunately quite experienced, over many years, in having existential worries and anxieties about extremely unlikely things.
And then I see this…
…and damn, that’s next-level thinking, even for me.