

Very kool web page, that rat got over there.


but I do know that what’s available now is just f*cking impressive - and it will only get better.
Another victim of the proof-by-dopamine-hit fallacy it seems.
It’s telling that the example he brings up is that Claude can do, pretty much decently, what he was about to buy a $100 voice-controlled app for. As someone who aspires to the art of making great software, it’s so infuriating to see how non-techies have been conditioned into accepting slopware by years of enshittification and price gouging. Who cares if the tech barely works right? So does most everything else, right?


qBittorrent in I2P-only mode is free and safe.
It’s slow and the selection is limited, but there’s good stuff.


When we use the fart app on our phone we merge with and become hybrids of human consciousness and artificial fartelligence (created by us and therefore of consciousness)


I wouldn’t say “turned against”. The summary from PwC gives the numbers, but still frames it as a challenge to overcome. And wouldn’t you know who can sell you the expertise on how to make AI work in enterprise…


He should have put the printouts on display post-digestion, thus turning himself into a provocative parody of GenAI.


I was looking into a public-sector job opening, running clouds for schools, and just found out that my state recently launched a chatbot for schools. But it’s made in the EU and safe and stuff! (It’s an on-premise GPT-5.)


I suppose you can go for a Jolla, if you’re willing to bet that SailfishOS will finally work. I’ll let y’all know in a year or so.


FediLab is an explicitly general Fediverse app, though it’s not designed for Lemmy. There’s also plenty of native cross-fedi support, as all activities are translated into ActivityPub events.
Frankly, I’d recommend a separate app for Lemmy-likes and one for Mastodon-likes. The difference between following communities and following accounts seems to result in a lot of jank when crammed into one app.


I’m gonna leave my idea here: an essential aspect of why GenAI is bad is that it is designed to extrude media that fits common human communication channels. This makes it perfect for choking out human-to-human communication over those channels, preventing knowledge exchange and social connection.


So, Copilot for VSCode apparently got hit with an 8.8 CVE in November for, well, doing Copilot stuff. (RCE if you clone a strange repo and promptfondle it.)
Fixes were allegedly released on Nov 12th, but I can’t find anything in the changelog about what those changes were, or how they would prevent Copilot from doing, well, Copilot stuff. (Although I may not be ITSec-savvy enough to know where such information would be found.)


I scrolled around the “ludwell institute” a bit for fun. It seems like a pretty professional opinion-piece/social-media content operation run by one person, as far as I can tell. I read one article where they lionized a jailed Bitcoin mixer developer. Another one seems to be hyping Ethereum for some reason.
Seems like pretty unreflective “I make money by having this opinion” stuff. They lead with reasonable advice about using privacy-respecting settings or tools, but the ultimate solution seems to be going full OpSec paranoid and using Tor and crypto.


I was looking into pepping up my CV and was poking around the 2024 CCC style guide when this abomination hit my retinas:
Use the provided LUTs to tint your images in the predefined scheme. You can load them with your graphics software of choice. Ask your friendly AI overlord if you don’t know how.
After being provided with such reproducible instructions, I was of course poking around blog posts for half an hour to finagle this thing. Adding insult to injury: digging through the LUT files shows they were made with Affinity Studio (a freeware pumped out by Canva) instead of the true Scotsman’s choice: the G’MIC command-line tool! (In fairness, there doesn’t seem to be a FOSS option with a usable GUI for this task. The G’MIC GIMP plugin is sort of okay, but it can’t parse this particular file.)
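For the curious, the task itself is tiny once you know what a LUT actually does. Here’s a minimal Python sketch of applying a 3D LUT to an image, assuming numpy and Pillow are available, a plain .cube file with the standard 0–1 domain, and made-up filenames; it does nearest-neighbour lookup where real tools interpolate, so treat it as an illustration rather than the CCC’s intended workflow:

```python
# Minimal sketch: apply a 3D LUT from a .cube file to an RGB image.
# Assumptions: numpy + Pillow installed, LUT uses the standard .cube
# layout (red channel varies fastest) with a 0..1 domain. Filenames
# below are placeholders. Nearest-neighbour lookup only.
import numpy as np
from PIL import Image

def load_cube(path):
    size, rows = None, []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            if line.startswith("LUT_3D_SIZE"):
                size = int(line.split()[1])
            elif line[0].isdigit() or line[0] in "-.":
                rows.append([float(v) for v in line.split()[:3]])
    # reshape so the table is indexed as lut[b, g, r]
    return size, np.array(rows).reshape(size, size, size, 3)

def apply_lut(image_path, cube_path, out_path):
    size, lut = load_cube(cube_path)
    img = np.asarray(Image.open(image_path).convert("RGB")) / 255.0
    idx = np.clip(np.rint(img * (size - 1)).astype(int), 0, size - 1)
    out = lut[idx[..., 2], idx[..., 1], idx[..., 0]]  # look up as [b, g, r]
    out = np.clip(out, 0.0, 1.0)
    Image.fromarray((out * 255).astype(np.uint8)).save(out_path)

# apply_lut("cv_photo.jpg", "ccc_tint.cube", "cv_photo_tinted.jpg")
```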


I think I’m with Haunted’s intuition in that I don’t really buy code generation. (As in automatic code generation.) My understanding is that you build a thing that takes some config and poops out code that does a certain behaviour. But couldn’t you instead build a thing that does the behaviour directly?
I know people who worked on a system like that, and maybe there are niches where it makes sense. It just seems like it was a software architecture fad 20 years ago, and some systems are locked into it now. It doesn’t seem like the pinnacle of engineering to me.
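To make the contrast concrete, here’s a toy Python sketch (entirely made up, not based on the system those people worked on): one path generates source code from a config that someone then has to build, ship and keep in sync, the other just interprets the same config directly at runtime.

```python
# Toy contrast between code generation and doing the behaviour directly.
# The config and the behaviour are deliberately trivial placeholders.

config = {"greeting": "Hello", "targets": ["world", "operator"]}

def generate_code(cfg):
    """Route 1: emit source text from the config; someone now has to
    check it in, build it, and keep it in sync with the config."""
    lines = [f'print("{cfg["greeting"]}, {t}!")' for t in cfg["targets"]]
    return "\n".join(lines)

def run_directly(cfg):
    """Route 2: just perform the behaviour the config describes."""
    for t in cfg["targets"]:
        print(f'{cfg["greeting"]}, {t}!')

print(generate_code(config))  # the generated artifact
run_directly(config)          # the same behaviour, no extra artifact
```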


(Those are my two reference points; if I’m ignorant of some cool org, let me know.)


The era of useful Silicon Valley Non-Profits seems to be fizzling out. I wonder how long Signal is going to hold out…


I heard the same complaint from leftist metal fans.


Recently my research lead recounted a meeting among the senior people, where they hammered out a bunch of project pitches. Some of the wording was still a little rough, but they were going to pass it all through DeepL anyway, to make it read good. Also, everyone’s bad at spelling these days, since you’ve got a thing that autocompletes for you, right? They were proud they remembered how to spell “continuous”.
Sure, everyone has days they can’t word good, but this starts sounding like worrying de-skilling. These people spend a good portion of their paid time working on and arguing over wording.


I’m pretty sure LAWS (lethal autonomous weapon systems) exist right now, even without counting landmines. Automatic human targeting and friend/foe distinction aren’t exactly cutting-edge technologies.
The biggest joke to me is the idea that these systems are somehow cost-efficient on the scale of a Kalashnikov. Ukraine is investing heavily in all kinds of drones, but that’s because they’re trying to be casualty-efficient. And it’s all operator-based. No one wants the 2M€ treaded land-drone to randomly open fire on a barn and expose its position to a circling 5k€ kamikaze drone.