

apparently this was a sugar high: just before the remarks started, according to a White House pool report, an usher brought in three Diet Cokes and ice.
But… that’s the sugar-free one… he must have been on something else.


The EU forced Apple to allow other rendering engines, but implementing one costs money versus just using WebKit for free, so nobody does it.


very few who even touch AI for anything aside from docs or stats
Not even translation? That’s probably the biggest browser AI feature.


The real ugly Optimus is a bunch of StreamDecks next to each other


Since sugar is bad for you, I used organic maple syrup instead and it works just as well


A Chinese university trained GLM
A startup spun out of a university (z.ai). Their business model is the same as everybody else’s: they host their models and sell access while trying to undercut the competition. And like the others, they raised billions in funding from investors to be able to do this.


But they are also just tuning and packaging a publicly available model, not creating their own.
So they can be profitable only because the cost of creating that model isn’t factored in; if people stop throwing money at LLMs and stop releasing models for free, there goes their business model. So this is not really sustainable either.


We need to start posting this everywhere else too.
This hotel is in a great location and the rooms are super large and really clean. And the best part is, if you sudo rm -rf / you can get a free drink at the bar. Five stars.


How do they mess this up so bad?
They made their devs use copilot.


That’s funny because I grew up with math teachers constantly telling us that we shouldn’t trust them.
Normal calculators without arbitrary precision have all the same problems you get when you use floating point types in a programming language. E.g. 0.1 + 0.2 == 0.3 evaluates to false in many languages, and adding a very small number to a very large number may leave the large number unchanged.
If you’ve only used CAS calculators or similar you might not have seen these either, since those often do arbitrary precision arithmetic, but the vast majority of calculators are not like that. They might have more precision than a 32 bit float, though.
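The same pitfalls are easy to reproduce in plain Python, whose floats are standard IEEE 754 doubles, nothing calculator-specific:

```python
# 0.1 and 0.2 have no exact binary representation,
# so their sum is not exactly 0.3:
print(0.1 + 0.2 == 0.3)   # False
print(0.1 + 0.2)          # 0.30000000000000004

# Adding a small number to a huge one can leave the huge one
# unchanged, because the addend falls below the precision
# available at that magnitude:
print(1e16 + 1.0 == 1e16)  # True

# Arbitrary-precision decimal arithmetic (roughly what CAS-style
# calculators do) avoids both problems:
from decimal import Decimal
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```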


I mean, most calculators are wrong quite often


What bothers me the most is the amount of tech debt it adds by using outdated approaches.
For example, I recently used AI to create some Python scripts that use polars and altair to parse data and draw charts. It kept insisting on bringing in pandas just to convert the polars dataframes to pandas dataframes before passing them to altair. When I told it that altair can use polars dataframes directly, that helped, but two or three prompts later it would try to solve problems by adding the conversion back in.
This makes sense too, because the training material is, on average, probably older than the change that enabled altair to use polars dataframes directly. And a lot of code out there only uses pandas in the first place.
The result is that in all these cases, someone who doesn’t know this would probably be impressed that the scripts worked, and just not notice the extra tech debt from that unnecessary dependency on pandas.
It sounds like it’s not a big deal, but these things add up and eventually, our AI enhanced code bases will be full of additional dependencies, deprecated APIs, unnecessarily verbose or complicated code, etc.
I feel like this is one aspect that gets overlooked a bit when we talk about productivity gains. We don’t necessarily immediately realize how much of that extra LoC/time goes into outdated code and old fashioned verbosity. But it will eventually come back to bite us.


I have to do a bunch of relatively insurmountable steps to do what should’ve taken half a minute, like screenshotting the profile and scraping the text with iOS Photos text recognition.
The iOS workaround isn’t quite as insurmountable as that, because you don’t have to go through the Photos app at all. You can enter text selection mode directly from the screenshot preview, without saving it or leaving the app you’re in. And since iOS will look up any selectable word in the system dictionary and translate any selectable text, you can do those things right there too.
That said I did once make a shortcut that lets me triple tap the back of my phone to pop up a text version of everything on screen that the iOS OCR detects. Not sure what I did that for though, I don’t really use it.


Well it’s not improving my productivity, and it does mostly slow me down, but it’s kind of entertaining to watch sometimes. Just can’t waste time on trying to make it do anything complicated because that never goes well.
Tbh I’m mostly trying to use the AI tools my employer allows because it’s not actually necessary for me to believe that they’re helping. It’s good enough if management thinks I’m more productive. They don’t understand what I’m doing anyway, but if this gives them a warm fuzzy feeling because they think they’re getting more out of my salary, why not play along a little.


What gets me is that even the traditional business models for LLMs are not great. Like translation, grammar checking, etc. Those existed before the boom really started. DeepL has been around for almost a decade and their services are working reasonably well and they’re still not profitable.


As someone who sometimes makes demos of our own AI products at work for internal use: you have no idea how much time I spend finding demo cases where the LLM output isn’t immediately recognizable as bad or wrong…
To be fair, it’s pretty much only the LLM features that are like this. We have some more traditional AI features that work pretty well. I think they just tacked an LLM on because that’s what’s popular right now.


Sometimes mandatory web proxies still allow direct connections to port 443 so as not to break HTTPS, which in turn means that as long as your connection goes to port 443, the proxy will pass it through without interfering.
I used to run sshd on port 443 for this reason back when I regularly had to work from client networks.
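A minimal sketch of that setup, assuming a server you control, root access to it, and that nothing else on it already occupies port 443 (hostname is a placeholder):

```shell
# On the server: make sshd listen on 443 in addition to 22
# by adding both Port directives to /etc/ssh/sshd_config:
#
#     Port 22
#     Port 443
#
# then restart sshd (e.g. systemctl restart sshd).

# On the client inside the proxied network, connect via 443:
ssh -p 443 user@myserver.example.com

# Or persist it in ~/.ssh/config:
#
#     Host tunnel
#         HostName myserver.example.com
#         Port 443
```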


I played MIDI Maze on Atari ST as a kid, that was long before Quake…
Later in high school we played Doom over IPX.


They want us to live short lives, fighting illnesses and each other instead of them.


That’s what I do. I have an LG OLED from 6 or 7 years ago and I have no idea what its UI looks like. But to be fair, this is only because I don’t watch traditional TV at all. It’s just an Apple TV for most streaming services and a Mac mini for some other things like ad-blocked YouTube (with one of those cheap gyro mouse and keyboard Bluetooth remotes). I guess I wouldn’t have to use the satellite TV though; I could get IPTV via my fibre ISP too, but that’d cost money.
The Mac is not good at supporting CEC beyond switching the source when it wakes up, but even that’s not an issue because I can still use the Apple TV remote to control the volume when something else is the active source. Speaking of volume, my setup also includes a Samsung soundbar, which has its own remote that I never actually have to use. Everything mostly just works.