

So my fear of sharks has gotten better, and I don’t quite understand why.
Curious 🏳️⚧️
But anyhow, you’ve probably found me because you’ve seen one of my totally inconspicuous comments
I’m open to chat about these and other related things over on Matrix
Be so nice as to notify your contacts about breaking e2ee then, will ya?
They aren’t allowed anymore where I live. Only basic calculators now
Have you told your contacts, at least?
They do provide an apk outside of the Play Store that uses a WebSocket for push notifications. Not the best way of going about it, but hey, it exists.
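For anyone wondering what "a WebSocket for push" means in practice: the app just keeps one long-lived connection open and waits for the server to send something down it. A rough sketch of the idea in Python (the endpoint and message format here are made up, not that app's actual protocol):

```python
# Toy illustration of WebSocket-based push: hold a single persistent
# connection and react whenever the server pushes a message.
# The URL and payload shape are placeholders, not the real protocol.
import asyncio
import json

import websockets  # pip install websockets


async def listen_for_pushes():
    async with websockets.connect("wss://push.example.org/ws") as ws:
        async for raw in ws:
            event = json.loads(raw)
            print("push received:", event.get("title", "<no title>"))


asyncio.run(listen_for_pushes())
```

The catch is that the connection has to stay alive in the background the whole time, which is a big part of why I'd call it not the best way of going about it.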
Seems like a few countries should go over their laws again and prohibit those models from being sold. I don’t know what else would be effective
Wait… what? What???
Aegis Authenticator, in case someone was wondering what to use
Depends on a lot of factors; maybe you're regaining that battery life elsewhere. But it is a fact that several apps all doing their own thing will drain more battery than if they all relied on a single service like Firebase or UnifiedPush to wake them up
There is one thing about interoperability that I don’t see many people talking about:
Your messages going to and being handled by other services means you’d be subject to their TOS and privacy policy as well.
As long as services are transparent about it so users can make informed decisions based on it, that’s generally fine.
But then services like Beeper, or just Matrix bridges in general, make it so anyone can set up such a connection between services without their contacts even knowing about it.
The "potential" thing has to be one of the most damaging things you can say to someone. Not many things instill quite as much self-doubt.
And nowadays, most online ads are intrusive and privacy-defeating tracking, so you know
So nice, right? Just being able to curate where your search engine pulls results from… I wish I’d discovered it sooner
All of them are gguf, yeah
I may need to lower it a bit more, yeah. Though when I try to use offloading, I can see that vram usage doesn’t increase at all.
When I leave the setting at its default value of 100, on the other hand, I see vram usage climb until it stops because there isn’t enough of it.
So I guess not all models support offloading?
I’m currently playing around with the Jan client, which uses the nitro engine. I think I need to read up on it more, because when I set the ngl value to 15 in order to offload 50% to GPU like the Jan guide says, nothing happens. Though that could be an issue specific to Jan.
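In case it helps anyone else: my understanding is that ngl corresponds to llama.cpp’s n_gpu_layers, i.e. how many transformer layers get offloaded into vram. Here’s roughly the same thing sketched with llama-cpp-python instead of Jan/nitro (the model filename and layer count are just examples, not my actual setup):

```python
# Rough sketch of what the ngl setting maps to, using llama-cpp-python
# rather than Jan/nitro. Model path and layer count are placeholders.
from llama_cpp import Llama  # pip install llama-cpp-python (built with GPU support)

llm = Llama(
    model_path="./deepseek-coder-1.3b-instruct.Q4_K_M.gguf",  # any gguf file
    n_gpu_layers=15,  # layers offloaded to vram; 0 = CPU only
    n_ctx=2048,
)

out = llm("Write a hello world in Python.", max_tokens=64)
print(out["choices"][0]["text"])
```

As far as I know, if the backend wasn’t built with GPU support, a nonzero value here just gets silently ignored, which is one thing worth ruling out when vram usage doesn’t move at all.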
That’s nice and all, but what are some FOSS models I can run on a GPU with only 4GB of vram?
I’ve tried Deepseek Coder, and it’s pretty nice for what I use it for. Then there’s TinyLlama, which… well it’s fast, but I need to be veeeery exact in how I prompt it.
I just forget about the things I should be doing. Not that I mean to, but it’s the pinnacle of ignorant bliss
I think I’m gonna get one of the higher end models, since it’ll be the last possible upgrade I can do on my motherboard.
So 5700X3D or 5800X3D, depending on what the prices look like whenever I’m gonna be in the market for them. And then I’ll be set for a looong while. Well, an appropriately fast GPU would be nice to go along with it, but you know.
But it’s pretty cool that they made a 5600 variant too. Might as well use the chips they evidently still have left over
Oh, now that sounds like something I might like
I don’t have the fastest RAM out there, so whenever I upgrade from my 1600, I want an X3D variant to help with that
It’s nice that the option exists, I didn’t know that. But I have to say, I’m still not a fan of the overall concept of bridges