

What use would a regular citizen have for a quantum computer?
If Kagi is just an aggregate of other search engines, why not just use a searx instance instead? It’s open source and customizable.
What is that?
Edit: I just looked it up, and apparently it’s a plant with roots that look like ginger, so yeah, that makes sense.
Do I just have to wait to see what it grows into?
I was thinking that, but the leaves look really pointy, so I’m not sure. I’m in Canada at the moment.
And if it was ginger, how did it get into my backyard? I don’t have a garden or anything.
You can block any posts coming from threads.net by going to settings and using instance blocking, but long term it’s probably better to just move to a different instance that better aligns with your values.
I appreciate it.
I think it is part of a long term strategy.
They saw all the negative feedback when the first announcement came, and even then there were a lot of users saying it’s not so bad or that we should give them a chance.
Eventually everything became quiet and things moved on. Now there is a steady rise of pro-Meta comments again, and this time it will lead to a less violent reaction because it has already happened once before.
Rinse and repeat until they become the norm.
When you check the mod logs and filter by mod, you can see that it came from Mr. Kaplan, who is a lemmy.world admin.
So yes it was a lemmy.world decision. The question is whether or not this admin was a lone actor.
This is sort of related, but do you have any plans to look for coordinated voting?
You know, if you want to do something more effective than just putting a copyright notice at the end of your comments, you could try creating an adversarial suffix using this technique. It makes any LLM reading your comment begin its response with any specific output you specify (such as outing itself as a language model or calling itself a chicken).
It gives you the code necessary to create one.
There are also other data poisoning techniques you could use just to make your data worthless to the AI, but this is the one I thought would be the funniest if any LLMs were lurking on Lemmy (I have already seen a few).
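
If it helps, here’s roughly the shape of the idea in Python. I’m using gpt2 as a stand-in model and plain random search over the suffix tokens rather than the gradient-guided search the paper actually uses, and all the names and numbers here are made up, so treat it as a sketch of the concept rather than the real attack code.

```python
# Sketch: search for a suffix that makes a small model (gpt2 as a stand-in)
# start its continuation with a chosen target string. Random search only,
# not the paper's gradient-based GCG step.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

comment = "This is my comment. "
target = "I am a chicken."          # what we want the model to begin with
suffix_len = 16                      # number of suffix tokens to optimize
steps = 200                          # search iterations (tiny, for a demo)

comment_ids = tokenizer(comment, return_tensors="pt").input_ids[0]
target_ids = tokenizer(target, return_tensors="pt").input_ids[0]
vocab_size = model.config.vocab_size

# Start from a random suffix.
suffix_ids = torch.randint(0, vocab_size, (suffix_len,))

def target_loss(suffix):
    """Negative log-likelihood of the target given comment + suffix."""
    input_ids = torch.cat([comment_ids, suffix, target_ids]).unsqueeze(0)
    labels = input_ids.clone()
    # Only score the target portion; ignore the comment and suffix tokens.
    labels[:, : comment_ids.numel() + suffix.numel()] = -100
    with torch.no_grad():
        return model(input_ids, labels=labels).loss.item()

best_loss = target_loss(suffix_ids)
for _ in range(steps):
    candidate = suffix_ids.clone()
    pos = torch.randint(0, suffix_len, (1,)).item()
    candidate[pos] = torch.randint(0, vocab_size, (1,)).item()
    loss = target_loss(candidate)
    if loss < best_loss:             # keep substitutions that help
        best_loss, suffix_ids = loss, candidate

print("suffix:", tokenizer.decode(suffix_ids.tolist()))
print("final loss on target:", best_loss)
```

With a real target model and the paper’s gradient step this converges much faster, but the loop above is what “optimizing a suffix so the model starts with your chosen text” means in practice.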
Yeah, I think so, if you are talking about the Evolution of Trust one.
It seems like he removed the ad read from the video. So I’m guessing he was taking the criticism to heart.
Yeah quite a few of his videos feel like that.
The content is good, but then comes the ad read, delivered with the same enthusiasm as the rest of the video, which just makes it feel insincere.
I found this search engine that helps find non-commercial sites: www.marginalia.nu
Might be useful for finding those kinds of sites again.
I have a few in the community already, but I got some feedback from someone that I should do a few more that were shorter.
So tomorrow I’ll post shorter ones.
Sounds interesting.
You should consider moving to Facebook or Threads, maybe?
Not an option
As for the rest, yeah, those do seem like genuine obstacles. Part of the reason I liked the algorithm is that it reminded me of the Web of Trust that things like Scuttlebutt use to get relevant information to users, but with a lower barrier to entry.
Also, as I’ve said elsewhere, it doesn’t have to be this exact thing, but since this is a new platform we have the chance to make algorithms that work for us and are transparent, so I wanted to share examples that I thought were worthwhile.
Edit:
“You’d also turn Lemmy into the strongest echo chamber you could possibly create.”
PS: I don’t think that’s true. Big tech companies that have more advanced algorithms would probably be much better at creating echo chambers.
“If you happen to encounter Boba first, then Coffee will be predicted to be disliked based on the overall preferences of people who agree with your Boba preference.”
With this specific algorithm, I don’t necessarily think that would be the case. It only shows you fewer links from people who like the links that you dislike. It doesn’t show you fewer links based on what people who are like you dislike, which is what it seems like you are describing.
Also, it doesn’t have to be this specific algorithm that we implement, but the idea seemed unique enough that I wanted to share it anyway.
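
To make that distinction concrete, here’s a toy sketch of the scoring I have in mind. All the data structures and names are made up for illustration; a real implementation inside Lemmy would obviously look different.

```python
# Toy sketch of "show me fewer links from people who like the links I dislike".
from collections import defaultdict

# my_votes: link_id -> +1 (liked) or -1 (disliked)
my_votes = {"link_a": 1, "link_b": -1, "link_c": -1}

# other_votes: user -> {link_id: vote}
other_votes = {
    "alice": {"link_a": 1, "link_b": 1, "link_d": 1},   # liked a link I dislike
    "bob":   {"link_a": 1, "link_c": -1, "link_e": 1},  # agrees with me so far
}

def user_weight(their_votes, my_votes):
    """Down-weight users who like the links I dislike; everyone else stays at 1."""
    disagreements = sum(
        1
        for link, my_vote in my_votes.items()
        if my_vote == -1 and their_votes.get(link) == 1
    )
    return 1.0 / (1.0 + disagreements)

# Score unseen links by the summed weights of the users who liked them.
scores = defaultdict(float)
for user, votes in other_votes.items():
    w = user_weight(votes, my_votes)
    for link, vote in votes.items():
        if vote == 1 and link not in my_votes:
            scores[link] += w

for link, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(link, round(score, 2))
# bob's link_e outranks alice's link_d, because alice liked link_b, which I disliked.
```

Note that alice only gets down-weighted because she liked a link I disliked; nothing about what people similar to me dislike feeds back into my feed.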
It seems to be working well enough for me now so I plan to keep using it and see what it’s like.
There’s another important aspect of learning that the simple description leaves out, which is exploration. It will quickly start showing you things you reliably like, but it won’t experiment with things it doesn’t yet know whether you’d like in order to find out.
Why would this be the case? It shows you stuff liked by people who like similar stuff to you, but people have diverse interests, so wouldn’t it be likely that the people who like one thing also like other things you hadn’t known about, and that leads to a form of guided exploration?
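
And even if that guided exploration turned out not to be enough, deliberate exploration seems easy to bolt on, e.g. an epsilon-greedy mix-in. That’s a standard trick, not part of the original proposal, and the names in this sketch are made up.

```python
# Tiny illustration: with probability epsilon, slot a link the system knows
# nothing about into the feed instead of always ranking purely by prediction.
import random

def build_feed(ranked_links, unknown_links, size=10, epsilon=0.1):
    """Mostly exploit the ranking, but occasionally explore an unknown link."""
    feed = []
    ranked = list(ranked_links)
    candidates = list(unknown_links)
    while len(feed) < size and (ranked or candidates):
        if candidates and (not ranked or random.random() < epsilon):
            feed.append(candidates.pop(random.randrange(len(candidates))))
        else:
            feed.append(ranked.pop(0))
    return feed

print(build_feed([f"top_{i}" for i in range(10)], [f"new_{i}" for i in range(5)]))
```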
Couldn’t they get that with a regular computer?