- 5 Posts
- 7 Comments
Occhioverde@feddit.it to Technology@lemmy.world • Teen killed himself after ‘months of encouragement from ChatGPT’, lawsuit claims (English)
34 · 7 months ago
Arguably, they are exactly the same thing, i.e. parents who are asking other people (namely, OpenAI in this case and adult-site operators in the other) to do their job of supervising their children because they are at best unable and at worst unwilling to do so themselves.
Occhioverde@feddit.it to Technology@lemmy.world • Teen killed himself after ‘months of encouragement from ChatGPT’, lawsuit claims (English)
63 · 7 months ago
I think we all agree that OpenAI isn’t exactly the most ethical corporation on this planet (to use a gentle euphemism), but you can’t blame a machine for doing something it doesn’t even understand.
Sure, you can call for the creation of more “guardrails”, but they will always fall short: until LLMs are actually able to understand what they’re talking about, what you’re asking them and the whole context around it, there will always be a way to claim that you are just playing, doing worldbuilding or whatever, just as this kid did.
What I find really unsettling in both this discussion and the one around the whole age-verification issue is that people are calling for technical solutions to social problems, an approach that has always failed miserably; what we should call for is parents actually talking to their children and spending some time with them, valuing their emotions and problems (however insignificant they might appear to a grown-up) in order to, you know, at least be able to tell if their kid is contemplating suicide.
Occhioverde@feddit.it to Technology@lemmy.world • Why LLMs can’t really build software (English)
2 · 7 months ago
Yes and no.
In many cases it is sufficient to specify which version you’re using and, as long as this doesn’t drift too far back in the context window (forcing you to repeat it), you are good to go. The Gradle DSL is a good example: whether it’s the old Groovy-based one or the new Kotlin-based one, you can always find extensive documentation and examples in the wild for both, so the model just needs to be told which one you want.
But for niche libraries that have recently undergone significant refactors, with the majority of tutorials and examples still built against past versions, models have a strong bias towards the old syntax, making it really difficult, if not impossible, to get them to use the new functions (at least with ChatGPT and GitHub Copilot with the “Web search” functionality on).
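To illustrate the kind of divergence the model has to keep straight, here is the same dependency declaration in the two Gradle DSLs (a minimal sketch; the library coordinates are placeholders, not from the original comment):

```kotlin
// build.gradle.kts — new Kotlin DSL: configuration is a function call,
// so parentheses and double quotes are mandatory.
dependencies {
    implementation("org.example:some-library:1.2.3")
}

// The equivalent in the old Groovy DSL (build.gradle) drops the
// parentheses and allows single quotes:
//
// dependencies {
//     implementation 'org.example:some-library:1.2.3'
// }
```

An LLM trained mostly on Groovy-era examples will happily emit the second form inside a `.kts` file, which fails to compile.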
Occhioverde@feddit.it to Memes@lemmy.ml • Western media and liberals on Lemmy apparently 🙄
1530 · 8 months ago
I’m sure you know the phrase “extraordinary claims require extraordinary evidence”; yet while there are almost 400 sources documenting the Uyghur persecution on the dedicated Wikipedia page, the basis for your argument is a photo of a city?
If what you’re trying to say is that there is a genocide in Gaza that is not adequately documented by our media, I’m with you; but since this thread is attracting some deniers, it’s perhaps worth remembering that this doesn’t erase a completely separate fact that is documented by every major human rights organization (here, for example, is the full report by Amnesty International: https://xinjiang.amnesty.org/).
Occhioverde@feddit.it to Technology@lemmy.world • Fairphone announces the €599 Fairphone 6, with a 6.31" 120Hz LTPO OLED display, a Snapdragon 7s Gen 3 chip, and enhanced modularity with 12 swappable parts (English)
1515 · 9 months ago
I really respect Fairphone and I’m a happy owner of the Fairphone 5, but I find it a bit puzzling that a company which urges its customers to keep their phones for longer than the 2.5-year average would release a new model just 2 years after the previous one.
Just my two cents, but they should’ve focused on developing either a tablet or a smartwatch to fill a gap in other markets before announcing yet another smartphone.




Yes and no. The example you gave is of a defective device, not an “unethical” one, though I understand that you’re trying to say they sold a malfunctioning product without telling anyone.
For LLMs, however, we know damn well that they shouldn’t be used as a therapist or as a digital friend to ask for advice; they are no more than a powerful search engine.
An example more in line with the situation we’re analyzing is a kid who stabs himself with a knife after his parents left him playing with one; are you sure you want to sue the company that made the knife in that scenario?