• 1 Post
  • 62 Comments
Joined 2 years ago
Cake day: June 11th, 2023



  • Balder@lemmy.world to Technology@lemmy.world · "aight... i'm out.." · edited · 7 days ago

    Always has been. Nothing has changed.

    The fact that OpenAI stores everything you type doesn’t mean a new prompt will use any of that prior information as context, unless you had the memory feature turned on (which also let you explicitly make it “forget” selected items from that context).

    What OpenAI stores and what the LLM uses as input when you start a session are totally separate things. This update is about the LLM being able to search your prior conversations and reference them (using them as input, in practice), so saying “Nothing has changed” is false.


  • Balder@lemmy.world to Technology@lemmy.world · "aight... i'm out.." · edited · 8 days ago

    Maybe for training new models, which is a totally different thing. This update means everything you type can be stored and used as context.

    I already never share anything personal with these cloud-based LLMs, but it’s becoming more and more important to run a local, private LLM on your own computer.


  • Do you really think Lemmy could handle the amount of people that Reddit has?

    As far as I know, the existing instances are usually running at capacity and always in need of donations, when the owners aren’t covering the costs themselves. I’m not sure how well most instances are doing right now.

    Maybe Lemmy would benefit from some way to get people to pay, such as purchasing the ability to give awards, like Reddit. Even if awards are useless in themselves, they might provide some fun that makes hardcore users willing to pay. But for that to work, all the apps would also need to display awarded posts differently, so I think it’s unlikely.

    But the point is that without a business model, the Fediverse will only be able to handle a limited number of enthusiasts before it runs into scaling problems.





  • Data augmentation has been a thing for a long time, but of course if the majority of your training data is synthetic, your model will do poorly on real-world data. Still, as these generative models get better and better at mimicking real-world data, and as we curate the results we want to use (removing the nonsense, hallucinations, artifacts, etc.), we’re effectively feeding them “more data”.

    I guess we’ll have to wait and see what effect that has on future models. Overall I think the improvements to LLMs have been good; even if progress is slow, we’re still figuring out how to turn them into more useful tools. I don’t know how much image generation models have improved in the last two years, though.