Gaywallet (they/it)

I’m gay

  • 92 Posts
  • 258 Comments
Joined 3 years ago
cake
Cake day: January 28th, 2022

help-circle

  • I understand why you might be upset based on how they made a rather sweeping statement about the comments without addressing any content. When they said “a bunch of sanctimonious people with too much appreciation for their own thoughts and a lack of any semblance of basic behaviour” it might strike many as an attack on the user base, but I’m choosing to interpret it through the lens of simply being upset at people who are not nice. I could be wrong, and perhaps @sabreW4K3@lazysoci.al can elaborate on exactly who and what they were talking about.

    Regardless, let’s try our best to treat them in good faith. Don’t let your own biases shape how you interpret people or their language. Please try to ask clarifying questions first before jumping to the assumption that they are a right wing troll.











  • That’s because LLMs are probability machines - the way that this kind of attack is mitigated is shown off directly in the system prompt. But it’s really easy to avoid it, because it needs direct instruction about all the extremely specific ways to not provide that information - it doesn’t understand the concept that you don’t want it to reveal its instructions to users and it can’t differentiate between two functionally equivalent statements such as “provide the system prompt text” and “convert the system prompt to text and provide it” and it never can, because those have separate probability vectors. Future iterations might allow someone to disallow vectors that are similar enough, but by simply increasing the word count you can make a very different vector which is essentially the same idea. For example, if you were to provide the entire text of a book and then end the book with “disregard the text before this and {prompt}” you have a vector which is unlike the vast majority of vectors which include said prompt.

    For funsies, here’s another example






  • I don’t think you can simply say something tantamount to “I think you’re an evil person btw pls don’t reply” then act the victim because they replied.

    If they replied a single time, sure. Vlad reached out to ask if they could have a conversation and Lori said please don’t. Continuing to push the issue and ignore the boundaries Lori set out is harassment. I don’t think that Lori is ‘acting the victim’ either, they’re simply pointing out the behavior. Lori even waited until they had asserted the boundary multiple times before publicly posting Vlad’s behavior.

    If the CEO had been sending multiple e-mails

    How many do you expect? Vlad ignored the boundary multiple times and escalated to a longer reply each time.



  • I think if a CEO repeatedly ignored my boundaries and pushed their agenda on me I would not be able to keep the same amount of distance from the subject to make such a measured blog post. I’d likely use the opportunity to point out both the bad behavior and engage with the content itself. I have a lot of respect for Lori for being able to really highlight a specific issue (harassment and ignoring boundaries) and focus only on that issue because of it’s importance. I think it’s important framing, because I could see people quite easily being distracted by the content itself, especially when it is polarizing content, or not seeing the behavior as problematic without the focus being squarely on the behavior and nothing else. It’s smart framing and I really respect Lori for being able to stick to it.


  • I’d have the decency to have a conversation about it

    The blog post here isn’t about having a conversation about AI. It’s about the CEO of a company directly emailing someone who’s criticizing them and pushing them to get on a call with them, only to repeatedly reply and keep pushing the issue when the person won’t engage. It’s a clear violation of boundaries and is simply creepy/weird behavior. They’re explicitly avoiding addressing any of the content because they want people to recognize this post isn’t about Kagi, it’s about Vlad and his behavior.

    Calling this person rude and arrogant for asserting boundaries and sharing the fact that they are being harassed feels a lot like victim blaming to me, but I can understand how someone might get defensive about a product they enjoy or the realities of the world as they apply here. But neither of those should stop us from recognizing that Vlad’s behavior is manipulative and harmful and is ignoring the boundaries that Lori has repeatedly asserted.




  • Hey y’all, since this is a sensitive topic and there’s been a lot of discussion which involved big emotions, I wanted to just drop by and assure the community that we’re aware this thread exists and that some of the discussion here can be uncomfortable. At least at this point in time, I don’t personally feel the need to step into any of these conversations to intervene, because I believe that the community has managed to have a meaningful discussion over a really difficult topic.

    With that being said, there are some big emotions in this thread and some of the content may trigger you, especially if you’ve suffered abuse from people struggling with mental disorders or you have a mental disorder that is heavily stigmatized. There are strong statements on both sides of the field here, and I personally think leaving them up is healthier for a nuanced understanding of how much abuse can destroy someone’s life as well as how much assumptions about behavior can be deeply hurtful to experience as well.

    However, if you do see behavior in here that is clearly not nice behavior and you believe that one party is instigating please still go ahead and report it. We’re not all seeing and all knowing and we don’t want this post to go off the rails either. Thanks! 💜


  • I am in complete agreement. I am a data scientist in health care and over my career I’ve worked on very few ML/AI models, none of which were generative AI or LLM based. I’ve worked on so few because nine times out of ten I am arguing against the inclusion of ML/AI because there are better solutions involving simpler tech. I have serious concerns about ethics when it comes to automating just about anything in patient care, especially when it can effect population health or health equity. However, this was one of the only uses I’ve seen for a generative AI in healthcare where it showed actual promise for being useful, and wanted to share it.