

The latest in chatbot “assisted” legal filings. This time courtesy of an Anthropic’s lawyers and a data scientist, who tragically can’t afford software that supports formatting legal citations and have to rely on Clippy instead: https://www.theverge.com/news/668315/anthropic-claude-legal-filing-citation-error
After the Latham & Watkins team identified the source as potential additional support for Ms. Chen’s testimony, I asked Claude.ai to provide a properly formatted legal citation for that source using the link to the correct article. Unfortunately, although providing the correct publication title, publication year, and link to the provided source, the returned citation included an inaccurate title and incorrect authors. Our manual citation check did not catch that error. Our citation check also missed additional wording errors introduced in the citations during the formatting process using Claude.ai.
Don’t get high on your own AI as they say.
We already knew these things are security disasters, but yeah that still looks like a security disaster. It can both read private documents and fetch from the web? In the same session? And it can be influenced by the documents it reads? And someone thought this was a good idea?