

Well, I’m generally very anti-LLM, but as a library author in Java it has been very helpful for creating lots of similar overloads/methods for different types and filling in the corresponding documentation comments. I’ve already done all the thinking; I just need to check that the new overload makes the right call or does the same thing the other ones do, and in that particular case the LLM is faster than writing it out by hand. But if I don’t yet know how I’m going to do something myself, I would never trust an AI to tell me.
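To make that concrete, here’s a rough sketch of the kind of repetitive overload work I mean. The class and method names are made up for illustration; the point is that once the first overload and its Javadoc exist, the rest are mechanical variations that are easy to verify:

```java
// Hypothetical example: near-identical overloads for different primitive types.
// Once the first one is written and reviewed, the others follow the same contract
// and the same Javadoc, just with a different element type.
public final class Arrays2 {

    private Arrays2() {}

    /**
     * Returns the index of the first element equal to {@code target},
     * or {@code -1} if no such element exists.
     *
     * @param array  the array to search, not {@code null}
     * @param target the value to look for
     * @return the index of the first match, or {@code -1}
     */
    public static int indexOf(int[] array, int target) {
        for (int i = 0; i < array.length; i++) {
            if (array[i] == target) {
                return i;
            }
        }
        return -1;
    }

    /**
     * Returns the index of the first element equal to {@code target},
     * or {@code -1} if no such element exists.
     *
     * @param array  the array to search, not {@code null}
     * @param target the value to look for
     * @return the index of the first match, or {@code -1}
     */
    public static int indexOf(long[] array, long target) {
        for (int i = 0; i < array.length; i++) {
            if (array[i] == target) {
                return i;
            }
        }
        return -1;
    }
}
```

Checking that the generated long[] variant really mirrors the int[] one takes seconds, which is why this works only when I already know exactly what the code should look like.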
I’m not sure how you would do that when you’re asking about something you don’t have expertise in yet, since it takes the exact same authoritative tone regardless of whether the information is real.
So far, research suggests this is not possible (unsurprisingly, given the nature of LLMs). Introspective outputs, such as expressed certainty or justifications for decisions, do not map closely to the LLM’s actual internal state.