LLMs: The standard is not perfection
Right now, LLMs are shaking up the way we live and access information. There's a lot of debate around how useful they can be if they'll lie to you.
Everything lies to you
When I get info from an LLM, I affectionately tell folks "the hallucination box says..."
But this isn't unique to LLMs!
- Google searches are full of incorrect or inapplicable information. A core skill of web search is sorting out what's true and what's relevant.
- StackOverflow famously closes questions as duplicates of questions that aren't actually duplicates. Accepted answers can be wrong, outdated, or solve a problem you don't have while confidently proclaiming it's you who's solving the wrong problem.
- People you know get things wrong all the time. Wrong facts, bad opinions, faulty memories.
Nobody argues these shouldn't be used because they get things wrong. We just know we have to use our judgement.
But those are powered by people. LLMs are machines. That changes our expectations.
Which seems strange, because personal computers are famous for behaving in faulty, erratic, inscrutable ways. Yet we hold them to the standard of their best days, when they function with unerring determinism.
I'm going to keep using LLMs for personal use. Rather than grump that they're unreliable, I'd rather build the skill of assessing when they're right or wrong, like I have for other tools. That way I get the benefits.
Programmatic use is harder, since there's no human in the loop to exercise judgement, and I'd like to use LLMs in exactly the situations that are hardest to verify algorithmically. But I'm sure I'll figure out how to make them useful there, too.