Mar 22, 2024
How Smart Are LLMs?
Large Language Models are everywhere now. If you follow tech news, you've probably seen claims that LLMs are getting close to matching or even surpassing human intelligence. But when you look at how these models actually work, the picture is a lot more nuanced.
LLMs learn by ingesting huge amounts of text and finding statistical patterns in how humans write and speak. The quality of what they produce depends heavily on the data they were trained on.
- If you ask them something similar to what they've seen before, they'll usually give you a decent answer.
- If you ask them something far outside their training data, they may confidently give you wrong information. This is what people call "hallucination."
What LLMs are good at:
- Summarizing information
- Creative writing
- Translation
- Writing code
Where LLMs struggle:
- Remembering facts accurately. Some systems get around this by retrieving relevant documents from an external source at query time (this is called retrieval-augmented generation, or RAG).
- Doing math. Many systems hand calculations off to external tools like a calculator or code interpreter instead of trusting the model's arithmetic.
- Following complex chains of logic. You can improve results by including worked examples of step-by-step reasoning in your prompt (often called chain-of-thought prompting), but they still have limits.
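To make the RAG idea above concrete, here is a minimal sketch in Python. The document store and the word-overlap scorer are toy stand-ins (real systems use vector embeddings and a proper retriever); the point is just the shape of the technique: look up relevant facts first, then put them in the prompt so the model answers from supplied context rather than from memory.

```python
def retrieve(query, documents, top_k=2):
    """Rank documents by naive word overlap with the query (toy scorer,
    standing in for a real embedding-based retriever)."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query, documents):
    """Prepend retrieved context so the model can ground its answer
    in the supplied facts instead of its training data."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Tiny example corpus; in practice this would be a database or search index.
docs = [
    "The Eiffel Tower is 330 metres tall.",
    "Bananas are rich in potassium.",
    "The Eiffel Tower was completed in 1889.",
]

print(build_prompt("How tall is the Eiffel Tower?", docs))
```

The prompt that comes out contains the two Eiffel Tower facts and not the banana one, which is the whole trick: the model never needs to "remember" the height, it only needs to read it.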
These limitations aren't surprising once you understand what's happening under the hood. LLMs are essentially very sophisticated pattern matchers, not thinkers. They're incredibly useful tools, but they're not the same as human intelligence.
(Notes from the course Generative AI with Large Language Models on Coursera)
