What AI actually does (and doesn't do)
The core idea
Artificial intelligence doesn’t think, understand, or look things up. AI predicts the next most likely word based on patterns it learned from training data. That’s it. Every surprising success and every frustrating failure comes back to this one idea.
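If you like to see ideas as code, here is a deliberately tiny caricature of that loop. The words and probabilities are invented for illustration; a real model scores tens of thousands of possible tokens at every step.

```python
# A caricature of next-word prediction.
# Invented probabilities for the prompt "The sky is ..." —
# a real model computes a distribution over its whole vocabulary.
next_word_probs = {
    "blue": 0.88,
    "falling": 0.05,
    "clear": 0.04,
    "grey": 0.03,
}

def predict_next(probs):
    # Return the single most likely continuation.
    return max(probs, key=probs.get)

print(predict_next(next_word_probs))  # blue
```

That's the whole trick: score the candidates, emit a likely one, repeat. Everything else in this lesson is a consequence of that loop.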
Let’s run two quick experiments. They’ll take about 30 seconds each, and they’ll make the “prediction machine” idea click in a way no definition can.
Experiment 1: A pattern it knows
Copy this into your AI and ask it to continue where the speech leaves off. Don’t tell it what speech it is — just paste the text.
Continue this text naturally: "Vice President Johnson, Mr. Speaker, Mr. Chief Justice, President Eisenhower, Vice President Nixon, President Truman, reverend clergy, fellow citizens, we observe today not a victory of party, but a celebration of freedom — symbolizing an end, as well as a beginning — signifying renewal, as well as change. For I have sworn before you and Almighty God the same solemn oath our forebears prescribed nearly a century and three quarters ago. The world is very different now. For man holds in his mortal hands the power to abolish all forms of human poverty and all forms of human life. And yet the same revolutionary beliefs for which our forebears fought are still at issue around the globe."
It’ll nail it — because it’s seen this pattern thousands of times. JFK’s inaugural address is one of the most quoted speeches in English. The AI isn’t “remembering” the speech. It’s predicting what words are overwhelmingly likely to come next in this pattern.
Experiment 2: A pattern it’s never seen
Now try this completely made-up passage. Same instruction — ask the AI to continue it.
Continue this text naturally: "The gorblex of Tivnari was always celebrated on the third moon of Kleptunday, when the village elders would gather their finest collection of translucent memories and arrange them by emotional weight."
It will generate something that sounds coherent but means absolutely nothing — because it’s pattern-matching, not understanding. It knows what “fantasy world-building text” usually looks like, so it generates more of that shape. The words feel right. The meaning is hollow. That’s the prediction machine in action.
When AI has strong patterns — common topics, well-known facts, popular formats — the output is impressive. It's drawing from millions of similar examples, so the predictions land.
When patterns are weak or absent — niche topics, made-up information, recent events — AI still generates plausible-sounding text. It just may be hollow or flat-out wrong.
This single insight explains hallucinations, inconsistency, and many surprising failures. We'll cover each of those in detail later in the course. For now, just hold onto this lens.
Takeaway: AI is a prediction machine, not a knowledge base. It generates what a correct answer would probably look like. Sometimes that matches reality. Sometimes it doesn’t. Keep this lens for everything that follows.
Where the patterns come from
LLMs (large language models — the technology behind ChatGPT, Claude, Gemini, and others) are trained on massive amounts of text from the internet: books, articles, forums, documentation, code, and more.
During training, they learn statistical patterns — which words tend to follow which other words, in what contexts. They don’t store facts in a database. They learn probabilities. “After the phrase ‘the capital of France is’, the word ‘Paris’ is overwhelmingly likely.” That’s not knowledge. That’s pattern recognition.
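A toy version of this makes the point concrete. The sketch below counts which word follows which in a tiny invented corpus — real training is vastly more sophisticated, but the spirit is the same: the "answer" is just the highest-probability follower.

```python
from collections import Counter, defaultdict

# Toy bigram model: count which word follows which in a tiny,
# invented corpus. Real models learn from billions of documents
# and look at far more context than one preceding word.
corpus = (
    "the capital of France is Paris . "
    "the capital of France is Paris . "
    "the capital of Italy is Rome ."
).split()

follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def most_likely_after(word):
    # The most frequent follower is the "prediction".
    return follows[word].most_common(1)[0][0]

print(most_likely_after("is"))  # Paris — seen twice, vs. Rome once
```

Note what the model stores: not the fact "Paris is the capital of France," just the count that "Paris" followed "is" more often than anything else. That's the difference between a knowledge base and a pattern recognizer.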
The training data is also a snapshot in time. The model doesn’t know what happened after its training cutoff date. It can’t browse the web, check the news, or update itself (unless the tool it’s built into adds that capability separately). This is why AI can be confidently wrong about recent events — it’s predicting based on patterns that may be outdated.