The Illusion of Understanding
AI gives answers. You still need to ask the right questions.
I made up some revenue figures and fed them into an AI: a simple series of five quarters, climbing slowly. I asked whether these revenues were good. It told me the trend looked positive, and that a consistent, unbroken upward trend is a healthy sign.
The question was meaningless, because it lacked any context. And yet the AI answered with confidence. To be fair, it did flag that it would need an industry benchmark to say whether the increase is truly good. But buried beneath that caveat was a response that felt reassuring. The kind of answer that could easily make you believe everything is fine.
That’s the trap.
People have always been able to get bad financial advice: from unqualified friends, dubious websites, or gut instinct. But AI is different. It doesn't hesitate or sound uncertain. It speaks in the calm, authoritative tone of an expert, even when it’s working with nothing. The more convincing the answer sounds, the less likely you are to question it.
This is precisely why financial literacy matters more now, not less. The bottleneck has shifted. Getting an answer used to be hard. Now it’s instant. The hard part today is knowing which questions to ask, and whether the answer you’re getting is actually meaningful. That requires understanding the subject yourself.
AI is genuinely powerful. But without the financial knowledge to interrogate its output, you’re not using a tool. You’re just outsourcing your blind spots.
Confident summaries, shallow foundations
AI compresses complexity into clean, confident summaries. That’s genuinely useful. The mistake is confusing accessibility with understanding.
AI doesn’t automatically make the person using the tools more financially literate. And the gap between those two things is where the danger lives. Without basic financial knowledge, you can’t tell the difference between a good answer and a plausible-sounding one.
A financially literate person wouldn’t just ask better questions. They would know that the original question was the wrong one entirely. That instinct is something AI cannot supply. It has to come from you.
What makes things worse is that the stakes of being confidently wrong are higher than ever. When financial analysis was slow and expensive, decisions moved slowly too. The friction created natural checkpoints: time to consult someone, to notice something felt off, to get a second opinion.
AI removes that friction. Decisions that used to take weeks now happen in hours. That’s mostly a good thing. Except when the underlying analysis is flawed. The speed advantage of AI-assisted finance only works if the person using it can move fast with confidence because they understand what they’re looking at. Moving fast without that understanding is just a faster way to make expensive mistakes.
The right questions don’t ask themselves
Back to our revenue example. Beyond benchmarking against other companies in your industry, there is much more to investigate. Raw direction (up or down) tells you almost nothing. You want, for instance, to understand what drives the growth: more customers, higher prices, or larger orders per customer.
But not only that. You want to understand the gross margins behind these revenues. Is the growth profitable? Revenue can grow while a business deteriorates. Seasonality or one-off items may be distorting your figures. A financially literate person knows that revenue is just the starting point.
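To make that concrete, here is a toy illustration (every number below is invented): a business whose headline revenue rises every quarter while the customer base shrinks and gross margin collapses. Raw direction looks healthy; the decomposition tells a different story.

```python
# Toy decomposition: revenue = customers * average revenue per customer.
# All figures are invented; the point is that a rising top line can
# hide a shrinking customer base and deteriorating margins.

quarters = ["Q1", "Q2", "Q3", "Q4", "Q5"]
customers = [200, 190, 180, 170, 160]             # customer base shrinking
revenue_per_customer = [500, 553, 611, 676, 750]  # price rises masking the churn
cogs = [60_000, 68_000, 77_000, 87_000, 98_000]   # costs growing faster than revenue

for q, c, rpc, cost in zip(quarters, customers, revenue_per_customer, cogs):
    revenue = c * rpc
    margin = (revenue - cost) / revenue
    print(f"{q}: revenue {revenue:>7,}  customers {c}  gross margin {margin:.0%}")
```

Run it and revenue climbs from 100,000 to 120,000 while gross margin falls from 40% to roughly 18%. An AI asked “is this revenue good?” can cheerfully report the upward trend; only the decomposition exposes the deterioration.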
Numbers only make sense relative to expectations and the conditions that produced them. How does this compare to the company’s own forecast or budget? What is the customer acquisition cost trend alongside this revenue growth? Is this growth sustainable, or is it dependent on factors that won’t repeat?
Of course, you can prompt AI to help you here. Asking “what am I missing?” or “what else should I look at?” will often surface useful directions. And that’s genuinely helpful as a starting point. But it introduces a new problem: how do you know when to stop? AI will give you a list of things to investigate. Without financial knowledge, you can’t tell which ones are critical and which are peripheral, how deep to go on each, or whether something important didn’t make the list at all. You’ve moved the dependency one level up, but you haven’t resolved it.
Validating the output
A financial model is only as good as its assumptions. AI doesn’t audit your assumptions. It builds on them. The output sounds polished. But it can still be deeply flawed.
AI will build you a beautiful cash flow forecast based on whatever inputs you give it without questioning whether those inputs are realistic. A founder who doesn’t understand unit economics might feed in optimistic assumptions and receive a compelling growth model that obscures a fundamentally broken business.
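As a minimal sketch of that dynamic (all figures invented), the same simple forecast model produces a compelling growth curve or a stagnating one depending entirely on one unexamined input, here the monthly churn rate. The model never questions which is realistic:

```python
# Minimal sketch: a forecast compounds whatever assumptions it is fed.
# All figures below are invented for illustration.

def forecast_revenue(start_customers, monthly_new, monthly_churn_rate,
                     revenue_per_customer, months=12):
    """Project monthly revenue from simple, unquestioned inputs."""
    customers = start_customers
    revenues = []
    for _ in range(months):
        customers = customers * (1 - monthly_churn_rate) + monthly_new
        revenues.append(customers * revenue_per_customer)
    return revenues

# Optimistic assumption: only 2% of customers churn each month.
optimistic = forecast_revenue(500, 50, 0.02, 100)

# The same business, same model, with 8% monthly churn.
realistic = forecast_revenue(500, 50, 0.08, 100)

print(f"Month 12, optimistic: {optimistic[-1]:,.0f}")
print(f"Month 12, realistic:  {realistic[-1]:,.0f}")
```

Both outputs are equally polished. Only someone who knows what a plausible churn rate looks like for this kind of business can tell which forecast describes reality.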
The model looks professional. The thinking behind it isn’t. And because AI produced it, it carries an air of authority it hasn’t earned.
What does it take to spot a gap, a missing caveat, or a flawed assumption? Again, that requires baseline knowledge. You need to read the answer that AI produces critically.
Where AI genuinely helps
AI can do real work: modelling scenarios, explaining concepts, flagging things you might miss. But it works best as a thinking partner for someone who already has a foundation, not as a substitute for one.
AI genuinely does lower the barrier to financial tasks. Things that required an accountant or analyst five years ago, such as building a model, reading a financial statement, or stress-testing assumptions, are now accessible to anyone willing to ask the right questions. That’s real and worth acknowledging.
AI is not a replacement for financial knowledge. It’s a multiplier of whatever financial knowledge you already have.
A financially literate person using AI becomes dramatically more capable. They know which questions to ask, they can spot when an answer doesn’t make sense, they understand what the output is telling them and what it isn’t. The tool extends their judgment. Seen like this, AI is a very good discussion partner that helps you work out a good solution.
A financially illiterate person using AI gets confident-sounding answers they can’t evaluate. They don’t know which questions they’re not asking. They can’t tell when the model has made a wrong assumption. They mistake fluency for accuracy. The tool creates an illusion of understanding without the substance.
The risk isn’t that AI gives wrong answers. It’s that wrong answers become harder to detect when they’re wrapped in clean prose and a well-formatted table. Your financial literacy is what lets you have a real conversation rather than simply being talked at.
The quiet work that pays off
Financial literacy can feel like one of those things you’ll get to someday. When things slow down. When it feels more urgent. When you have more time.
But here’s the truth: once it’s urgent, you won’t have time to prepare. When you’re looking at a loan agreement, evaluating a business opportunity, or trying to make sense of your company’s numbers in a board meeting, that is not the moment to start learning. That moment rewards the preparation you did quietly, in advance, when nothing was on the line.
And the good news is that it compounds. Every concept you understand makes the next one easier to grasp. Every time you read a set of numbers and something clicks, your confidence grows. You start asking better questions. Not just of AI, but of advisors, of partners, of yourself. You stop nodding along and start actually knowing.
The difference this makes in real life is hard to overstate. It shows up when you negotiate. When you invest. When you catch something others missed. When you sit in a room and understand what’s really being said. Financial literacy doesn’t just make you better with money. It makes you clearer-headed, harder to mislead, and more confident in the decisions that matter most.
That’s exactly why I write about this. Not to turn you into an accountant, but to give you the foundation that lets you engage with the financial world — and increasingly, with AI — on your own terms. One concept at a time, one newsletter at a time. The progress is quiet at first. Then one day you realise you’re asking exactly the right questions.
And that’s when it all starts to work.

A closing thought: this kind of performed confidence is not unique to AI. It shows up in organisational leadership too; it is fundamentally human. We accept it from any number of people, including those we treat as experts, whether or not they actually have the expertise. The difference with AI is that you can question it without worrying about the implications of those questions. You can safely probe, dig, and find out for yourself whether that confidence is actually earned.