Why AI Doesn’t Work for Most Investors
Fix these 13 mistakes and AI starts working
Every time I hear “AI doesn’t work for investing,” I smile.
Because I know what happened.
They opened ChatGPT.
Asked something vague.
Got a generic answer.
Then blamed the tool.
AI didn’t fail.
The way they used it did.
AI doesn’t work for most investors because they repeat the same 13 mistakes, and I’ll show you each one so you can stop doing them:
Thinking Errors
Everything starts with how you think.
If your thinking is wrong, nothing you do with AI will work.
1. The Magic Button Illusion
The more people talk about AI, the more it starts sounding like an all-knowing, all-powerful machine that can solve everything.
Yes, AI is powerful...
But not the kind of powerful where one click hands you a 100-bagger.
You still have to think.
You still have to judge.
Here’s where it goes wrong:
you try AI for an hour, it doesn’t hand you the perfect stock idea,
and you decide it’s useless.
That’s like a new investor expecting a 50% CAGR and quitting when it doesn’t happen.
Wrong expectations kill the tool.
The fix:
Set real expectations:
AI won’t pick stocks for you: it won’t tell you “Buy LVMH.”
AI makes you 10× faster at understanding a business you’ve never seen before, when you use the right prompts and the right process.
AI makes you clearer by distilling a business into 3–5 real drivers you could explain to a child.
AI makes it easier to judge management by letting you quickly compare what they said in past filings and calls with what they’re doing today.
You keep the judgment, because only you know your circle of competence, your risk tolerance, and your opportunity cost.
2. The Shiny Tool Problem
Don’t fall into the trap of comparing every model you see.
Gemini, ChatGPT, Claude… it turns into endless debate.
Here’s the real issue:
Most AI tests don’t measure what investors actually need.
They test:
math puzzles
coding level
logic games
None of that tests real business analysis.
Switching tools doesn’t improve your research.
Because the edge isn’t the tool.
It’s knowing how to ask clear, expert questions, and that skill takes time.
If your inputs are unclear, every model fails.
The fix:
pick one tool
learn the basics
build your flow
add others later if you want
Right now, Gemini DeepResearch + NotebookLM are the strongest mix in my workflow.
3. The Perfection Paralysis
Don’t wait for the perfect workflow or the perfect prompt.
Don’t wait until everything is perfectly organized before you start.
Nothing gets figured out without messy reps.
You learn by doing, not by overthinking.
The fix: start simple, use what works, and improve as you go.
Progress beats perfection every time.
4. The “AI Is Cheating” Belief
Some investors feel guilty using AI, like it makes their work less real or less earned.
Others think AI is only for beginners, and that after 20 years in the market they should not need it.
But refusing to use AI is like refusing to use a calculator to stay authentic.
It does not make you better. It only slows you down.
AI is only a tool.
The more experience you have, the sharper your questions become.
The better your prompts get.
And the more useful AI becomes in your hands than in any beginner’s.
Input Errors
AI is only as good as the input you give it.
Good inputs force depth. Bad inputs guarantee shallow answers.
Here are the input mistakes that ruin everything.
5. The Undefined Objective
Most prompts fail because the investor never says what they actually want.
They write:
“Analyze this business.”
“Can you summarize this company?”
“What do you think of this stock?”
None of these say what success looks like.
AI has to guess.
And when AI guesses, you get generic answers.
This is where context engineering matters.
AI needs 4 things to give a sharp answer:
The Goal
What you’re trying to do:
understand the business, spot risks, check unit economics, etc.
The Lens
The angle you care about:
quality investor, early research, deep research.
The Scope
How deep you want it to go:
3 bullets, a full breakdown, a 1-minute read, a detailed scan.
The Frame
What “done” looks like:
questions answered, checklist completed, thesis built, risks listed.
When you define these things, AI stops guessing
and starts acting like a real analyst.
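Here’s what that might look like in practice (LVMH is just a placeholder; swap in whatever you’re researching):
“Goal: help me understand how LVMH actually makes money. Lens: long-term quality investor doing early research. Scope: a short breakdown, no more than 5 bullets. Frame: I’m done when I can explain the 3 main revenue drivers and the 2 biggest risks in plain language.”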
6. The Everything-in-One Prompt
Investors pack 10 jobs into 1 prompt and expect depth.
Explain the business.
Analyze margins.
Check valuation.
Compare competitors.
List risks.
Find catalysts…
All in one breath.
This splits the model’s attention and turns everything shallow.
Nothing deep happens in a mega-prompt.
One prompt = one mission.
That is how you get signal instead of noise.
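For example, that mega-prompt might become three separate missions, run one after another:
“Explain how this business makes money, in 5 bullets.”
“Using the last 3 annual reports, explain what drove the change in operating margins.”
“List the 5 biggest risks to this business and point to where each one shows up in the filings.”
Each answer sharpens the next question, and each one stays deep.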
7. The Leading Question Error
If your prompt contains your opinion, AI will mirror it.
You say “I think margins are improving,”
AI says “Yes, here are 5 reasons.”
You say “This company looks strong,”
AI says “Absolutely.”
This isn’t analysis.
It’s confirmation with extra words.
Ask neutrally.
Let AI tell you something you didn’t already think.
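One way to reframe it:
Instead of “I think margins are improving, am I right?”,
ask “What happened to gross and operating margins over the last 5 years, and what explains the change?”
Same topic, but now AI has to show you the evidence instead of agreeing with you.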
8. The Missing Constraints Problem
Most prompts give no boundaries.
So AI pulls from everywhere: blogs, news, random opinions.
You want precision, but you didn’t set rules.
Add constraints:
Use filings only.
Cite sources.
Stay inside the last X years.
No constraints → hallucination zone.
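A constrained prompt might look like this:
“Using only the last 3 annual reports, explain what drove the change in operating margins. Cite the filing and section for every claim. If something isn’t covered in the filings, say so instead of guessing.”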
System Design Errors
Mindset matters.
Inputs matter.
But without a system behind it all, nothing compounds.
9. The Random Route Mistake
You don’t need a new angle every time you analyze a company.
You just need a clear path you follow every time.
Build 1 sequence and stick to it:
Business model
Unit economics
Management and incentives
Competition and moat
Risks…
Use the same path for every company you study.
Clarity comes from repetition, not randomness.
For example, when I dive into a new industry, I always follow this sequence.
10. The No-Memory Problem
If you don’t save what works, you’ll feel like you’re starting from zero every time.
You read filings.
You run prompts.
You find good insights.
And then… all of it disappears.
Next company?
Blank page again.
Build memory into your system:
Save the prompts that give good answers
Save the questions that always work
Save your checklists
Save your notes and patterns
Save the insights you want to reuse
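This doesn’t need special software. A minimal version might be one note per company plus a single “prompts that work” file, with entries like:
Prompt: “Using only the last 3 annual reports, explain what drove the change in operating margins.”
Best for: companies with big margin swings.
Reuse when: starting the unit-economics step on a new name.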
Your research should get easier with each company you analyze.
Not harder.
Skill-Building Errors
AI won’t give you an edge if you don’t practice.
These are the mistakes that keep you from turning AI into a real skill.
11. The Consistency Gap
Trying AI once or twice won’t change anything.
A couple of good answers don’t mean you “get it.”
If you stop for weeks, you lose all momentum.
AI only gets useful when you use it on the companies and industries you actually follow:
your watchlist, your ongoing theses, your real decisions.
Use it every week, even for 15 minutes.
That’s when the edge starts to show.
12. The Sprint Mentality
Going all-in for 2 days doesn’t help you.
5 workflows, 10 prompts, 4 deep dives…
and then nothing for a month.
You can’t build an AI-powered research system this way.
1 focused analysis per week beats 10 in one weekend.
Small, steady work on real companies compounds faster than any sprint.
13. The No-Feedback Loop
If you never check whether AI actually improved your analysis,
your workflow stays stuck at version one.
You run prompts.
Read the outputs.
Move on.
But you don’t ask the real questions:
Did this change my decision?
Did it reveal something I missed?
Did it reduce my uncertainty?
Did it surface a risk I hadn’t seen?
Reflection is what upgrades your system.
Without it, nothing compounds.
A simple check after each deep dive
makes your next analysis twice as strong.
Recap: The 13 Mistakes That Break AI for Investors
Most investors break AI in 4 places:
Thinking (wrong expectations)
Inputs (vague prompts, no constraints)
Systems (no clear path, no memory)
Skill (no reps, no feedback, no improvement)
Fix these, and AI starts becoming a real research edge.



