Cut through the AI hype. The 4 types of AI tasks, how to evaluate vendor claims, and a framework for categorizing what AI can and can't help your team with. Day 1 of the free AI for managers course.
Every vendor pitching you an AI tool wants you to believe it can do everything. Every breathless news article makes AI sound like it will either save civilization or end it. Neither is useful to you as a manager trying to run a team and hit quarterly goals.
What you need is not enthusiasm or anxiety — you need a working mental model. A way of thinking about what AI does that is accurate enough to make good decisions, practical enough to apply quickly, and durable enough to use as AI continues to evolve.
Here is one that holds up: AI is a pattern-completion engine. It learns patterns from enormous amounts of existing data and uses those patterns to complete new inputs. That's it. When you type a prompt into ChatGPT, you are providing the beginning of a pattern, and the AI is completing it based on what it has seen in training data.
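To make that concrete, here is a deliberately tiny sketch in Python (the corpus and all names are invented for illustration): it learns which word tends to follow which from a few sentences, then completes a prompt by repeating the most likely next word. Real models learn far richer patterns from billions of documents, but the underlying move is the same.

```python
from collections import Counter, defaultdict

# Toy "training corpus": four short sentences. Real systems learn from
# billions of documents, but the principle is the same.
corpus = (
    "the meeting ran long . the meeting ended late . "
    "the report was late . the report was done ."
).split()

# Learn the pattern: count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def complete(prompt: str, max_words: int = 5) -> str:
    """Complete a prompt by repeatedly picking the most common next word."""
    words = prompt.split()
    for _ in range(max_words):
        candidates = following.get(words[-1])
        if not candidates:
            break  # the model has never seen this word; no pattern to follow
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(complete("the meeting"))  # -> "the meeting ran long . the meeting"
```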
This simple model explains both AI's capabilities and its failures — and it will help you evaluate any AI claim in about 30 seconds.
Everything you will ever be pitched by an AI vendor, everything your team will ever ask about, falls into one of four categories. Understanding these categories is the single most useful framework a non-technical manager can have.
Within those four task types, AI performs best when a few conditions hold, each of which follows from the pattern-completion model:

- The task follows a well-worn pattern with plenty of existing examples, which is exactly what a pattern-completion engine learns from.
- Occasional errors are tolerable, because every AI system makes them.
- A human can verify the output quickly, so mistakes are caught before they matter.
- Everything the AI needs to know fits in the prompt; success does not depend on unwritten organizational context.
Understanding AI's failure modes is as important as understanding its strengths. Here is where managers consistently get burned:
AI learns patterns from past data. It does not reason the way a human does. If you ask it to analyze a situation that is genuinely new — an unusual regulatory scenario, an unprecedented organizational challenge — it will produce output that sounds confident and plausible but may be completely wrong. It's completing "what a good answer looks like" rather than actually working through your specific problem. Always verify AI analysis of novel situations against expert judgment.
AI "hallucinates" (that means it invents facts that don't exist). This is not a bug being fixed — it's a fundamental property of how the technology works. AI will fabricate citations to papers that don't exist. It will state incorrect statistics with full confidence. It will invent specific facts when it doesn't know the real answer.
Any AI output containing specific numbers, named sources, or dates must be independently verified before you act on it or share it.
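One way a team might operationalize that rule is a crude screening pass that flags drafts containing those claim types for human review. The sketch below is a hypothetical heuristic, not a product; the patterns are deliberately simple and will miss plenty.

```python
import re

# Hypothetical screening heuristic: flag AI-written drafts that contain
# the kinds of claims that must be verified before acting or sharing.
# Deliberately crude; it exists to route text to a human, not to judge it.
CHECKS = {
    "specific number": re.compile(r"\b\d[\d,.]*%?"),
    "year or date": re.compile(r"\b(19|20)\d{2}\b"),
    "named source": re.compile(r"\b(according to|et al|cited in|study by)\b", re.I),
}

def needs_verification(ai_output: str) -> list[str]:
    """Return the claim types found, i.e. the reasons a human must check this text."""
    return [label for label, pattern in CHECKS.items() if pattern.search(ai_output)]

draft = "According to a 2021 study by Smith et al., productivity rose 34%."
print(needs_verification(draft))
# -> ['specific number', 'year or date', 'named source']
```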
AI only knows what you put in the prompt. It does not know your organization's history, your team's dynamics, or the unwritten rules of how decisions get made in your context.
When you ask AI for advice on a complex organizational situation, it's giving you its best pattern-match on situations like yours — not actual knowledge of your specific situation. Use it as a starting point, not a final answer.
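In practice, that means the prompt has to carry the context. Here is a hypothetical template sketch (every field name is invented) showing the kind of organizational detail worth packing in before asking for advice:

```python
# Hypothetical prompt template: every field name below is invented. The
# point is that the model sees only this text, so the context you want
# weighed has to be written into it.
PROMPT_TEMPLATE = """You are advising a manager. Relevant context:
- Team: {team_description}
- Constraint: {constraint}
- How decisions get made here: {decision_norms}

Question: {question}
"""

prompt = PROMPT_TEMPLATE.format(
    team_description="8-person support team, two new hires still ramping up",
    constraint="hiring freeze through Q3",
    decision_norms="director sign-off required for any tooling change",
    question="How should we absorb the ticket backlog spike without new headcount?",
)
print(prompt)  # paste into whatever AI tool you use
```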
AI produces competent, conventional work quickly. It is not good at genuine originality — the insights that come from seeing something nobody has connected before. AI can assist with execution but rarely with the creative leap that changes how an industry thinks. For that, you still need humans.
You will be pitched a lot of AI tools. Here is a five-question filter that surfaces the most important information in ten minutes:
| Question | What a Good Answer Sounds Like | Red Flag Answer |
|---|---|---|
| What specific task does this do? | "It classifies incoming support tickets into 12 categories and routes them to the right team." | "It uses AI to transform your entire operations." (No specifics.) |
| How accurate is it? | Specific accuracy rate with test methodology. "92% accuracy on a held-out test set of 10,000 tickets." | "It's very accurate" or "it gets better over time." (No numbers.) |
| What happens when it's wrong? | Clear description of failure modes and how humans catch and correct errors. | "It rarely makes mistakes." (Evasion — all AI makes mistakes.) |
| Can you show me a live demo on our actual data? | Yes. Demo on realistic data similar to yours. | "We'll set that up in phase 2." (They know it won't perform on real data.) |
| What do customers who have been using this for a year say? | Specific reference customers you can call, with measurable outcomes they've achieved. | "We're still early in rollout." (No proven track record.) |
Think about the work your team does on a weekly basis. List 10 recurring tasks: the things that happen every week or month. Then, for each one, make two assessments: first, which of the four task types it falls into; second, whether it is a strong candidate for AI assistance, a possible candidate, or a poor fit.
For each task you rate "Strong candidate" or "Possible," note the one specific condition that makes it suitable — or the one concern that needs to be addressed before deployment.
Keep this list. You will use it in Day 2 to frame your tool evaluation and in Day 3 to build a business case.
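If you want the inventory in a form you can sort and reuse across the course, here is one hypothetical way to record it (the field names, rating scale, and example tasks are all invented; a spreadsheet works just as well):

```python
# Hypothetical format for the task inventory. Field names, the rating
# scale, and the example tasks are invented illustrations.
task_inventory = [
    {
        "task": "Draft the weekly status summary",
        "task_type": "?",  # fill in: which of the four task types it falls into
        "rating": "Strong candidate",  # Strong candidate / Possible / Poor fit
        "note": "Hundreds of past examples; errors are easy to spot in review",
    },
    {
        "task": "Decide next quarter's team structure",
        "task_type": "?",
        "rating": "Poor fit",
        "note": "Genuinely novel, context-heavy decision; see failure modes above",
    },
]

for item in task_inventory:
    print(f"{item['task']}: {item['rating']} ({item['note']})")
```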
Before moving on, confirm understanding of these key concepts:

- AI is a pattern-completion engine: it completes new inputs based on patterns learned from existing data.
- Every AI pitch and every potential use case falls into one of four task categories, each with conditions under which AI performs well.
- The recurring failure modes: confident but wrong answers to genuinely novel situations, hallucinated facts and citations, missing organizational context, and competent-but-conventional output rather than genuine originality.
- The five-question vendor filter, and what a red-flag answer sounds like for each question.
Our 2-day AI bootcamp includes a full-day leadership module with live vendor evaluation workshops. Five cities. $1,490 per seat.
Reserve Your Seat →