Understand AI in plain English, what OMB M-25-21 actually requires, and identify where AI can help your work. Day 1 of the free federal AI course.
Let's start with what AI is not. It is not a sentient being. It is not magic. It does not "understand" your work the way a colleague does. And it will not replace your judgment — at least not the good kind.
What AI actually is: software that finds patterns in large amounts of data and generates outputs based on those patterns. That's it. The reason it feels remarkable is that the patterns it has learned come from nearly every text document ever put on the internet. So when you ask it a question, it draws on a vast — though imperfect — base of human knowledge to respond.
For practical purposes, most AI tools you'll encounter in a federal context do one of four things:
Notice what is not on that list: making final decisions, operating without oversight, or handling classified information without specific authorization. Those limits come from the current federal policy framework, not from the technology itself. And understanding that framework is the most important thing you can do right now.
In April 2025, the White House Office of Management and Budget (OMB, the office that sets policy for how federal agencies operate) issued Memorandum M-25-21 (a numbered policy memo), titled "Accelerating Federal Use of Artificial Intelligence through Innovation, Governance, and Public Trust." This is the governing document for federal AI policy in 2025 and beyond.
Here is what it actually requires — translated from policy language into plain English:
| Requirement | What It Means | Who's Responsible |
|---|---|---|
| AI Use Case Inventory | Every agency must maintain a public inventory of all AI systems in use. Entries must describe the use case, the data used, the risk level, and the human oversight mechanism. | Chief AI Officer (CAIO — the senior official responsible for AI strategy at each agency) with input from all program offices |
| Chief AI Officer (CAIO) | Each agency must designate a CAIO responsible for AI governance, strategy, and coordination with OMB. | Agency head designates; often the CIO (Chief Information Officer) or a senior IT official |
| AI Governance Board | Agencies must establish internal review committees to evaluate AI use cases before they are deployed in any official capacity. | CAIO chairs; membership should include legal, privacy, and program leads |
| Workforce Training | Agencies must develop and implement AI literacy training for their workforce. This course counts. | CHCO (Chief Human Capital Officer — the head of HR for an agency) and CAIO jointly responsible |
| Procurement Standards | When an agency buys an AI system from a vendor, it must verify the vendor meets specific standards — including transparency (can they explain how it works?), explainability (can they explain why it made a specific decision?), and testing standards (was it validated on real data?). | Contracting officers with CAIO guidance |
| High-Impact AI Review | AI systems that affect significant rights or safety must undergo enhanced review with mandatory human oversight. | Program leads with legal and privacy review |
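To make the inventory requirement concrete, here is a minimal sketch of what a single inventory entry might look like as a structured record. The field names are illustrative assumptions for this course, not an official OMB schema; M-25-21 requires the substance (use case, data, risk level, human oversight), not any particular format.

```python
from dataclasses import dataclass, asdict

@dataclass
class AIUseCaseEntry:
    """One entry in an agency's public AI use case inventory.

    Illustrative field names only; check your agency's actual
    inventory template before drafting real entries.
    """
    use_case: str         # what the AI system does
    data_used: str        # data sources the system relies on
    risk_level: str       # e.g., "low" or "high-impact"
    human_oversight: str  # the human review mechanism in place

# Example: a low-risk, high-value application of the kind
# agencies are deploying today
entry = AIUseCaseEntry(
    use_case="Summarize public comments on proposed rules",
    data_used="Publicly submitted comment text",
    risk_level="low",
    human_oversight="Analyst reviews every summary before use",
)
print(asdict(entry)["risk_level"])  # → low
```

Thinking of each entry as a record like this makes the Day 3 exercise easier: you already know the four pieces of information every entry must carry.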
The memo uses the phrase "AI-ready workforce" 11 times. But what does that actually mean for someone who is not a data scientist?
It means three things:
1. **Knowing which tasks fit.** Not every task is a good candidate for AI assistance. A memorandum that requires your institutional knowledge and policy judgment is not. A meeting summary that needs to be drafted quickly from raw notes is. The skill is knowing the difference, and making that call quickly.

2. **Verifying before trusting.** AI tools make mistakes. They "hallucinate": they invent facts that sound plausible but aren't true. They can be confidently and completely wrong. An AI-ready workforce verifies outputs before acting on them, especially in high-stakes situations. This is not paranoia; it is professional judgment applied to a new tool.

3. **Knowing the rules.** You don't need to memorize M-25-21. But you do need to know: does this use case need to go in the inventory? Does this data classification allow this tool? Is there a human in the loop where required? These are operational questions that affect your day-to-day work.
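Those operational questions amount to a quick triage check you can run before using an AI tool on any task. The function below is a hypothetical sketch of that logic; the real criteria come from your agency's CAIO and governance board, not from this example.

```python
def triage_ai_use(in_inventory: bool,
                  data_classification: str,
                  tool_approved_for: set,
                  human_in_loop: bool,
                  high_impact: bool) -> list:
    """Return a list of blockers to resolve before using an AI tool.

    Hypothetical checklist logic mirroring the three operational
    questions: inventory status, data classification, human oversight.
    """
    blockers = []
    if not in_inventory:
        blockers.append("Use case is not in the agency AI inventory")
    if data_classification not in tool_approved_for:
        blockers.append(
            f"Tool is not approved for {data_classification} data")
    if high_impact and not human_in_loop:
        blockers.append("High-impact use requires human oversight")
    return blockers

# A low-risk drafting task with an approved tool passes cleanly
print(triage_ai_use(True, "public", {"public", "internal"},
                    human_in_loop=True, high_impact=False))  # → []
```

An empty list means no obvious blockers; anything in the list is a question to raise with your governance contacts before proceeding.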
As of April 2026, here is the honest state of federal AI adoption:
What's working well: Document summarization, meeting note drafting, policy research, internal knowledge management, and procurement document preparation. These are low-risk, high-value applications that agencies are deploying today with strong results.
What's being piloted carefully: Benefits determination assistance, fraud detection, case prioritization. These are higher-stakes applications that require robust human oversight and are subject to enhanced review under M-25-21.
What's still off the table: Autonomous decision-making on rights-affecting determinations (AI systems making final calls on benefits, enforcement, or legal matters without human review), use of non-FedRAMP tools with sensitive data (FedRAMP is the federal security certification program — if a cloud tool isn't FedRAMP authorized, it hasn't been vetted for government data), and AI systems that can't be explained or audited. These are not just policy restrictions — they are reasonable professional standards.
Understanding this landscape tells you where your agency is likely focused and what types of use cases will get governance approval versus what will face scrutiny.
This exercise produces a real output you can use when you get to Day 3 (writing use cases). Take 10 minutes with this framework:
Keep this list. You will use it on Day 3 to draft formal use case entries.
Before moving on, confirm understanding of these key concepts:
Our 2-day in-person bootcamp includes a full federal AI strategy workshop — group exercises, live use case drafting, and governance templates. Section 127 eligible. Five cities.
Reserve Your Seat →