By Greg Godbout
As the federal government once again stares down the barrel of a shutdown, much of Washington’s conversation turns to what stops — paychecks, services, contracts, and operations. But in the era of automation and artificial intelligence, a new question is starting to emerge: Can’t AI just keep the lights on while people are furloughed?
The short answer is no — and for good reason. Even the most advanced government AI systems depend on something shutdowns take away: human-in-the-loop (HITL) oversight. It’s a non-negotiable requirement written into policy, grounded in law, and essential to the safe, lawful, and trustworthy use of AI.
What “Human-in-the-Loop” Actually Means
“Human-in-the-loop” is more than a buzzword. It refers to the deliberate embedding of human judgment, intervention, and accountability into every stage of an AI system’s lifecycle. That includes:
- Before deployment: humans design, test, validate, and approve models.
- During operation: humans monitor, review, and override automated decisions when necessary.
- After action: humans audit outcomes, handle exceptions, and provide redress.
In the federal government, this is not optional. OMB Memorandum M-24-10 directs agencies to ensure human oversight and risk management for AI systems, especially those that impact safety, rights, or essential services. The NIST AI Risk Management Framework goes further, defining oversight as a prerequisite for trustworthy AI. Without qualified personnel actively engaged in these processes, an AI system is non-compliant — and often unsafe — to operate.
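To make the "during operation" piece concrete, here is a minimal sketch in Python of what a human review gate can look like. Everything in it is hypothetical: the threshold, the field names, and the reviewer check are illustrative, not any agency's actual system. The idea is simply that low-confidence or rights-impacting outputs get escalated to a person, and if no qualified reviewer is on duty, the compliant behavior is to hold the case rather than let the model decide.

```python
from dataclasses import dataclass

# Hypothetical example: route AI outputs through a human review gate.
# The threshold, fields, and reviewer check are illustrative only.

CONFIDENCE_THRESHOLD = 0.95  # below this, a person must decide

@dataclass
class ModelOutput:
    case_id: str
    recommendation: str    # e.g. "match" / "no_match"
    confidence: float
    rights_impacting: bool # e.g. benefits, enforcement, identity decisions

def route_decision(output: ModelOutput, reviewer_on_duty: bool) -> str:
    """Decide what the system does with one AI recommendation."""
    needs_human = output.rights_impacting or output.confidence < CONFIDENCE_THRESHOLD
    if not needs_human:
        return f"auto-accept {output.case_id} (logged for later audit)"
    if reviewer_on_duty:
        return f"escalate {output.case_id} to a human reviewer for the final call"
    # No qualified human available: the compliant move is to hold, not to guess.
    return f"hold {output.case_id}: pause automated processing until oversight resumes"

if __name__ == "__main__":
    sample = ModelOutput("A-1234", "match", confidence=0.91, rights_impacting=True)
    print(route_decision(sample, reviewer_on_duty=False))
```

That last branch is the shutdown problem in miniature: when oversight staffing goes to zero, the system's safe default is to stop.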
Why Human Oversight Is Non-Negotiable
There are several critical reasons why government AI systems cannot and should not run unattended:
- Legal accountability and due process: Only authorized officials can make binding decisions about benefits, enforcement, or security. AI may assist, but people must decide — and take responsibility.
- Protection of rights and safety: Federal policy requires that humans oversee any AI that could impact civil liberties or public safety. If those humans are furloughed, agencies cannot meet their legal obligations.
- Error handling and escalation: AI models make mistakes, particularly in edge cases. Humans are needed to review false positives, interpret ambiguous results, and handle exceptions.
- Governance, audit, and transparency: Agencies are required to document decisions, monitor performance, and continuously evaluate AI systems. Those governance functions rely on people, not algorithms.
Without humans in these roles, most federal AI systems must be shut down — not because they can’t operate, but because they shouldn’t.
Problems Solved by Humans in the Loop
Humans aren’t just a safety net — they solve real operational challenges that machines alone cannot:
- Bias mitigation and error correction: People review flagged cases to minimize harms from false positives or discriminatory patterns.
- Contextual judgment: Many government decisions require nuanced understanding of law, policy, or human behavior that AI cannot replicate.
- Exception processing and appeals: When citizens dispute an outcome, human adjudicators must review and remedy cases.
- Continuous oversight: Humans monitor outputs and, if necessary, pull the plug — something a machine will never do to itself.
These are not edge cases; they are core components of responsible AI governance.
Federal AI Systems That Depend on Humans in the Loop
Across agencies, some of the government’s most advanced AI programs are built around human oversight. Here are a few key examples:
- TSA Facial Recognition and Officer Review
The Transportation Security Administration uses facial comparison technology to help verify passenger identity. But if the algorithm fails to match, or if a traveler opts out, a TSA officer performs a manual review and makes the final call. Without those officers, the system cannot function.
- CBP “Simplified Arrival”
U.S. Customs and Border Protection deploys real-time facial matching at ports of entry. If the system is uncertain, travelers are immediately directed to speak with an officer — a safeguard that becomes unavailable when personnel are furloughed.
- USCIS Decision Support
U.S. Citizenship and Immigration Services uses natural language processing and analytics to assist adjudicators. But the tools never make final determinations; human officers are legally required to decide each case. Without them, applications cannot move forward.
- DoD Project Maven
The Department of Defense’s Project Maven uses computer vision to analyze surveillance video and identify objects of interest. Yet every detection is reviewed by a human analyst before any action is taken. The system’s value collapses without that human judgment.
- Department of Veterans Affairs Clinical AI
At the VA, AI tools help clinicians predict patient risk and interpret imaging data. These systems are explicitly designed around clinician oversight — models assist, but people diagnose and decide. Furlough the clinicians, and the AI is effectively useless.
Shutdown Reality: AI Stops Too
Because of these dependencies, most AI initiatives cannot operate during a shutdown. OMB policy empowers agencies to pause or terminate AI use if oversight cannot be guaranteed. Inventories and governance frameworks across DHS, VA, and other agencies make clear that HITL is built into their systems — and without humans, those systems are incomplete.
Building for Resilience: Policy Recommendations
If shutdowns are going to remain a fact of political life, agencies should plan for continuity of oversight just as they do for continuity of operations. That means:
- Classifying key oversight roles as “essential” so AI systems supporting critical services can legally continue running.
- Designing for graceful degradation, where systems can default to manual or paused modes instead of running unchecked (see the sketch after this list).
- Investing in oversight tooling to reduce the number of interventions required — but never to eliminate the need for human accountability.
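As a rough illustration of that graceful-degradation idea, here is a small hypothetical Python sketch. The mode names and staffing checks are invented for illustration; the point is that the operating mode is derived from who is actually on duty, and a fully unstaffed system queues work instead of deciding anything.

```python
from enum import Enum, auto

# Hypothetical sketch of graceful degradation: the operating mode is derived
# from who is actually on duty. Mode names and staffing fields are invented.

class Mode(Enum):
    AI_ASSISTED = auto()  # model recommends, humans review and decide
    MANUAL_ONLY = auto()  # model paused, humans handle critical cases directly
    PAUSED = auto()       # no oversight staff: queue work, decide nothing

def select_mode(reviewers_on_duty: int, adjudicators_on_duty: int) -> Mode:
    """Pick the safest mode the current staffing level can support."""
    if reviewers_on_duty > 0 and adjudicators_on_duty > 0:
        return Mode.AI_ASSISTED
    if adjudicators_on_duty > 0:
        return Mode.MANUAL_ONLY
    return Mode.PAUSED

def handle_case(case_id: str, mode: Mode) -> str:
    if mode is Mode.AI_ASSISTED:
        return f"{case_id}: model scores the case, a human makes the final call"
    if mode is Mode.MANUAL_ONLY:
        return f"{case_id}: routed straight to a human adjudicator"
    return f"{case_id}: queued; no automated or manual decision is made"

if __name__ == "__main__":
    shutdown = select_mode(reviewers_on_duty=0, adjudicators_on_duty=0)
    print(handle_case("A-1234", shutdown))
```

The design choice that matters is the default: when staffing is unknown or zero, the system falls back to the paused mode, so automation never silently continues without its humans.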
AI can automate many government functions. It can speed up processes, flag risks, and support decisions. But it cannot replace the humans who design, supervise, and take responsibility for those systems — especially in the high-stakes world of public policy and civil rights.
In other words, when the government shuts down, so does AI. And that’s not a sign of weakness. It’s a sign that our democracy still values human judgment where it matters most.
About Greg Godbout
Greg Godbout is the CEO of Flamelit, a data science and AI/ML consultancy. He previously served as Chief Growth Officer at Fearless and as Chief Technology Officer (CTO) and U.S. Digital Services Lead at the EPA. Greg was the first Executive Director and a co-founder of 18F, a 2013 Presidential Innovation Fellow, a Day One Accelerator Fellow, a GSA Administrator's Award recipient, and a Federal 100 and FedScoop 50 award recipient. He received a master's in Management of IT from the University of Virginia and a master's in Business Analytics and AI from NYU.