By Kyle Tuberson, CTO, ICF

In the last year, all eyes have been on AI. Across industries, businesses have explored AI for everything from customer support to AI-augmented coding. The federal government is no exception: 2023 proved agency leaders are eager to embrace its benefits as soon as possible. In fact, 88% of mission leaders report their agency is mostly or completely prepared to use AI as part of its overall data strategy, and nearly all agree that investing in safe and effective AI is essential to fulfilling the needs of their organization.

Despite this enthusiasm, many agencies may not be as ready as they think. While more than 3 in 4 IT professionals report their agency’s data infrastructure is mostly or fully mature, an overwhelming 79% say their agency has not achieved major digital modernization goals. Given that gap, mission and IT leaders looking to use AI for data management must take a thoughtful approach or risk sinking time and resources into inadequately supported projects.

From rushing implementation to building systems without the proper governance in place, here are the biggest pain points federal leaders face when building their AI strategies – and how to avoid them.

Tackling Too Much Too Soon

While AI has transformative potential, mission leaders must be strategic about how and when they begin implementation. After all, 3 in 4 mission leaders are concerned that decision makers in their organizations are rushing to incorporate generative AI without understanding the data needed to ensure its success.

Decision makers, especially those less familiar with technical nuances, may not always be fully aware of what Gen AI implementation entails and the proper guardrails and ethical considerations needed for its success. For the strongest impact, mission leaders and decision makers must align around the strategic goals of the agency. The Department of Health and Human Services, for example, embraces AI initiatives that map back to its key objectives: improving health outcomes; advancing scientific research; and breaking down social inequities. By aligning AI implementation with larger goals and defining clear success metrics, agencies can better evaluate progress and gauge where a human touch is needed.

While the potential for Gen AI is promising, agencies should take a slow and steady approach, embracing small-scale testing and evaluation before expanding efforts. Many mission leaders are already experimenting along these lines, with 37% reporting their agencies are conducting small, controlled tests of Gen AI in preparation for more widespread use. Only with unified commitment from the entire organization around priority applications and goal-setting can early AI implementation succeed, paving the way for future use cases.

Building in Silos

When implementing AI to drive digital modernization, building robust interdisciplinary teams to inform technology, security, business, and even legal considerations is essential. Vital to this supergroup are domain experts, who bring specialized knowledge or skills in a particular field and can define specific steps for the mission’s success based on their expertise. Further, these individuals understand the goals, challenges, and needs of clients, allowing them to help teams stay focused on mission outcomes.

IT professionals recognize the value domain experts bring, with 88% stating that digital modernization efforts that do not include these team members are doomed to fail. Despite this recognition, however, 32% of IT professionals admit their agency’s digital modernization efforts do not include a domain expert. Their exclusion can greatly threaten the success of implementation efforts. For instance, 62% of data and analytics professionals cite technology solutions not meeting the end user’s needs as a possible consequence of embarking on a modernization project without a domain expert. At the same time, more than half of data and analytics professionals identify budget overruns and delays in project implementation as consequences of not including a domain expert.

For the best results, agencies should encourage the collaboration of decision makers, technologists, and domain experts within modernization efforts. Working in concert, these individuals bring the appropriate expertise and perspective to the table to drive advancement. When paired with the proper AI guardrails, this unified effort will improve AI implementation, allowing agencies to embrace new technologies more seamlessly and with better outcomes.

Building Without Governance

Before tackling implementation, IT teams must guarantee proper AI governance is in place. This refers to a collection of frameworks, policies, and best practices that ensure AI is being developed, deployed, and used ethically and responsibly.

Such infrastructure is not to be taken lightly. In fact, the federal government issued an executive order promoting the use of trustworthy AI, including guidelines for the responsible handling of systems and data sharing within agencies, underscoring the importance of privacy and ethical AI development.

But while 54% of data and analytics professionals report that their agencies have, or are developing, guidelines on how to use generative AI, others are struggling: 47% cite creating a governance framework and 41% cite regulatory compliance as their biggest challenges related to AI adoption.

To comply with federal standards and drive responsible AI use, mission leaders should first identify how data will be protected and then determine the protocol for system failures or breaches. Guiding principles include repeatedly testing systems for cybersecurity; maintaining human touch points to review models; and embracing transparency around the status of implementation efforts and the impact on both mission delivery and the larger agency.

At the same time, agencies should take measures to prevent the proliferation of AI bias. Mission leaders must keep an eye on AI-generated disinformation, as this could create a slew of new challenges for agencies. By ensuring accountability and increasing education around identifying bias, leaders can stay ahead of the game.

Luckily, there are tools available to help bring Gen AI to market safely. IBM’s AI Fairness 360 is an open-source toolkit that helps examine, report, and mitigate discrimination and bias in machine learning models. Meanwhile, OpenAI’s Safety Gym provides a suite of environments for benchmarking reinforcement learning agents that must respect safety constraints while they learn.
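To make the idea of an automated bias check concrete, here is a minimal sketch of the disparate-impact ratio, one of the group-fairness metrics that toolkits such as AI Fairness 360 report. The decision records and group labels below are hypothetical, and the calculation is shown in plain Python rather than through any particular toolkit’s API:

```python
# Disparate impact: the ratio of favorable-outcome rates between an
# unprivileged group and a privileged group. A ratio near 1.0 suggests
# parity; values well below 1.0 are a common red flag for review.
# The records below are hypothetical (group "A" privileged, "B" unprivileged).
records = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def favorable_rate(group: str) -> float:
    """Fraction of records in `group` that received the favorable outcome."""
    outcomes = [favorable for g, favorable in records if g == group]
    return sum(outcomes) / len(outcomes)

disparate_impact = favorable_rate("B") / favorable_rate("A")
print(round(disparate_impact, 2))  # prints 0.33 for the sample records above
```

In practice a governance process would compute metrics like this on real decision data at regular review checkpoints, with humans in the loop to interpret the result before any model is promoted or retired.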

Ultimately, clear governance and AI transformation go hand-in-hand. By identifying clear strategies and guidelines for implementation with attention to cybersecurity, transparency, and accountability, leaders can ensure responsible use, paving the way for future success and expanded efforts.

When given the proper alignment, collaboration, and governance, AI implementation can strengthen mission delivery and futureproof agencies. While excitement around AI and its impact is at a high, agencies should experiment intentionally and incrementally, carefully measuring results and building a culture of trust. When it comes to AI and its transformative impact, the old adage holds true: slow and steady wins the race.

