The EAIS establishes a centralized vision for artificial intelligence (AI) innovation, infrastructure, policy, responsible and ethical use, governance, and culture by inaugurating Department-wide guidance for the design, development, acquisition, and appropriate application of AI.
The Department of State stands at a critical juncture: an emerging ecosystem of artificial intelligence (AI) capabilities presents an enormous opportunity. That opportunity allows the Department to leverage AI to achieve breakthroughs of all kinds – in public diplomacy, language translation, management operations, information proliferation and dissemination, task automation, code generation, and others…
To guide the Department towards its Vision, four Goals serve as foundational targets that will enhance the Department’s AI capabilities. Each Goal rests on specific objectives that encompass priorities identified by the Department’s AI leaders. These relevant and achievable efforts will enable measurable advancement over the next two years…
Objective 1.1: Enable AI Technology Integration To build and scale a variety of AI technologies, the Department will integrate impactful AI technologies into sustainable, AI-enabling infrastructure, with security as a top priority. It will seek to provide broad access to AI technologies, commensurate with the Department’s range of user abilities, with a mix of open-source, commercially available, and custom-built AI systems. Robust access controls and authentication mechanisms aligned to Zero Trust principles will mitigate risk of unauthorized access to AI technologies and Department data, providing a high level of security.
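To make the spirit of this objective more concrete, the following is a minimal, hypothetical sketch of a Zero Trust-style authorization check applied to every AI request. The class, function, and classification labels (e.g., ModelRequest, is_authorized) are illustrative assumptions, not Department systems or policy values.

```python
# Hypothetical sketch of a Zero Trust-style check before an AI request is served.
# All names, labels, and rules are illustrative, not Department systems or policy.
from dataclasses import dataclass

@dataclass
class ModelRequest:
    user_id: str
    user_clearance: str        # must be a value from CLASSIFICATION_ORDER below
    mfa_verified: bool         # multi-factor authentication completed for this session
    data_classification: str   # sensitivity of the data sent to the model
    model_name: str

# Illustrative ordering of data sensitivity, lowest to highest.
CLASSIFICATION_ORDER = ["unclassified", "sensitive_but_unclassified", "secret"]

def is_authorized(req: ModelRequest, allowed_models: set[str]) -> bool:
    """Authorize every request explicitly; never assume trust from network location."""
    if not req.mfa_verified:                  # authentication is re-checked per request
        return False
    if req.model_name not in allowed_models:  # only approved AI technologies may be used
        return False
    # The user's clearance must meet or exceed the sensitivity of the data involved.
    return (CLASSIFICATION_ORDER.index(req.user_clearance)
            >= CLASSIFICATION_ORDER.index(req.data_classification))
```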
Objective 1.2: Fully Utilize Infrastructure for AI Adoption at Department Scale The Department will rely on a robust technology infrastructure that further enables computing, development, testing, deployment, and continuous monitoring of AI technologies, while also protecting Department data and security. Leveraging resources within the Bureau of Information Resource Management (IRM) and integrating with the Information Technology Executive Council (ITEC), the Department will design and implement supplementary technology architecture that allows for integration of AI components into our existing infrastructure and data pipelines. To meet the computational demands of AI development, our infrastructure will leverage Department cloud-based solutions and scalable infrastructure services. The Department will rely on expertise in data encryption mechanisms, robust network security, multi-factor authentication, and regular data backups to safeguard its data.
Objective 1.3: Modernize Acquisition of AI Tools The Department’s IT procurement authorities, in partnership with the Chief Data and AI Officer (CDAO), Responsible AI Officer (RAIO), Chief Information Officer (CIO), and others, will work to streamline the approval and procurement of prioritized AI technologies to meet the demand signaled by the Department’s strongest potential use cases, consistent with applicable law and regulation. This includes examining the IT procurement pipeline to find efficiencies while maintaining the safeguards provided by the Federal Risk and Authorization Management Program (FedRAMP), the Information Technology Change Control Board (ITCCB), the Authorization to Operate (ATO) process, and other approval mechanisms, following federal guidelines. To prioritize investment opportunities, the Department will identify use cases where AI can provide the highest impact, drawing on coordination by key offices such as the CDAO and on issued FedRAMP frameworks. Prior to acquisition, AI technologies will be evaluated against security protocols and risk assessment processes. The Department’s procurement and open-source approval processes will be further developed to enable flexibility and streamlined deployment of AI. Evaluating vendor claims and developing new language for Department contracts will ensure our partners are held to the same standards of security, risk management, and transparency, enshrining these requirements going forward.
Objective 2.1: Provide AI Training and Support Services The Department will develop specialized AI learning opportunities to meet the diverse needs of its workforce, enhance AI literacy, encourage and educate on responsible AI use, and ensure that users can adequately mitigate some of the risks associated with AI tools. As AI is integrated into Department infrastructure and existing technology platforms, it is essential that our workforce understands what these technologies are and how to use and apply them safely. The Department will increase AI fluency for both technical and non-technical users through the development of multi-tiered, incentivized trainings and modifications of existing trainings, led by the Foreign Service Institute (FSI). To further support users, technology-specific materials will be developed to assist in the recognition, exploration, and interpretation of AI, and support sessions will be facilitated to assist all AI users. The Department will convene communities of practice to share AI resources, use cases, and best practices, and will develop specific impact metrics to accompany AI technologies and establish parameters for the expected benefits of use.
Objective 2.2: Develop New Opportunities for AI Talent The Department will recruit and hire AI experts currently underrepresented in its workforce, especially those with an understanding of AI techniques, technologies, principles, and ethics, who can play pivotal roles in our adoption of responsible AI. The Department will build on its success hiring a cohort of data science practitioners under the guidance of the Department’s CDAO, and open fresh opportunities for technical practitioners through the development of new AI-focused roles, such as data scientists, operations researchers, and IT specialists, and the deployment of programs to support, attract, and retain AI talent.
Objective 2.3: Promote Responsible AI Use In this early stage of AI adoption, the Department must wrestle with extensive unknowns to navigate the path of opportunity while ensuring responsible AI practice, including by respecting and promoting security, privacy, equity, and other core principles. Much like the EDS aims to cultivate a data culture, the Department will imbue its values around responsible AI use across the organization, including upholding data and scientific integrity. The Department will instill best practices that routinize responsible AI use by teaching our workforce when and how to use AI tools effectively, safely, and lawfully. Through the development of interdisciplinary training courses, we will educate our workforce on the basics of AI risk and risk mitigation techniques to empower effective AI use, while also recognizing the level of acceptable risk that accompanies each AI application. We will comply with applicable law and AI governance and policy guidelines and minimize the risk of AI use.
Objective 3.1: Establish and Maintain AI Governance and Policy The Department will oversee and manage risk, adhere to the principles, guidelines, tools, and practices established by key directives (e.g., Executive Orders), and develop additional policy to ensure alignment of applied AI with applicable law and policy, and with standards for responsible and ethical use, through the Enterprise Data & AI Council (EDAC), the AI Steering Committee (AISC), and the Data Governance Network. The Department’s CDAO will support and coordinate the establishment and maintenance of AI policies, such as 20 FAM 201.1, that provide clear guidelines for responsible AI use, steward AI models, and prioritize the evaluation and management of algorithmic risk (e.g., risks arising from using algorithms) in AI applications during their entire lifecycle, including risks related to records retention, privacy, cybersecurity, and safety. That commitment implicates many data science disciplines, including data collection, extraction, transformation, and loading; model selection, development, deployment, and monitoring in production; statistical methods; and others. AI compliance plans and protocols for system maintenance, recalibration, and use stoppage will prevent unintended bias and unintended functionality. Minimum risk-management practices for rights- and safety-impacting AI will be established for development and procurement. The RAIO, per CDAO direction, will define rights- or safety-impacting AI use cases. Regular safety and trustworthiness assessments and internal audits will be required to manage risks, both from AI systems in isolation and from their interaction with human users, and to address threats, mitigate bias, and ensure data protection. The policies and guidelines developed will consider the security and privacy of Department data. The Department will ensure clear and transparent procedures for legal and policy review of new AI use cases.
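One lightweight way to operationalize lifecycle risk management of this kind is a structured record per AI use case. The sketch below is a hypothetical illustration under that assumption; the field names (e.g., rights_impacting, next_review_due) are not the Department’s actual schema.

```python
# Hypothetical risk-register record for one AI use case across its lifecycle.
# Field names and values are illustrative only, not an official Department schema.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class AIUseCaseRiskRecord:
    use_case_id: str
    description: str
    rights_impacting: bool                 # determination made per RAIO guidance
    safety_impacting: bool
    lifecycle_stage: str                   # e.g., "development", "deployment", "monitoring"
    identified_risks: list = field(default_factory=list)
    mitigations: list = field(default_factory=list)
    last_audit: Optional[date] = None      # most recent safety/trustworthiness assessment
    next_review_due: Optional[date] = None

    def requires_minimum_practices(self) -> bool:
        """Rights- or safety-impacting use cases trigger minimum risk-management practices."""
        return self.rights_impacting or self.safety_impacting

    def audit_overdue(self, today: date) -> bool:
        """Flag use cases whose scheduled review or internal audit has lapsed."""
        return self.next_review_due is not None and today > self.next_review_due
```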
Objective 3.2: Broker Appropriate Access to AI-ready Data The Department will streamline and ensure appropriate access to transparently sourced internal, interagency, and third-party data for AI use. The Data.State platform will provide enterprise-wide access to data when possible and appropriate, consistent with applicable law and protections. Safeguards, protocols, and data management standards apply where necessary, in addition to data sharing agreements that reflect the Department’s policies for data use in its technology platforms and with vendors.
Objective 3.3: Facilitate Data Quality Assurance High-quality datasets are those sufficiently free of incomplete, inconsistent, or incorrect data, while also being well documented, organized, and secure. The Department will maintain reliable, high-quality data that is fit for AI use, development, operation, and evaluation through robust data cleaning and quality assurance capabilities, assessments, and monitoring processes implemented at the AI use case level. The Department will develop and implement data quality assessment tools and monitoring processes whose results are transparent to users. Assessments will also be performed on data outputs from other AI platforms to minimize risk.
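To illustrate what automated checks for incomplete, inconsistent, or incorrect data might look like in practice, the following is a minimal sketch using pandas; the thresholds, column names, and toy data are assumptions for illustration only, not Department standards or datasets.

```python
# Hypothetical data quality checks for an AI-ready dataset; thresholds are illustrative.
import pandas as pd

def quality_report(df: pd.DataFrame, max_missing: float = 0.05) -> dict:
    """Summarize completeness and consistency signals for transparency to users."""
    missing_share = df.isna().mean()             # completeness: share of missing values per column
    duplicate_rows = int(df.duplicated().sum())  # consistency: exact duplicate records
    return {
        "columns_over_missing_threshold": missing_share[missing_share > max_missing].to_dict(),
        "duplicate_rows": duplicate_rows,
        "row_count": len(df),
    }

# Example usage with an invented toy dataset (for illustration only).
df = pd.DataFrame({"post": ["Post A", "Post B", None], "records_processed": [120, 95, 130]})
print(quality_report(df))
```

A report like this could be produced each time a dataset is registered for an AI use case and surfaced alongside the data, which is one way the "transparent to users" requirement could be met.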
Objective 4.1: Identify Opportunity The Department will use AI to advance U.S. diplomacy by honing its ability to identify opportunities in AI through an entrepreneurial approach. The Department will facilitate the identification of potential AI use cases in both centralized and decentralized fora and help the workforce identify appropriate applications of AI technologies. We will leverage enterprise Data Campaigns, the Department’s Data Governance Network, communities of practice, bureau AI forums, conference participation, innovation channels, the AI training sessions referenced in Objective 2.1, Bureau Chief Data Officers, and other avenues to surface opportunities in AI. We will also rely on our Public Diplomacy and Public Affairs professionals; the Foreign Service Institute (FSI); the CAIO Council; alumni networks; trade associations; private tech leaders; and newly established strategic partnerships with leading AI providers. Finally, we will pursue a new Department AI funding strategy to propel prioritized AI use cases and provision sufficient resources.
Objective 4.2: Facilitate Responsible Experimentation Responsible, entrepreneurial experimentation will underwrite long-term and cost-effective success in AI adoption at the Department. We will leverage shared resources, including the expertise of our technologists, and pursue new funding to establish an innovation sandbox environment where practitioners from around the Department can bring their ideas to be vetted. In these sandboxes, the Department will run low-stakes experiments to test new AI tools with safely controlled data and build empirical cases for deployment. To accelerate AI technology adoption, centralized access to shared AI use cases, models, datasets, and applications will be provided to combine expertise, enable effective evaluation of progress, avoid duplication, and identify capability gaps. As delegated by the CDAO, the RAIO will oversee the maintenance of the existing AI Use Case Inventory, which will be enhanced with plain-language documentation to inform users of the existence, purpose, and level of risk associated with AI technologies in use at the Department, and which will provide developers with access to example models.
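As a purely hypothetical illustration of plain-language inventory documentation, an entry might capture a use case’s existence, purpose, and level of risk in a simple structured form. The example below, including the use case itself and every field name, is invented for illustration and does not describe the Department’s actual AI Use Case Inventory.

```python
# Hypothetical plain-language inventory entry; keys, values, and the use case itself
# are invented for illustration, not the format or content of any Department inventory.
use_case_entry = {
    "name": "Document summarization assistant",       # what the technology is
    "purpose": "Drafts short summaries of unclassified documents for internal review.",
    "status": "sandbox experiment",                    # e.g., sandbox, pilot, production
    "risk_level": "low",                               # plain-language statement of risk
    "data_used": "Unclassified, internally controlled documents only.",
    "example_models": ["baseline summarizer v0.1"],    # shared so developers can reuse prior work
    "point_of_contact": "sponsoring.bureau@example.gov",
}
```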
Objective 4.3: Scale Successes As Department personnel experiment with and identify AI use cases, certain use cases will prove widely valuable and merit reproduction at larger scale. The Department will emphasize cooperation with interagency CDAOs, RAIOs, and CIOs, interagency bodies working in applied AI, and networks of responsible AI practitioners in academia, industry, and foreign affairs to propagate best practices and scale successes. Our partnerships will constitute an active frontier in AI innovation. Clear evaluation guidelines will be established requiring testing of AI systems prior to scaling, to ensure accurate, safe, and reliable function and that benefits outweigh risks before AI capabilities are enabled in a production environment with access to Department data. AI system outputs will follow federal guidelines for transparency.
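A pre-scaling evaluation gate of the kind this objective describes could be expressed as an explicit set of criteria checked before production access is granted. The sketch below is a hypothetical example; the metric names and thresholds are assumptions, not established Department requirements.

```python
# Hypothetical pre-scaling evaluation gate; metric names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class EvaluationResult:
    accuracy: float                 # task accuracy on a held-out test set
    safety_incidents: int           # harmful or policy-violating outputs observed in testing
    reliability: float              # share of requests completed without error
    benefits_outweigh_risks: bool   # documented judgment from the reviewing body

def ready_to_scale(result: EvaluationResult,
                   min_accuracy: float = 0.90,
                   min_reliability: float = 0.99) -> bool:
    """Allow production access to Department data only after all testing criteria are met."""
    return (result.accuracy >= min_accuracy
            and result.safety_incidents == 0
            and result.reliability >= min_reliability
            and result.benefits_outweigh_risks)
```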