Notice ID: 36C10B25Q0021
Department/Ind. Agency: VETERANS AFFAIRS, DEPARTMENT OF
Office: TECHNOLOGY ACQUISITION CENTER NJ (36C10B)
VA Office of the Chief Technology Officer | VA Office of the Chief Artificial Intelligence Officer | Artificial Intelligence Program and Governance Support | VA-25-00009400
The Office of the CTO (OCTO) serves the Department of Veterans Affairs (VA) Office of Information Technology’s (OIT) mission to deliver world-class IT products and services to VA and Veterans. OCTO works closely with core program portfolios across VA to examine the short- and long-term needs of the Department and to identify and fill gaps in VA’s technology portfolio. OCTO takes an approach to problem-solving, teamwork, and leadership that is built on agile development and results in continuous improvement.
OCTO now serves a dual purpose as the VA Office of the Chief Artificial Intelligence Officer (OCAIO). The mission of OCAIO is to deliver world-class AI-enabled IT products and services to VA and Veterans in a fashion that is purposeful; effective & safe; secure & private; fair & equitable; transparent & explainable; and accountable & monitored. These six pillars comprise the VA Framework for Trustworthy AI.
What are we trying to solve?
VA is committed to leveraging Artificial Intelligence (AI) technologies to enhance the services and care provided to our Veterans. However, the introduction of AI brings complex challenges, particularly concerning safety and the protection of individual rights. Our primary goal is to implement AI across VA in a manner that meticulously minimizes risks related to safety and privacy while upholding the highest ethical standards. This requires a proactive approach to preventing potential issues, as well as a robust, responsive governance structure that includes policies, processes, and guidance to address and resolve problems that may arise. Ethical, responsible, and trustworthy deployment of AI technologies is paramount to maintaining the trust and confidence of our Veterans and the employees who serve them.
Why is it needed?
The rapid advancement and increasing popularity of Large Language Models (LLMs) and other AI technologies have propelled a shift towards AI integration in various sectors. In healthcare, especially within VA, this shift presents unique opportunities to improve efficiency, decision-making, and patient care. However, it also introduces significant ethical considerations and potential risks that must be managed carefully. AI is about to be embedded into almost every new product and service, and it is critically important for VA to lead by example in establishing and adhering to rigorous standards. By doing so, we ensure that the AI technologies we adopt, develop, or use not only enhance our ability to serve Veterans but do so in a way that aligns with our core values. This Task Order will support the creation, implementation, and management of an AI management process within the Office of the Chief Artificial Intelligence Officer.
AI-enabled tools in production at VA must be consistent with the VA Trustworthy AI Framework.
In accordance with the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, VA has a mandate to create and maintain a comprehensive inventory of AI use cases within the Agency, ensuring that each aligns with the principles of M-24-10, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence, and all existing and subsequent guidance issued by the White House and the Office of Management and Budget (OMB). This directive is not just a procedural task but a substantive effort that aims to embed transparency into AI within the Federal Government.
This work is inherently complex and multifaceted, demanding a blend of expertise across several disciplines, including public policy, Responsible AI, software engineering, privacy, and security. Each AI product or application within the Agency’s purview must be meticulously assessed for compliance with these broader principles. This includes creating and running the AI intake and use case review process, evaluating the ethical implications of AI, ensuring the privacy and security of data, and assessing the overall impact of AI use cases on service delivery and stakeholder trust. Creating standards and assessing every AI product against them is not only labor-intensive but also continuous, as those standards must be amended and reevaluated whenever policy changes or a new issue is identified. Robust intake and review processes must be developed, implemented, and continuously run to support this.
The goal is to ensure that AI technologies used by the Agency not only enhance operational efficiency and services but do so in a manner that is ethically sound, secure, and respectful of the rights and dignity of all individuals.
Desired user outcomes
- Improved Services for Veterans
  - Enhanced accuracy and personalization of services, from healthcare to benefits.
- Increased Trust and Transparency
  - Clear communication regarding the use and impact of AI.
  - Assurance of privacy and data protection.
  - Mechanisms for feedback and engagement on AI-driven services.
- Enhanced Security and Privacy
  - Strict controls to protect personal information in conversational agents.
  - Reduced risk of data breaches and unauthorized access.
  - Transparent policies on data usage.