DARPA RFI: Techniques and Tools for Vulnerability Assessment of AI-enabled Systems

Notice ID: DARPA-SN-25-28

The Defense Advanced Research Projects Agency (DARPA) Information Innovation Office (I2O) seeks information regarding current and emerging techniques and tools for the operational assessment of potential vulnerabilities in DoD-relevant artificial intelligence (AI)-enabled systems. We seek techniques and tools that: 1) consider a spectrum of relevant adversarial access threat models (white box, grey box, black box, hidden box); 2) consider not just the AI model, but also vulnerabilities presented by the entire AI-enabled system development and deployment pipeline; and 3) consider the platform-specific challenges in operationally assessing vulnerabilities, including environmental conditions, multi-modal sensor ingest, and system purpose.

DARPA is interested in responses that address one or more of the following areas:

  1. AI red teaming framework and autonomous toolkit, considering:
    1. The major dynamics of operating environments where AI-enabled systems might be deployed by the DoD.
    2. Levels of knowledge of the AI system under test (ASUT) required for your techniques and tools to be effective.
    3. Modularity and integrability with external tools for realizing attacks in practical settings.
    4. Algorithms for autonomous vulnerability assessment and analysis.
  2. Cyber means of affecting AI-enabled battlefield systems, to include:
    1. Methods of extracting model weights and architecture from an ASUT.
    2. Methods of covert data contamination/poisoning and/or model weight manipulation.
    3. Methods exploiting potential vulnerabilities in common AI development pipelines/frameworks.
    4. Methods for reliably executing manipulation in the open data/model ecosystem.
    5. Methods for executing malicious middleware between a sensor and an AI-enabled system (either on device or in a cloud application).
    6. Exploitations of application programming interface (API)-based AI services to gain model information or to directly manipulate the model.
  3. Electronic warfare (EW) effects for manipulating AI-enabled battlefield systems, to include:
    1. Methods of using EW for high-precision sensor input modification over a variety of wavelengths, including electro-optical (EO) and other sensor modalities, at a given distance from the sensor.
    2. Methods of jamming/manipulating EW-based communications with autonomous systems.
  4. Physical manufacturing of adversarial effects, to include:
    1. Methods of automatic/rapid construction of “adversarial objects” from an AI specification, including 2D printing and 3D shapes.
    2. Materials research in 2D color printing material with reduced glare and high fidelity in print quality. These may include paper, cloth, or other printable material.
    3. Electronic displays that can adapt to ambient lighting conditions to ensure a constant display as seen from an EO sensor …
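To make the "covert data contamination/poisoning" topic in area 2 concrete, the sketch below shows a minimal label-flipping poisoning attack against a toy one-dimensional threshold classifier. Everything here is a hypothetical stand-in of our own (synthetic data, the threshold model, the 10% poisoning budget); the RFI does not prescribe any particular dataset, model, or attack.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: class 0 clustered near -1, class 1 near +1.
# A toy stand-in for any training corpus an adversary can partly contaminate.
X = np.concatenate([rng.normal(-1.0, 0.3, 200), rng.normal(1.0, 0.3, 200)])
y = np.concatenate([np.zeros(200), np.ones(200)])

def train_threshold(X, y):
    """Fit a 1-D classifier (predict 1 if x > t) by minimizing training error."""
    candidates = np.sort(X)
    errors = [np.mean((X > t).astype(float) != y) for t in candidates]
    return candidates[int(np.argmin(errors))]

def accuracy(t, X, y):
    return float(np.mean((X > t).astype(float) == y))

clean_t = train_threshold(X, y)

# Covert poisoning: flip labels of the 40 class-0 points (10% of the data)
# closest to the decision boundary. Retraining then drags the learned
# threshold toward class 0, so those points are misclassified at test time.
y_poisoned = y.copy()
victims = np.argsort(X[:200])[-40:]   # largest class-0 values, nearest 0
y_poisoned[victims] = 1.0

poisoned_t = train_threshold(X, y_poisoned)

print(f"clean-trained accuracy on clean labels:    {accuracy(clean_t, X, y):.3f}")
print(f"poison-trained accuracy on clean labels:   {accuracy(poisoned_t, X, y):.3f}")
```

The point of the toy is the asymmetry an assessment tool must detect: the poisoned model still fits its (contaminated) training set almost perfectly, so the degradation is invisible without a trusted, clean holdout set.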

Read more here.

