Notice ID: 49100424K0024
NSF is seeking an external evaluation team to work closely with NSF program directors on assessing the impact of the NSF Regional Innovation Engines program (NSF Engines) and the extent to which the program is achieving its intended goals. This comprehensive evaluation of the program will inform programmatic decisions, increase our understanding of how regional innovation ecosystems are created, and determine the overall success of the program.
Scope: The Contractor, in collaboration with NSF Program Directors, will be responsible for conducting a mixed-methods evaluation of the portfolio of NSF Engine projects. It will be critical for the Contractor's team to identify both macro-level indicators of regional economic development and social impact and real-time metrics tied to the activities of each Engine Awardee. The Contractor shall also consider how the evaluation can capture the qualitative impact of an Engine's activities on its region while documenting the state and evolution of the regional governance and networks within each Engine. To determine the long-term impacts of the funded Engines, the Contractor shall conduct a comparative analysis assessing the extent to which NSF Engines were able to catalyze and/or accelerate regional innovation ecosystems across the Nation.
The NSF Engines program is making long-term investments at an unprecedented scale and time frame (up to 10 years and $160 million per Awardee) to establish inclusive regional innovation ecosystems capable of evolving and sustaining themselves for decades to come. Presently, there are nine Engines spanning 16 states. The regions of impact for each NSF Engine may evolve, and it is anticipated that the number of NSF Engines may increase. NSF can only anticipate specific evaluation needs (e.g., metrics, data analytics, reporting requirements) over relatively short time horizons (approximately two years). The program's evaluation needs will evolve as NSF Engine Awardees progress and the program matures.
This evaluation will assess the causal pathways that link the drivers of the NSF Engines program to the anticipated programmatic goals, serving as a critical tool for TIP leadership and NSF Engines Program Directors to manage the program. Outcomes of the study will also be valuable to each Engine Awardee as they conduct award-level evaluations. Evaluation findings will inform teams of when and how they need to adjust plans or course-correct over the 10-year funding period.
The Contractor will be expected to:
- Work closely with NSF to integrate the evaluation and data collection efforts already underway into the evaluation conducted by the Contractor and NSF.
- Coordinate with the evaluators hired by NSF Engine Awardees to ensure evaluation data collection across all Engine teams is conducted consistently and that the data can be aggregated and assessed across the entire portfolio of awards.
- Utilize advanced data analytics methodologies, including structural equation modeling, hierarchical modeling, and Bayesian analyses, with proficiency in conducting analyses in platforms such as R, RMarkdown, Python, SQL, Tableau, and NVivo (an illustrative modeling sketch follows this list).
- Apply econometric and psychometric methods, perform data imputation, and calculate nonresponse bias …
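As one hypothetical illustration of the hierarchical modeling named above, the sketch below fits a random-intercept, random-slope mixed-effects model to a synthetic panel of Engine-level metrics in Python. The metric (`jobs_supported`), column names, and data are placeholders invented for this example; they are not NSF-defined indicators or actual Engine data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic panel: 9 hypothetical Engine awardees observed over 5 reporting years.
engines = [f"engine_{i}" for i in range(1, 10)]
records = []
for eng in engines:
    baseline = rng.normal(100, 10)   # engine-specific starting level of the metric
    growth = rng.normal(5, 2)        # engine-specific annual growth
    for year in range(5):
        records.append({
            "engine": eng,
            "year": year,
            "jobs_supported": baseline + growth * year + rng.normal(0, 3),
        })
panel = pd.DataFrame(records)

# Random-intercept, random-slope model: a portfolio-wide trend in the metric
# plus engine-level deviations from it (one form of hierarchical modeling).
model = smf.mixedlm("jobs_supported ~ year", panel,
                    groups=panel["engine"], re_formula="~year")
result = model.fit()
print(result.summary())
```

The fitted random effects indicate how far each Engine's trajectory departs from the portfolio-wide trend; an actual evaluation would substitute real indicators and a richer model specification agreed with NSF.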
NSF anticipates an initial one-year base period with four one-year option periods.