Blog Archives

“Thinking like a Science Executive” report for CI enterprise leaders

Leaders of cyberinfrastructure enterprises (“CI enterprises”) are under increasing pressure to think strategically and to demonstrate their impact to a diverse set of stakeholders. Most CI enterprise leaders are scientists or engineers with strong project management skills. Few have been exposed to the practices, frameworks, and theories from organization science that can potentially improve the effectiveness of their enterprises. In short, as CI enterprise leaders evolve into executives, they do so with minimal formal guidance.

To support strategic thinking about cyberinfrastructure and CI enterprises, the NSF Division of Advanced Cyberinfrastructure (ACI) funded a project to develop a “science executive” curriculum for leaders of CI enterprises. The curriculum helps these leaders think strategically, align their activities with key stakeholders, and understand and communicate their value. This document summarizes the curriculum, which was developed in 2013 from two planning workshops, delivered that same year to 22 CI enterprise leaders, and subsequently evaluated and amended.

The report can be downloaded here:


Key areas of focus include:
– the changing nature of science and the role of CI enterprises
– CI enterprise stakeholder analysis and stakeholder alignment
– value propositions of CI enterprises for stakeholders
– metrics: understanding and communicating the value of CI enterprises



CASC Pilot Benchmark Results Report

Berente & Rubleske just posted the first set of benchmarks for CASC on SSRN.  The report is entitled: “Academic Computing Management Benchmarks: Beta Version – A Report on a Pilot Survey of the Coalition for Academic Scientific Computing (CASC) Members” and can be downloaded here:


The idea was to illustrate the potential of management benchmarking to the CASC community while laying the groundwork for an annual management benchmarking survey. CASC comprises a wide variety of academic computing organizations, from the largest supercomputing centers to local university research computing groups and everything in between.

The next step will be to refine the survey items and roll out “Version 1” of the benchmarks in Fall 2013.