Expanding Performance Measurement for the 21st Century


By Matthew D. Kenyon

Without tradition, art is a flock of sheep without a shepherd. Without innovation, it is a corpse.

—Winston Churchill

Churchill’s keen insights into the world of art elucidate a more universal theme: without a nod toward tradition, we lose our direction; without fostering what’s new, we sacrifice our usefulness. In whatever we do, there should be a constant drive to balance the usefulness and insights of the past with the excitement and promise of the future. Innovation without tradition may find us lost at sea, but tradition without innovation will abandon us to merely treading water.

Performance measurement is data gathering and analysis that falls somewhere between two much more high-profile and well-known types of research: evaluation and anecdote. Academics spend years working with mountains of data to evaluate effectiveness and establish causation. They assess programs through rigorous trials to tell us what works: evidence-based practices. Journalists talk to seemingly everyone, searching for the one case study that effectively and simply illustrates the point of their story. They use that story as a vehicle to spread a greater message to a wide audience. Performance measurement isn’t quite evaluation research, but it also isn’t quite journalism.

Performance measurement, down in the valley.

 Performance measurement evaluates data in the short term. It tells us who is doing what and when. It is longitudinal, showing change over time. It is descriptive, showing the facts. But it falls short in two ways: It cannot show causation like academic research, and it cannot move us like journalism. It is for precisely these reasons that we do not see performance measurement on the evening news or on our social media feeds. There’s no shocking headline (Chocolate causes cancer!) or moving story (Mother survives horrific crash with drunk driver!). It’s just data.

To make performance measurement more tangible and accessible, it needs to adapt. It will never be able to show causation like an evaluation, and it may not pack the punch of a headline above the fold. But that doesn’t mean performance measurement can’t look at outcomes and use data to tell an exciting story. It can build on the ideas and successes of research and journalism, bringing together the best of both to present current, reliable, and interesting data in a useful and compelling way.

Performance measurement, building on evaluation and anecdote.

Take, for example, the Justice Assistance Grant (JAG) program, administered by the Bureau of Justice Assistance (BJA), the primary grant-making office in the Department of Justice’s Office of Justice Programs. Recently, CSR, Incorporated, in conjunction with BJA, completely redesigned the JAG performance measures with innovation in mind. The redesign was based on the principles that accountability measures should be informed by the best available academic research and should reflect the high-quality questions used in surveys from rigorous outcome evaluations. To make the redesign work, the development team spent hundreds of hours reviewing research and collaborating with subject matter experts to ensure that the measures reflected the latest evidence-based practices.

Before the measure revision, JAG law enforcement grantees were asked how many of their JAG-funded programs were considered “effective” by crimesolutions.gov, a Federal repository of evidence-based practices. Grantee responses ranged from zero to several hundred. But crimesolutions.gov only lists 14 “effective” law enforcement programs or practices. Clearly, this measure did not accurately assess grantees’ use of evidence-based programs.

As part of the JAG revisions, this measure assessing program effectiveness was removed and replaced with three separate questions that asked about the specific program model, target population, and target locations. Taken together, these three questions can show if a grantee is using a model in a way consistent with the literature. Much as psychologists do when they try to measure a complex phenomenon such as personality, we avoided asking the question directly in favor of asking about the components. We then used analysis to determine if the program was, in fact, evidence based.
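The component-based approach described above can be illustrated with a small sketch. The program names, population labels, and literature “profiles” below are invented for illustration; they are not the actual JAG measures or analysis logic.

```python
# Hypothetical sketch of a component-based evidence check.
# All profiles and labels here are invented examples, not real JAG data.

# For each program model, the populations and settings the research
# literature supports (hypothetical entries).
LITERATURE_PROFILES = {
    "hot-spots policing": {
        "populations": {"general public"},
        "settings": {"high-crime micro-locations"},
    },
    "drug court": {
        "populations": {"adults with substance use disorders"},
        "settings": {"county court"},
    },
}

def consistent_with_literature(model: str, population: str, setting: str) -> bool:
    """Return True if the grantee's three reported components (model,
    target population, target setting) together match an evidence-based
    profile -- i.e., the model is being used as the literature describes."""
    profile = LITERATURE_PROFILES.get(model)
    if profile is None:
        return False  # model is not in the evidence base at all
    return (population in profile["populations"]
            and setting in profile["settings"])
```

Rather than asking grantees to self-assess effectiveness, the analyst combines the three component answers after the fact, which is the judgment the old single question asked grantees to make on their own.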

Although we still can’t say that JAG funding caused a particular result, we can say that this funding is being used for programs that, when implemented with fidelity, are likely to produce positive outcomes. Here, performance measurement took a bite-sized sample from the academic world to create a better way to measure performance that keeps outcomes in mind.

We also wanted grantees to be able to tell us their stories and highlight their important accomplishments in a narrative format. Luckily, BJA’s website already housed a success story page where grantees could share their good news. To encourage JAG grantees to post their successes on this page, we included this link in the performance measures. This accomplished two goals: It increased awareness of the success of others, and it made it easier for grantees to share their stories.


The success stories page features some of the attention-grabbing headlines performance measurement lacks. Take, for example, Cook County, IL, where a JAG-funded automated external defibrillator (AED) was used to save a woman’s life; or Polk County, FL, where a JAG-funded drug court helped a client overcome heroin addiction. When these stories are paired with robust, research-informed measures, a more complete picture of the value of the JAG program begins to emerge. Evidence and emotion can be effectively presented together through performance measurement.

Some have suggested that there is a long-standing rivalry between performance measurement analysts and evaluation analysts in government. This does not seem to be the case at BJA. By incorporating ideas from evaluation survey instruments, performance measurement has expanded, creating opportunities for performance measurement analysts and academics to work together. Likewise, including and promoting moving anecdotes offers the media the chance to work with performance measure analysts, backing up headlines with data.

Performance measurement as a field still has room to grow. It should not resign itself to research’s shadow, but should embrace new ways to promote its own merits. Performance measurement needs to be innovative, effective, and emotional, and performance measure analysts need to finally see themselves as more than just data crunchers.


Matthew D. Kenyon is a senior staff member with CSR, Incorporated, where he works as a performance measure analyst and associate manager of a Department of Justice contract. He has spent much of his career working in and with law enforcement agencies throughout Virginia. He is currently pursuing a doctorate in criminology, law, and society at George Mason University, where his research interests include law enforcement organizations, innovation and change, and procedural justice.