Measuring What Matters: The Outcomes and Impact of Science-based Training

By Joseph Trindal and Ari Vidali

When determining key performance indicators and other measures of an evidence-based training program’s success, law enforcement stakeholders may find themselves facing a new set of challenges.

These boil down to three pressing questions:

  1. Do the metrics we currently use to gauge individual officer performance and departmental success still align with training after a shift to evidence-based learning methods?
  1. If new performance indicators are needed, how do we align them with current departmental goals, as well as the goals of larger governing agencies?
  1. Are these measures fair, transparent, and truly reflective of the agency’s role within the community?

The inherent nature of law enforcement work—and public response to the services officers provide—means answering these questions is not easy.

While not directly related to evidence-based training, so-called “ticket quotas” are one example of how performance-monitoring programs can meet with negative public perception and adverse judicial review in many jurisdictions.¹

If a performance metric suggests bias, bad-faith focus, or motives beyond the standard goals of stopping crime and upholding public safety, the eventual backlash can generate community distrust, diminish political support and devastate police morale.

That said, agencies and their stakeholders still have numerous reasons to measure the success of their evidence-based training policies in the field.

Taxpayers and financial stewards alike wish to know that the time and expense allotted to public safety yield measurable, sustainable results.

Officers themselves will likewise wish to know how their training applies directly to on-the-job tasks and the true measures by which their performance is evaluated once they step out of the classroom.

In describing these diverse motivations and challenges, a National Institute of Justice (NIJ) author argues that the day-to-day management of a law enforcement agency is more complex than managing the hundreds of controls and gauges in a commercial airliner’s cockpit, a comparison that almost certainly extends to the nuances of performance monitoring and management:

“Unless something strange or unusual happens along the way, the airline pilot (and most likely an autopilot) follows the plan. For police agencies, ‘strange and unusual’ is normal. Unexpected events happen all the time, often shifting a department’s priorities and course. As a routine matter, different constituencies have different priorities, obliging police executives to juggle conflicting and sometimes irreconcilable demands.”²

Taking the “cockpit” idea a step further, law enforcement executives and others in a decision-making capacity should draw on their knowledge of the department, its specific needs, and outside pressures (particularly those from the community and larger governing bodies) to choose the “instruments” that best reflect relevant measures of effectiveness.

This does not mean an agency must adopt elaborate statistical models, let alone create new ones, to derive value from an evidence-based training regimen. Instead, mapping performance goals to departmental and governmental needs may reveal areas where existing performance indicators could be further refined.

Returning to one example shared in the NIJ piece, an agency formerly concerned with the number of serious crimes reported may turn its focus to the reduction of unreported crime as a more refined performance indicator; likewise, an agency formerly concerned with raw use-of-force complaint counts may find greater value in secondary data points, such as complaints against officers serving in a particular capacity.

On the individual level, performance indicators should follow a similar path, with “instruments” that measure an officer’s ability to contribute to larger departmental and community-based goals.

In many modern departments, the shift in outlook may naturally require a move from justice-focused performance indicators (arrest numbers and case closure rates being two common examples) to community- and service-based metrics.³

While arrests and closed cases will always be important, agencies must also consider the immense pressure for greater community outreach and human interaction.  

Improved community confidence in police correlates directly with better outcomes in addressing neighborhood concerns and reducing crime.⁴

This pressure underlies the ongoing community policing trend and numerous other groundswell changes within law enforcement.

An officer who has undergone implicit bias and de-escalation training may not be best served by a performance evaluation based on hard arrest numbers and little else; it is up to the individual agency to find a mix of factors that works within its unique context, paying special attention to the interdependencies between performance metrics and goals so that on-the-job and evidence-based training evaluative criteria remain aligned.

None of these suggestions is a silver bullet. Even the narrowest best practice must ultimately be tailored to each department. As one officer notes in a Deloitte research piece: “How do you reward an officer that has strong community ties and doesn’t make an arrest all year? Right now, those ties are not captured in metrics, but they can still be effective at preventing crime.”⁵

For an agency grappling with concerns like these, the issue may be less about which measures to monitor and more about precisely what kind of service the department wishes to offer the community, a matter closely tied to the question of departmental goals and community expectations.

In reality, an agency concerned with implementing evidence-based training practices will likely have grappled with these very questions before implementation, which should at least lay a foundation for future change.

On this last point, a growing body of research indicates that measures focused on community trust and perception are “more informative” as performance indicators.

According to research published by BioMed Central, these softer performance indicators pose a paradox for agencies that wish to harness them: because they tend to be more difficult to capture and quantify than less useful “partial” performance indicators, department stakeholders may struggle to justify the return on investment of gathering “better” statistics when traditionally valued measures are cheaply and readily available.⁶

Still, no change comes without effort. Implementing evidence-based training requires a reassessment of on-the-job performance measures by which individual officers, supervisors and the department as a whole are evaluated.

Decision-makers with knowledge of the communities they serve and a real appetite for change can embrace this challenge as a path toward measurably improved community relations, improved officer performance, and a reduction in the costly errors to which ill-trained officers might subject their department and the taxpayer. It is also a step toward a changing face of policing, one that reflects the priorities of the communities served.

This article initially ran in IADLEST’s Why Law Enforcement Needs to Take a Science-based Approach to Training and Education.

References:

  1. Rose J. Despite Laws and Lawsuits, Quota-Based Policing Lingers. NPR. April 4, 2015.
  2. Sparrow MK. Measuring Performance in a Modern Police Organization. New Perspectives in Policing. NIJ. March 2015.
  3. IACP. Starting with What Works: Using Evidence-Based Strategies to Improve Community and Police Relations.
  4. U.S. Department of Justice. Importance of Police-Community Relationships and Resources for Further Reading. Community Relations Services Toolkit for Policing.
  5. Gelles M, Mirkow A, Mariani J. The future of law enforcement. Deloitte Insights. October 22, 2019.
  6. Tiwana N, Bass G, Farrell G. Police performance measurement: an annotated bibliography. Crime Sci. 2015;4:1.

Posted on Jul 15, 2021