An official website of the United States government, Department of Justice.


A thorough program evaluation will require the allocation of resources to analyze the data collected. Agencies with planning and research divisions may want to identify agency staff and allocate a percentage of their time during the program design phase to coordinate or conduct these evaluations. Agencies without research capacity may benefit from outside assistance in aggregating, deciphering, and interpreting the data to determine program effectiveness.

Because of the challenges associated with data collection, as well as the difficulties in analyzing often incomplete data, many law enforcement agencies partner with a local college or university to assist with this process. Academic partners may require compensation for which law enforcement agencies may need to find sources of support, including submitting joint grant proposals. If the agency chooses to engage an external research partner, these outside teams will need to work closely with law enforcement and their collaborators during the evaluation process, and this staff time commitment should be considered at the planning stage.

Law enforcement agencies should designate a staff person who will work with a subcommittee on evaluation issues. In addition to helping to ensure that all agencies that are contributing data are using sound and accurate collection and reporting practices, this group can determine how the evaluation results will be used, how they will be disseminated, and who should review interim reports and the interpretations of the data.

For additional information see:


Measuring Excellence: Planning and Managing Evaluations of Law Enforcement Initiatives


Performance measurement is a management tool for monitoring a program on a regular, ongoing basis. It is typically conducted by program or agency managers and is part of an agency's management information system used to monitor progress on a variety of law enforcement activities and indicators. Performance measurement focuses on whether a program has achieved its objectives, expressed as measurable performance standards. Performance measures may address the type or level of program activities conducted (process), the direct products and services delivered by a program (outputs), and/or the results of those products and services (outcomes).

Program evaluations take a longer view and provide more detailed information for policy and program decisions. Program evaluations typically examine a broader range of information on program performance and its context than is feasible to monitor on an ongoing basis. Often, program evaluations are conducted by, or in collaboration with, an external evaluator, such as a college professor. The program evaluation should contain both a process assessment and an assessment of outcomes. This allows the agency to revise activities that are experiencing difficulties, enhance those that are effective, and provide evidence of the program's success to foster sustainability.

For both performance measurement and program evaluation, data should be collected for both process and outcome measures. Evaluating a program's process will allow managers to assess whether the proposed activities are being carried out (how many individuals were trained, how many calls were answered by an officer with training, and more).

For additional information see:


Key Concepts and Issues in Program Evaluation and Performance Measurement


Measuring Performance in a Modern Police Organization


The primary challenges law enforcement faces include cumbersome data collection systems and the sheer volume of police activity that can be counted and categorized. Some specific challenges include:


  • Computer-Aided Dispatch (CAD) systems categorize calls for service and attach codes to the call types. Not all CAD systems have mental health call codes.
  • Codes can be limited in their ability to track mental health calls because complaints involving people with mental illness often are described in other terms, such as "suspicious activity," "welfare check," or "disturbance." When calls are coded imprecisely it is difficult to count the true number of mental health calls.
  • Changing or adding codes to allow for more accurate coding of calls can be restricted by the type of CAD system being used or the cost to program new codes. Changing codes can also be problematic when the law enforcement agency does not have oversight of the emergency communications center, but must rely on others to authorize those changes.
  • Records management systems (RMS) are the repository for officer incident reports, which include detailed information about certain calls for service. However, CAD and RMS platforms usually run on different systems, making it difficult to match call-for-service data to incident reports and to track information on an incident from initial call to final disposition.
  • Officers have an enormous amount of documentation to complete and information to record, and may be resistant to completing additional forms—such as those required for collecting more detailed information on incidents involving people with mental illnesses.
  • Officers may not always be clear about when to re-code calls that involve a person with mental illness, or what information to include in reports either because of a lack of training or lack of policies to guide them.
  • Many agencies use paper-based forms to collect information (as compared to use of mobile data computers), which require many person hours to enter and analyze.
  • Many agencies that are able to collect data often lack trained (or appropriate) personnel to conduct analyses of the data.
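Because mental health calls are often hidden under generic call types such as "disturbance" or "welfare check," one common workaround is to screen call narratives for relevant terms and flag candidates for manual review. The sketch below illustrates that approach; the call types, narratives, and keyword list are hypothetical assumptions, not a standard coding scheme.

```python
# Hypothetical sketch: flagging possible mental health calls in CAD data
# when no dedicated call code exists. Keywords and records are illustrative.

MH_KEYWORDS = {"mental", "crisis", "welfare check", "suicidal"}

def is_possible_mh_call(call_type: str, narrative: str) -> bool:
    """Return True if the call type or narrative suggests a mental health call."""
    text = f"{call_type} {narrative}".lower()
    return any(kw in text for kw in MH_KEYWORDS)

calls = [
    {"type": "DISTURBANCE", "narrative": "subject in crisis, talking to self"},
    {"type": "WELFARE CHECK", "narrative": "neighbor worried about elderly man"},
    {"type": "BURGLARY", "narrative": "rear window forced open"},
]

# Flag candidates for human review rather than treating keyword hits as
# definitive counts -- imprecise coding cuts both ways.
flagged = [c for c in calls if is_possible_mh_call(c["type"], c["narrative"])]
print(len(flagged))  # 2 of the 3 sample calls are flagged
```

Keyword screening of this kind over-counts and under-counts; it is a triage step that narrows the set of records an analyst must read, not a substitute for accurate call codes.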

For additional information see:


International Association of Chiefs of Police CAD-RMS resources

Standard Functional Specifications for Law Enforcement Records Management Systems Version II

Standard Functional Specifications for Law Enforcement Computer Aided Dispatch (CAD) Systems

RMS/CAD: Exploring Technology Brings the Future to the Present


Reliable data is essential for demonstrating PMHC program activities and performance, ensuring that scarce resources are effectively managed, demonstrating to government decision-makers that the program is meeting its goals, requesting funding through annual budgets or grants, and garnering the support of behavioral health providers and other community stakeholders. Agencies that cannot collect data cannot measure performance.

Some of the frequently used performance measures that rely upon data include:


Process measures:

  • Officers trained
  • Training effectiveness
  • Officers selected as PMHC specialists
  • Policies developed
  • MOUs developed


Operational measures:

  • The number of calls for service involving people with mental illnesses
  • Duration of calls for service
  • Percentage of calls that specially trained personnel handle
  • Repeat calls for the same individuals
  • Repeat locations for mental health calls
  • Frequency of disposition decisions
    • resolve at scene
    • provide referral to behavioral health resources
    • transport for voluntary treatment
    • involuntary examination and hold
    • arrest
  • The frequency of use of force during mental health calls
  • The number of injuries or fatalities to officers, consumers and third parties

Each PMHC program should determine the specific goals and objectives that will guide the data collection process. Then, law enforcement and their partners can identify what information is needed to demonstrate whether progress towards these goals has been made and determine the best method to collect this data.

Many existing data sources—such as Computer-Aided Dispatch (CAD) data, incident reports, jail admissions, emergency medical services (EMS) logs, and emergency room records—can provide useful information, although there are challenges associated with extracting the needed data because these data systems typically were designed to capture information for purposes other than PMHC performance measurement or program evaluation.
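Tracking an incident from initial call to final disposition typically means linking a CAD record to its incident report through a shared identifier such as a case number. The sketch below illustrates that linkage; the field names and records are hypothetical assumptions, and real CAD and RMS exports vary widely and usually need cleaning first.

```python
# Hypothetical sketch: linking CAD calls to RMS incident reports by a
# shared case number. Field names and records are illustrative only.

cad_calls = [
    {"case_no": "2020-0101", "call_type": "DISTURBANCE"},
    {"case_no": "2020-0102", "call_type": "WELFARE CHECK"},
]
incident_reports = [
    {"case_no": "2020-0101", "disposition": "referral to mobile crisis team"},
]

# Index reports by case number, then attach each call's disposition.
reports_by_case = {r["case_no"]: r for r in incident_reports}
linked = [
    {**call, "disposition": reports_by_case.get(call["case_no"], {}).get("disposition")}
    for call in cad_calls
]
# The second call has no matching report, so its disposition stays None;
# unmatched records are usually where manual review effort concentrates.
print(linked[0]["disposition"])  # referral to mobile crisis team
print(linked[1]["disposition"])  # None
```

In practice the join is rarely this clean: case numbers may be formatted differently in each system or missing entirely, which is one reason matching call-for-service data to incident reports is listed among the challenges above.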

Date Created: May 14, 2020