Section 3: Routine Assessment and Self-Evaluation
Evaluation Techniques
The type of assessment and self-evaluation you decide on depends on the data you have and the outcomes you wish to evaluate. Though we often use the term self-evaluation in the general sense, there are many types of evaluations. The five most common you might use for the TJC initiative are
1. Process Evaluation: Documents all aspects of program planning, development, and implementation and how they add value to services for those transitioning from the jail to the community.
Data sources that support process evaluations usually include program materials, direct observation of the intervention, and semi-structured in-person interviews with staff and other stakeholders that focus on the intervention.
2. Outcome Evaluation: Assesses the extent to which an intervention produces the intended results for the targeted population; outcome evaluations typically use some kind of comparison group (e.g., participants who are similar to the target population but don’t get the intervention being evaluated). This technique is more formal than performance measurement.
Note: Outcome evaluations are in-depth studies that include comparison groups; these evaluations take many months to obtain results and are often expensive. An independent evaluator may be needed. The benefit of an outcome evaluation is that it answers specific questions and it attributes outcomes directly to the program or initiative studied.
3. Performance Measurement: Based on regular and systematic collection of data to empirically demonstrate results of activities.
Note: Performance measurement only tracks outcomes. Unlike an outcome evaluation, it cannot attribute those outcomes or changes to specific program activities. However, performance measurement is relatively easy to design and implement, and it is less resource intensive than outcome evaluations.
4. Cost-Benefit Evaluation: Measures how much an initiative, its programs, and partnerships cost, and what, if any, long- and short-term savings the initiative generated.
5. Quality Assurance (QA) Assessment: Involves systematic monitoring of the various aspects of a program, service, or process to ensure that standards of quality are being met; under TJC, this would include your screening, assessment, programming, and case planning services. For example, data collection that supports QA practices could include a short pre- and post-test questionnaire administered to participants before a class starts and again at its end, or a brief client satisfaction survey asking them about the quality of services they received.
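As a minimal sketch of how such QA data might be summarized, the example below computes the average pre/post change and the average satisfaction rating. All names and scores here are hypothetical illustrations, not data from the TJC initiative:

```python
# Sketch of summarizing QA data: pre/post questionnaire scores and a
# client satisfaction survey. All values below are hypothetical.

def mean(values):
    """Average of a list of numbers."""
    return sum(values) / len(values)

# Pre- and post-test scores for the same four participants (hypothetical).
pre_scores = [12, 9, 15, 11]
post_scores = [18, 14, 19, 13]

# Average change from pre-test to post-test.
avg_change = mean([post - pre for pre, post in zip(pre_scores, post_scores)])

# Client satisfaction survey responses on a 1-5 scale (hypothetical).
satisfaction = [4, 5, 3, 4, 5]

print(f"Average pre/post change: {avg_change:.2f}")
print(f"Average satisfaction: {mean(satisfaction):.2f} / 5")
```

A summary like this only describes participants who completed both questionnaires; it does not, by itself, attribute the change to the program.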
Below we explore two evaluation techniques in more depth:
Process Evaluation
A process evaluation will help you determine whether the TJC initiative and its programs are being implemented in the intended way, and what types of clients typically participate in the initiative.
The process evaluation focuses on capturing the basic elements of the TJC initiative as it presently functions in your community.
These data would be captured through structured observations of the TJC stakeholders, interviews with program staff, and a review of all available documentation.
Basic system-level questions you would seek to answer include
- What is the overall TJC initiative strategy?
- How is it different from business as usual?
- Who is involved? Who are the stakeholders?
- What does each stakeholder contribute?
- What are the core elements of the approach?
- What are the mechanisms for collecting data on clients—prior history, current experiences, and follow-up?
Additional questions include
- How many agencies, partners, and clients participate in the TJC initiative?
- What is the pool of potential participants?
- What are the eligibility criteria to participate?
- How many participate in each program?
- How long do they remain engaged with each service provider before and after release?
- How do potential participants learn about the TJC initiative?
- How do TJC participants differ from others incarcerated?
- What types of services or referrals does each participant receive?
- What are the background and demographic characteristics of participants for each service?
- What motivated participants to follow up with community providers after release?
Process evaluations also assess penetration rates and program fidelity. These terms are defined below:
Penetration Rate: The TJC initiative’s reach into the target population. In other words, the number of inmates engaged in the program divided by the number of eligible inmates in the target population.
Program Fidelity: How closely the implementation of a program or component corresponds to the original model.
This is particularly important in the TJC initiative because, with limited time and resources, it is imperative that all program elements adhere to the originally designed program model for the intervention to be as successful as possible.
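The penetration rate defined above is a simple ratio of engaged participants to the eligible target population. As a sketch, with hypothetical counts:

```python
# Sketch of the penetration-rate calculation: inmates engaged in the
# program divided by eligible inmates in the target population.
# The counts used below are hypothetical.

def penetration_rate(engaged, eligible):
    """Share of the eligible target population engaged in the program."""
    if eligible == 0:
        raise ValueError("Eligible population must be greater than zero.")
    return engaged / eligible

# Hypothetical counts: 150 inmates engaged out of 600 eligible.
rate = penetration_rate(150, 600)
print(f"Penetration rate: {rate:.0%}")  # prints "Penetration rate: 25%"
```

Tracking this ratio over time shows whether the initiative's reach into its target population is growing or shrinking.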
Quality Assurance: A robust QA process supports the improvement of transition work over time (and makes deterioration in quality less likely). A QA plan allows all providers to participate in a process of self improvement. It also pushes the development of clear shared standards for how key elements of the transition process should be carried out, fostering consistency of approach throughout the system.
The following programmatic Quality Assurance strategies/activities are critical for monitoring how effectively your programs are performing.
First, identify the key components that make this a quality, evidence-based process:
- Is it an evidence-based or a best-practice program?
- What types of offenders are best suited to benefit from the program?
- Are risk-to-reoffend screening data used to inform placement and/or system action?
- How are offenders identified for placement in the program (e.g., based on what criteria, and by whom)?
- What are the minimum resources required to implement the program effectively (e.g., qualified staffing, adequate space, appropriate technology, sufficient time, participant criteria)?
- Does the program come with a comprehensive curriculum and training documents provided by the program developer?
- Is there an understanding of how the program was intended to be implemented? For example, the program’s duration, class size, frequency of sessions or activities, and materials to be used or discussed in delivery of the program.
- Is there an agreement on what system and individual level outcomes would indicate program success (i.e., the program is achieving the desired outcomes)? Is there a clear target population for the program?
- Does the program target and reduce specific criminogenic needs?
Second, work with staff on site:
- What were the criteria for program staff selection?
- Is the staff familiar with the participants’ needs?
- Does the staff person have a background in delivering groups?
- Are staff experienced in delivering these curricula to an offender population within a correctional environment?
- Was the staff provided comprehensive training before program implementation?
- Does staff understand and support screening and assessment and identification of offender groups for programming?
- Do staff have characteristics that facilitate communication?
- Is a thorough implementation plan developed prior to the start of the program?
- Are appropriate resources made available to staff and participants?
- Does the staff have access to a staff training manual?
- Is there ongoing training and supervision for the program staff?
- Have staff been tested to ensure they understand the program curriculum, requirements, and goals?
Third, monitor the program’s operations and measure the program’s performance.
- Are screening and assessment procedures and processes followed as designed (e.g., are the right people being screened and assessed)?
- Are program eligibility criteria adhered to?
- Are programs being facilitated/delivered by trained (certified) staff?
- Are case plans being developed in a timely manner according to established benchmarks determined by the initiative’s partners?
- Do case plans incorporate assessment data and address the individual’s criminogenic needs?
- Is the program held in an adequate space?
- Is there an agreement on what aspects of the program will be measured?
- Do sufficient data exist in electronic format to support performance evaluation?
- Is a system in place, with evaluation tools developed, to gather performance and outcome feedback from program participants and staff (e.g., observations, surveys, administrative data, audits, assessment instruments, and file reviews)?
- Is there adequate record keeping?
- Can you measure short, intermediate, and long-term outcomes?
Fourth, improve the program through:
- Quality team collaboration
- Using a strength-based, supportive approach
- Being results-oriented based upon objective, transparent measures
- Using measures that are individual- and system-focused
- Embracing a learning organization orientation
- Enhancing long-term sustainability through policy adjustments that are informed by objective evaluation
- Celebrating success and improvement
Sample System Questions to consider for maintaining program philosophy and integrity
- What staff will be allocated to oversee the quality assurance (QA) process?
- How will QA outcomes be reported, to whom, and for what purpose?
- How will observations and feedback be structured?
- How will system and individual audits be structured? How often will they be conducted? By whom? How will outcomes be utilized?
- How will this quality assurance process guide the adjustment of curriculum and programming to better meet the needs of the clients being served?
- How will gaps between the current and expected levels of quality be addressed?
- What process will be enacted to utilize QA outcomes to revise policy, procedure, and/or practice?
- How will revisions be reported to TJC, system, or organizational stakeholders?
Final Report: Process & Systems Change Evaluation Findings from the TJC Initiative is a detailed account examining how implementation worked in the TJC Phase 1 learning sites.