
Street Jibe Evaluation Workshop 2

Information about Street Jibe Evaluation Workshop 2
Education

Published on November 14, 2008

Author: brentmack

Source: slideshare.net

Description

Dr. Uzo Anucha - Workshop presentation - StreetJibe - Thinking Critically to Improve Program Effectiveness

A Conversation about Program Evaluation: Why, How and When?

Uzo Anucha, MSW, PhD
Associate Professor – School of Social Work
Director – Applied Social Welfare Research and Evaluation Group
York University

Presentation Outline

Setting the Context for our Program Evaluation Work

Our Evaluation Principles…..

Why Evaluate?

Who is An Evaluation For?

Types of Evaluation

Outcome Evaluation

Planning a Program Evaluation

Engage Stakeholders

Focus the Evaluation

Collect Data

Analyze & Interpret

Use the Information

Ready, Set, Go? Some Things to Consider…..

Setting the Context for our Program Evaluation Work

Our Evaluation Principles…..

We are committed to the following principles/values in our evaluation work

Strengthen projects

Use multiple approaches

Design evaluation to address real issues

Create a participatory process

Allow for flexibility

Build capacity

(W.K. Kellogg Foundation Evaluation Handbook, 1998)

Our Evaluation Approach….

A Critical Approach: Question the questions.

Some questions to consider:

How does this program work?

Why has it worked or not worked? For whom and in what circumstances?

What was the process of development and implementation?

What were the stumbling blocks faced along the way?

What do the experiences mean to the people involved?

How do these meanings relate to intended outcomes?

What lessons have we learned about developing and implementing this program?

How have contextual factors impacted the development, implementation, success, and stumbling blocks of this program?

What are the hard-to-measure impacts of this program (ones that cannot be easily quantified)? How can we begin to effectively document these impacts?

(W.K. Kellogg Foundation Evaluation Handbook, 1998)

Our Evaluation Approach….

We acknowledge the influence of paradigms, politics, and values and are willing to deal with these by:

Getting ‘inside’ the project

Creating an environment where all stakeholders are encouraged to discuss their values and philosophies

Challenging our assumptions

Asking stakeholders for their perspectives on particular issues

Listening

Remembering there may be multiple “right” answers

Maintaining regular contact and providing feedback to stakeholders

Designing specific strategies to air differences and grievances

Making the evaluation and its findings useful and accessible. Early feedback and a consultative relationship with stakeholders and project staff lead to a greater willingness by staff to disclose important and sensitive information

Remaining sensitive to the feelings and rights of individuals

Creating an atmosphere of openness to findings, with a commitment to considering change and a willingness to learn

(W.K. Kellogg Foundation Evaluation Handbook, 1998)

What is Not Program Evaluation? What is Program Evaluation?

What is Not Program Evaluation?

Program evaluation is not an assessment of individual staff performance. The purpose is to gain an overall understanding of the functioning of a program.

Program evaluation is not an audit – evaluation does not focus on compliance with laws and regulations.

Program evaluation is not research. It is a pragmatic way to learn about a program.

What is Not Program Evaluation?

Program evaluation is not one method. It can involve a range of techniques for gathering information to answer questions about a program.

Most programs already collect a lot of information that can be used for evaluation. Data collection for program evaluation can be incorporated in the ongoing record keeping of the program.

What is Program Evaluation?

Program evaluation means taking a systematic approach to asking and answering questions about a program.

Program evaluation is a collection of methods, skills and sensitivities necessary to determine whether a human service is needed and likely to be used, whether it is sufficiently intensive to meet the unmet needs identified, whether the service is offered as planned, and whether the human service actually does help people in need at reasonable cost without undesirable side effects (Posavac & Carey, 2003, p. 2)

Why Evaluate?

Verify that resources are devoted to meeting unmet needs

Verify that planned programs do provide services

Examine the results

Determine which services produce the best results

Select the programs that offer the most needed types of services

Provide information needed to maintain and improve quality

Watch for unplanned side effects

Create program documentation

Help to better allocate program resources

Assist staff in program development and improvement

Evaluation can….

Increase our knowledge base

Guide decision making

Policymakers

Administrators

Practitioners

Funders

General public

Clients

Demonstrate accountability

Assure that client objectives are being achieved

Who is an evaluation for?

What do they want to know?

What do we want to tell them about the program?

How can they contribute to the evaluation?

Program participants?

Family members and caregivers?

Program staff?

Volunteers?

Partner agencies and professionals?

Referral sources?

Funders?

Others?

Types of Evaluation….

Needs assessment

Evaluability assessment

Process evaluation

Outcome evaluation

Efficiency evaluation (cost evaluation)

Process Evaluation….

Sometimes referred to as “formative evaluation”

Documents and analyzes how a program works and identifies key factors that influence the operation of the program.

Allows for a careful description of a program’s actual implementation and services, thereby facilitating replication of the program.

Emphasis is on describing activities and characteristics of clients and workers.

Allows for an investigation of whether services are delivered in accordance with program design and makes it possible to study the critical ingredients of a model.

Findings of a process evaluation are critical in shaping further development of a program’s services and assist in explaining why program objectives are (or are not) being met.

Focuses on verifying program implementation: the approach to client service delivery and the day-to-day operations of the program

Two major elements:

1) How a program’s services are delivered to clients (what workers do, including frequency and intensity; client characteristics; satisfaction)

2) Administrative mechanisms to support these services (qualifications; structures; hours; support services; supervision; training)

Examples of Process Evaluation Questions:

Is the program attracting a sufficient number of clients?

Are clients representative of the target population?

How much does the staff actually contact the client?

Does the workload of staff match that planned?

Are there differences in effort among staff?

Outcome Evaluation….

Outcomes are benefits or changes for individuals or populations during or after participating in program activities. Outcomes may relate to behavior, skills, knowledge, attitudes, values, condition, or other attributes.

They are what participants know, think, or can do; or how they behave; or what their condition is, that is different following the program.

Outcome evaluation helps us to demonstrate the nature of change that took place

Outcome evaluation tests hypotheses about how we believe that clients will change after a period of time in our program.

Evaluation findings are specific to a specific group of clients experiencing the specific condition of one specific program over a specific time frame at a specific time.

For example:

In a program to counsel families on financial management, the outputs (what the service produces) include the number of financial planning sessions and the number of families seen. The desired outcomes (the changes sought in participants' behavior or status) can include developing and living within a budget, making monthly additions to a savings account, and having increased financial stability.
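To make the output/outcome distinction concrete, here is a minimal sketch in Python; the field names and numbers are entirely hypothetical and are not taken from the presentation.

```python
# Hypothetical illustration only: outputs count what the service produces,
# outcomes capture changes in participants' behavior or status.
outputs = {
    "financial_planning_sessions": 24,   # sessions delivered this quarter
    "families_seen": 12,                 # families who attended at least once
}

# One record per family: status before and after the program.
outcomes = [
    {"family_id": 1, "within_budget_before": False, "within_budget_after": True,
     "monthly_savings_deposits_after": 3},
    {"family_id": 2, "within_budget_before": False, "within_budget_after": False,
     "monthly_savings_deposits_after": 0},
]

# An outcome summary asks how many families changed, not how many were served.
improved = sum(1 for f in outcomes
               if f["within_budget_after"] and not f["within_budget_before"])
print(f"{improved} of {len(outcomes)} families began living within a budget")
```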

Uses of Outcome Evaluation

Improving program services to clients

Generating knowledge for the profession

Estimating costs

Demonstrating the nature of change: evaluation of program objectives, e.g. what we expect clients to achieve

Guiding major program decisions and program activities

Describe program effects

Is the desired outcome observed?

Are program participants better off than non-participants?

Is there evidence that the program caused the observed changes?

Is there support for the theoretical foundations underpinning the program?

Is there evidence that the program could be implemented successfully elsewhere?

Program-Level Evaluations

Program-level evaluations vary on a continuum and are fundamentally made up of three levels:

Exploratory

Descriptive

Explanatory

Exploratory Outcome Evaluation Designs

Questions here include:

Did the participants meet a criterion (e.g. Treated vs. Untreated)?

Did the participants improve (e.g. appropriate direction)?

Did the participants improve enough (e.g. statistical vs. meaningful difference)?

Is there a relation between change and service intensity and participant characteristics?
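A minimal sketch of how these exploratory questions might be examined is shown below, assuming hypothetical participant records with pre/post scores and a simple service-intensity measure (sessions attended); the column names and numbers are illustrative, not drawn from StreetJibe data.

```python
# Hypothetical exploratory outcome analysis: did participants improve,
# did they improve enough, and is change related to service intensity?
import pandas as pd
from scipy import stats

data = pd.DataFrame({
    "pre_score":  [10, 14, 9, 12, 11, 15, 8, 13],
    "post_score": [13, 18, 10, 16, 15, 17, 9, 18],
    "sessions_attended": [4, 9, 2, 7, 6, 10, 1, 8],
})

# Did participants improve, and in the appropriate direction?
data["change"] = data["post_score"] - data["pre_score"]
print("Mean change:", data["change"].mean())

# Did they improve "enough"? The paired t-test speaks to statistical
# difference; the effect size speaks to meaningful difference.
t_stat, p_value = stats.ttest_rel(data["post_score"], data["pre_score"])
effect_size = data["change"].mean() / data["change"].std(ddof=1)
print(f"Paired t = {t_stat:.2f}, p = {p_value:.3f}, d = {effect_size:.2f}")

# Is there a relation between change and service intensity?
r, p = stats.pearsonr(data["sessions_attended"], data["change"])
print(f"Change vs. intensity: r = {r:.2f}, p = {p:.3f}")
```

Without a comparison or control group, even a clear improvement cannot be attributed to the program itself; that limitation is what the descriptive and explanatory designs below address.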

Exploratory Designs

One-group posttest only

Multi-group posttest only

Longitudinal case study

Longitudinal survey

Strengths of Exploratory Designs

Less intrusive and inexpensive

Assess the usefulness and feasibility of further evaluations

Can correlate improvement with other variables.

Descriptive Designs

To show that something causes something else, it is necessary to demonstrate:

1. That the cause precedes the supposed effects in time, e.g. that an intervention precedes the change

2. That the cause covaries with the effect – the change covaries with the intervention – the more the intervention, the more the change.

3. That no viable explanation of the effect can be found except for the assumed cause, e.g. there can be no other explanation for the change except the intervention.

Both 1 and 2 can be achieved with exploratory designs…but not 3.

Descriptive Designs

Randomized one-group posttest only

Randomized cross-sectional and longitudinal survey

One-group pretest-posttest

Comparison group posttest only

Comparison group pretest-posttest

Interrupted time series
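As an illustration of one design from this list, the comparison group pretest-posttest design, here is a minimal Python sketch with hypothetical scores. It shows the logic of comparing change across groups, which addresses timing and covariation (criteria 1 and 2 above) but, without random assignment, cannot fully rule out alternative explanations (criterion 3).

```python
# Hypothetical comparison group pretest-posttest analysis: change in the
# program group is compared with change in a non-randomized comparison group.
import pandas as pd
from scipy import stats

program = pd.DataFrame({"pre": [10, 12, 9, 14, 11], "post": [15, 17, 12, 19, 16]})
comparison = pd.DataFrame({"pre": [11, 10, 13, 9, 12], "post": [12, 11, 14, 10, 13]})

program_change = program["post"] - program["pre"]
comparison_change = comparison["post"] - comparison["pre"]

# A larger change in the program group than in the comparison group is
# consistent with a program effect, but selection differences remain possible.
t_stat, p_value = stats.ttest_ind(program_change, comparison_change)
print(f"Program mean change:    {program_change.mean():.2f}")
print(f"Comparison mean change: {comparison_change.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```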

Explanatory Designs

The defining characteristic is observation of people randomly assigned to either a program or control condition.

Considered much better at addressing threats to internal validity

Program group vs. control group: if groups are formed randomly, there is no reason to believe they differ in rate of maturation; there is no self-selection into groups; and the groups did not begin at different levels.
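As a small illustration of this defining feature, the sketch below shows one way random assignment to program and control conditions might be carried out; the participant IDs and group sizes are hypothetical.

```python
# Hypothetical random assignment of participants to program and control groups.
import random

participants = [f"participant_{i}" for i in range(1, 21)]

random.seed(42)               # fixed seed so the assignment can be reproduced
random.shuffle(participants)

# First half of the shuffled list goes to the program, second half to control.
midpoint = len(participants) // 2
program_group = participants[:midpoint]
control_group = participants[midpoint:]

print("Program group:", sorted(program_group))
print("Control group:", sorted(control_group))
```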

Explanatory Designs

Classical experimental

Solomon four group

Randomized posttest only control group

Strengths/Limitations:

counter threats to internal validity

allow interpretations of causation

expensive and difficult to implement

frequent resistance from practitioners who feel they already know what is best

Suggested Times to Use:

when new program is introduced

when stakes are high

when there is controversy over efficacy

when policy change is desired

when program demand is high

Planning a Program Evaluation

Engage Stakeholders

Focus the Evaluation

Collect Data

Analyze & Interpret

Use the Information

Engage Stakeholders

Who should be involved?

How might they be engaged?

Identify & meet with stakeholders – program director, staff, funders/program sponsors and clients/program participants.

Focus the Evaluation

What are you going to evaluate? (Describe program logic model/theory of change)

What is the evaluability of the program?

What is the purpose of the evaluation?

Who will use the evaluation? How will they use it?

What questions will the evaluation seek to answer?

What information do you need to answer the questions?

When is the evaluation needed?

What evaluation will you use?

Collect Data

What sources of information will you use?

Intended beneficiaries of the program (program participants, artifacts, community indexes)

Providers of service (program staff, program records)

Observers (expert observers, trained observers, significant others, evaluation staff)

What data collection method(s) will you use?

When will you collect data for each method you’ve chosen?

Analyze & Interpret

How will the data be analyzed?

Data analysis methods

Who is responsible

How will the information be interpreted – by whom?

What did you learn?

What are the limitations?

Use the Information

How will the evaluation be communicated and shared?

To whom?

When?

Where?

How to present?

Next steps

Ready, Set, Go? Some things to consider…..

StreetJibe: Summary of Process and Outcome Evaluation Questions

Planning an evaluation follows steps similar to those of more basic research, with some additional considerations

More effort needs to be expended in engaging and negotiating with stakeholder groups

There needs to be a keener awareness of the social/political context of the evaluation (e.g. differing and competing interests)

Important to consider…

Internal or external evaluators?

Scope of evaluation?

Boundary

Size

Duration

Complexity

Clarity and time span of program objectives

Innovativeness

Challenging Attitudes toward Program Evaluation…….

Expectations of slam-bang effects

Assessing program quality is unprofessional

Evaluation might inhibit innovation

Program will be terminated

Information will be misused

Qualitative understanding might be lost

Evaluation drains resources

Loss of program control

Evaluation has little impact
