How to Develop an Evaluation Plan and What are the Characteristics of an Effective Evaluator  NIH MORE 2010 “How To…” Conference – Chicago 

June 10 & 11, 2010
Alan R. Peterfreund, Ph.D., SageFox Consulting Group

There are five components common to all of the NIH MORE programs:

1) Student recruitment/selection
2) Student performance and progress
3) Student-related interventions/activities
4) Faculty development
5) Institutional change

In this workshop, we are going to consider how each of these components is evaluated. Our framework will be the Logic Model and supportive material developed by the University of Wisconsin-Extension program, which participants were asked to review prior to the meeting (selected highlights are included in this handout).

In small groups, you are going to work on two or three of the five components. The categories of outcomes gleaned from participant plans provided prior to this workshop should serve to quickly focus each group on outcomes of common interest. Given the paucity of outcomes associated with institutional change, we are asking that it be one of the components you consider. It will also be the only one we discuss when we reconvene with about 20 minutes remaining in our session.

For each component:

1) Identify a small set of important outcomes (see handout for guiding questions).
2) Develop questions around: "What would you want to know as program leaders in order to improve your program (formative)? What would you want to report to NIH in an annual report and to stakeholders on your campus (summative)?"
3) What evidence would you like to have that evaluates the outcome?

For the readout at the end of the session, each group should be prepared to share answers associated with one outcome that addresses institutional change.

[email protected] · June 2010 · SageFox Consulting Group
NIH MORE 2010 "How To…" Conference – Evaluation

Categories of Outcomes – From Participant Submissions

Student Recruitment/Selection
• The pool of potential students
  o Outreach to partner institutions and summer programs
  o In-reach to current students and transfers
• Targeting of underrepresented groups
  o Defined by race/ethnicity, economic disadvantage, and/or disability
• Efficacy of pre-program preparatory and recruitment activities
• Selection standards and processes
• Efficacy of marketing strategy

Student Performance and Progress
• Development of/changes in interest and plans
• Productivity in science
  o Achieved through publications, presentations, posters
• Enhanced academic performance
• Retention in degree program
• Time to degree
• Degree attainment
• Performance on critical exams (qualifiers, GRE)
• Achievement of honors
• Advancement to the next academic level
• Quality of programs accepting the students
• Job attainment

Student-Related Interventions/Activities
• Development of specific and general skills
• Provision of social support and integration into the group
• Enhancement of self-efficacy
• Outcomes specific to the program activity

Activities Mentioned

Student skill development
• Programs to enhance math or writing skills
• Facilitated Study Groups/Supplemental Instruction
• Training in scientific communication, grant writing
• Ethics training
• Financial planning

Exposure to science and research
• Exposure to science, research, or labs
• Research experiences (on campus, off campus, rotating through labs)
• Conference attendance and participation
• Journal club
• Guest lecture programs

Student outreach to other populations

Family outreach or engagement

Advising and mentoring

Preparation for future opportunities
• Help with applications to graduate school or summer REUs
• Support of alumni
Faculty Development
• Enhanced scientific productivity
• Participation in the scientific community
  o Conferences such as SACNAS, ABRCMS
• Development of mentoring/advising skills
• Incorporation of new pedagogy
• New practices in running a research group
• Increased diversity in the lab and department

Institutional Change
• Identification of barriers to success
• Changes in success of the population of underrepresented students
• Increased numbers of underrepresented students enrolled in biomedical disciplines
• Increased numbers of underrepresented students advancing to PhD programs
• Changes in policies or practices
• Curriculum reform
• Transfer agreements
• Partnerships with feeder schools or receiving programs
• Coordination with other student support programs (on or off campus)
• Institutionalization of program elements
• Creation of models of sustainability
The following are highlights from material produced by UW-Extension Program Development: the online self-study module "Enhancing Program Performance with Logic Models," http://www.uwex.edu/ces/lmcourse/interface/coop_M1_Overview.htm

Outputs vs. Outcomes

Understanding the difference between outputs and outcomes is important. Outputs relate to "what we do." Outcomes refer to "what difference is there." In the past, we've tended to focus on what is included in the outputs column: the "what we do and who we reach." We are anxious to tell our clients, funders, and community partners what it is that we do, the services we provide, how we are unique, who we serve... We've done a good job of describing and counting our activities and the number of people who come. Now, however, we are being asked: "What difference does it make?" This is a question about OUTCOMES.

Identifying Outcomes

Identifying specific, measurable outcomes requires time, thought, and a clear understanding of desired results. Some ways to do this include:

• Ask yourself: What is/will be different as a result of the initiative? For whom? What will be changed/improved? What do/will beneficiaries say is the value of the program? What do/will they say about why they come?
• Think about what you want to be able to say to your funder or the taxpayers who finance your program. What would you want to say to your state legislator? If you could write a news release about your program, what would the headline be? Your answers to these questions are most likely outcomes.
• For an existing program, look at all the program's major activities. For each activity, ask yourself, "Why are we doing that?" Usually, the answer to the "Why?" question is an outcome.
• Seek ideas and input from others. Their perspectives will help provide a broader understanding of the program and its benefits. This activity will also help build consensus among key program stakeholders. You might talk with current and past participants, funders, peers, local officials, board members, and informed outsiders.
• Review existing program material.

Some people apply the SMART format when writing outcomes. SMART refers to:

• Specific: concrete; who or what is expected to change
• Measurable: can see, hear, count, smell it
• Attainable: likely to be achieved
• Results-oriented: meaningful, valued results
• Timed: target date

Section Summary

Outcomes are the benefits that result from the program or initiative. These benefits may be for individuals, groups (including households and families), agencies and organizations, systems, or communities. Outcomes relate to the knowledge, skills, attitudes, motivations, values, capacities, behaviors, practices, policies, decision making, and actions that affect our economic, social, civic, and environmental conditions.

Outcomes:
• Are not "what we do" but what results from what we do that is of value or benefit to others.
• Often occur over time on a continuum from short- to longer-term results.
• May focus on the individual, group (family), agency, systems, or community.
• Are usually best identified through the involvement of others.
• Should be important, realistic, and reasonable.
• Can be articulated in succinct, action-oriented statements.
• Can be unintended and negative, so it is important to think about the unexpected, as well as the expected, as we craft our logic models.

Evaluation Section Summary
• Our advice is to keep evaluation straightforward and simple. Focusing on what you need to know, and who will use the information for what, helps to contain an evaluation and keep it useful.
• A logic model is not an evaluation model but rather a process and a framework. The logic model facilitates effective evaluation by helping you:
  o Determine what to evaluate.
  o Identify appropriate questions for your evaluation based on the program.
  o Know what information to collect to answer your evaluation questions (the indicators).
  o Determine when to collect data.
  o Determine data collection sources, methods, and instrumentation.