A Review of Open Education Resources for Program Evaluation at the Undergraduate Level

Finn McLafferty Bell, PhD, MSW

Assistant Professor of Human Services

University of Michigan – Dearborn


In the Health & Human Services major at the University of Michigan – Dearborn (UMD), the capstone course, “Program Evaluation,” was recently merged with a course on “Program Planning and Implementation,” so that the capstone is now “Program Planning & Evaluation.” Knowing that these two courses would be merged, and having a strong and relatively inexpensive traditional textbook for program planning, I wanted to explore the Open Education Resources (OER) available for program evaluation. Program evaluation is applied research common to practice-based professions such as social work, public health, and education. However, program evaluation courses are more common at the graduate level than at the undergraduate level and are often taught in conjunction with a field practice or internship experience. Students in our program are not required to complete an internship, and, as most students on our campus are working-class with significant work and family responsibilities, few complete unpaid internships or come into the class with paid work experience in nonprofit, governmental, or other professional settings.

While I was looking for OER specific to program evaluation and, ideally, geared toward the undergraduate level, I found very little that matched these exact specifications. Instead, I found one Master of Social Work (MSW)-level OER on practice evaluation and a community health toolbox that covers program evaluation, as well as a handful of OER on social science research more generally and basic primers on statistical methods more specifically. Although students are now required to take a basic research methods course prior to enrolling in the capstone, they do not need to take the more advanced “Quantitative Research & Statistics” class as a prerequisite. Thus, all of the OER reviewed below could be helpful if used in combination to cover the basics of program evaluation at the undergraduate level, and I hope that these reviews will be useful to other instructors interested in OER for program evaluation and other applied research courses at the undergraduate level.


A Primer on Practice Evaluation: How to Participate in the Process of Evidence-based Practice by Elspeth Slayter

Link: https://pressbooks.pub/eslayter

License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License

Publication Date: July 1, 2020

Slayter delivers on the promise of providing a “Primer” that generally uses simple language and is geared toward maximum student usability. In true social work fashion, Slayter meets students where they are, which at the start of a research and evaluation course is more than likely a place of trepidation. The author is particularly adept at providing helpful explanations of complex concepts (such as reflectivity vs. reflexivity and the deconstruction-reconstruction process). From the outset, Slayter makes the intended audience clear—MSW students in field sites. While this clarity is helpful, it immediately made me question how usable this resource would be in my decidedly different classroom setting, a concern that proved valid as I read more.

Although it is stated in the title, the focus on “practice” evaluation rather than “program” evaluation is an important distinction. Rather than teaching students how to plan evaluations to ensure that programs as a whole (e.g., the intensive case management department of a community health clinic) are meeting their goals and objectives, this text focuses on training individual practitioners (e.g., each case manager) to evaluate their own practice. While there is a lot of overlap between practice evaluation and program evaluation, and many of the skills that Slayter covers are relevant to both, the focus on individual practitioners is hard to relate to for undergraduate students without practice experience or outlets. How Slayter characterizes practice evaluation also betrays the clinical focus of the text—for many macro social work practitioners, particularly administrators, programs are the unit of practice to be evaluated.

As is true with many research and evaluation resources, Slayter’s primer gives only passing attention to qualitative methods. Additionally, while the author underscores the importance of practice evaluation at micro, mezzo, and macro levels, the focus is on clinical work, and the examples of macro practice evaluation are simplistic and vague. For full disclosure, I am a macro social worker and qualitative researcher, so I am likely particularly sensitive to oversights in these areas.

In addition to the author’s target audience of MSW students in evaluation courses, I would recommend this OER for field education classes in clinical pathways, where the emphasis on individually focused, on-the-ground evaluation and evidence-based practice could be particularly helpful to beginning practitioners. Overall, I think the author was successful in achieving the aim of providing a primer on practice evaluation for MSW students.


“Evaluating Community Programs & Initiatives” (Chapters 36-39) in The Community Tool Box

by the Center for Community Health and Development at the University of Kansas

Link: https://ctb.ku.edu/en/table-of-contents

License: Creative Commons Attribution-NonCommercial-ShareAlike 3.0 United States License

Publication Date: 1994-2023

The Community Tool Box is a public service from the University of Kansas (KU) to help equip people, groups, and organizations to make change in their own communities to improve health outcomes. KU developed this resource with the “fervent hope…that these tools can make it easier for people to take action to assure healthier and more just communities throughout the world” (“About the Tool Box”). In this review, I am focusing specifically on Chapters 36-39, which collectively make up the “Evaluating Community Programs and Initiatives” portion of the Tool Box.

Chapter 36 starts with clear definitions of both evaluation and programs, along with many examples of each. This is particularly helpful for students who are unfamiliar with nonprofit or governmental work and who struggle to understand what a program is. The authors also delve into the “why” of program evaluation before turning to the “how,” which again is important for building buy-in for why learning about and conducting program evaluation matters. Not surprisingly for a public health tool that is a shining example of community-engaged scholarship, the Community Tool Box incorporates community-based participatory research principles throughout its discussion of program evaluation. After introducing readers to the basics of program evaluation, Chapter 36 ends with how to create an evaluation plan.

Chapter 37 picks up where Chapter 36 left off, in choosing evaluation questions and developing a plan. While this overlap creates some redundancy if you assign all of the chapters as reading, the repetition allows the chapters to work well as stand-alone readings. Chapter 37 then focuses primarily on data collection, covering both key considerations and common sources of data for program evaluation. Chapter 38 moves on to research methods used in evaluating community initiatives. Some of these methods are so specific that they are not broadly useful, but the additional online resources in this chapter are especially helpful supplements. As is true of many evaluation resources, qualitative methods are underrepresented, making up only one of the seven specific methods covered in this section. Chapter 39 effectively closes the loop on program evaluation by discussing how evaluation should be used to actually improve programs. This may seem like an obvious conclusion, but it is so often neglected that the resources the Tool Box provides on how to meaningfully use and incorporate evaluation results into programming and future evaluation plans are refreshing.

The Community Tool Box was clearly developed to serve as many people as possible. This is a major strength in terms of broad usefulness and accessibility, but the resource’s comprehensiveness can also strain undergraduate students’ attention spans, particularly when some of the material is not applicable to their specific interests or projects. Much of the writing has been adapted from other sources, such as the Centers for Disease Control and Prevention (CDC), which is a strength in terms of grounding in evidence-based practice but can also make for dry writing. The chapters are divided into sections, which is helpful for assigning only the sections that are relevant; however, the format (online only, with no page numbers) makes it difficult to narrow assignments further when only part of a section is relevant. Each section also has supplementary materials that are very helpful for instructors, including a checklist of main points, a PowerPoint, and, in some sections, examples. Unsurprisingly for a Community Tool Box, this resource focuses on the big picture of program evaluation. The example interventions span micro, mezzo, and macro practice, but evaluation is generally focused on assessing the initiative as a whole. This makes it a better fit for a program evaluation class than Slayter’s text, though the two resources also work well together, as their strengths and weaknesses generally complement each other.


A Quick, Free, Somewhat Easy-to-Read Introduction to Empirical Social Science Research Methods

by Christopher S. Horne

Link: https://scholar.utc.edu/oer/1/

License: Creative Commons Attribution-NonCommercial-NoDerivatives

Publication Date: 2018

Horne provides an excellent summary of research methods that, surprisingly, also includes discussion and examples of evaluation, making this text particularly helpful for a program evaluation class. The text is written in a fun, informal manner, as if the author were “talking to a friend,” as they put it. Although many of the concepts Horne discusses are no doubt ones that students can find intimidating, especially at the undergraduate level, the writing style and abundant examples make the text quite accessible and engaging.

Horne uses many different kinds of examples from public policy, social work, health, and education to show how both research and evaluation are done. The author also includes robust discussions of both quantitative and qualitative methods, with clear explanations of when each should be used and of the broader ontological and epistemological frameworks undergirding each (without confusing students by actually using those words). As a qualitative researcher, I particularly appreciated the explanation of qualitative data analysis and the excellent resources that Horne provides on the topic, including for evaluation.

To be clear, this text does not teach students how to do statistics or other specific analysis methods, but it does teach them what inferential statistics actually mean and how they can be applied. Horne is very successful in providing a broad overview rather than a specific how-to, which would be beyond the scope of such a short text. Particularly for a program evaluation class in which students do not necessarily have a strong background in research methods, this short text is an excellent refresher on what they should have learned in an introductory undergraduate- or graduate-level research methods course. Further, Horne helps readers become more discerning consumers of empirical research, which at the undergraduate level is an outstanding contribution.


A Quick Guide to Quantitative Research in the Social Sciences

by Christine Davies

Link: https://oercommons.org/courses/a-quick-guide-to-quantitative-research-in-the-social-sciences

License: Creative Commons Attribution-NonCommercial

This text is truly a very quick guide at only 26 double-spaced pages. Nonetheless, Davies packs a lot of information on the basics of quantitative research methods into this text in an engaging way, with many examples of the concepts presented. The guide is a brief how-to that takes readers as far as selecting statistical tests. It would be impossible, of course, to fully learn quantitative research from such a short text, but this resource provides a great introduction, overview, and refresher for program evaluation courses.