Designing Evaluations with Comparison Groups: Part 1

This introductory webinar on March 19 at 12 p.m. ET equips S-STEM PIs and evaluators with essential design thinking to move from descriptive outcomes to stronger, more defensible evidence of program impact. The webinar introduces the foundations of quasi-experimental designs, explaining when and why they are used and the kinds of real-world problems they are meant to solve. It highlights common threats to validity in single- and multiple-group studies and focuses on the central challenge of selection bias: how to think carefully about what would have happened in the absence of a program or intervention. The session then walks through practical examples of comparison groups commonly used in education settings, emphasizing how thoughtful design choices strengthen conclusions and how those choices can inform evaluation planning for S-STEM PIs and their evaluators.

Speakers

Amanda Parsad

Managing Director, AIR

Amanda Parsad is a managing director at American Institutes for Research (AIR). In this role, she provides methodological support and quality assurance across AIR’s portfolio of federal education evaluation studies. With over 25 years of experience in research and evaluation, she leads, designs, and conducts experimental, quasi-experimental, and descriptive studies that inform education policy. Ms. Parsad’s research focuses on evaluating education programs and policies designed to help students prepare for, transition into, succeed in, and afford college.

For the U.S. Department of Education’s Institute of Education Sciences (IES) National Center for Education Evaluation and Regional Assistance, Ms. Parsad has designed studies to assess the effectiveness of programs aimed at preparing students for college, increasing access and success for students historically underrepresented in postsecondary education, and improving college affordability. Her work includes studying potential improvements to two prominent federal college access programs, Upward Bound and GEAR UP. She led the design and analysis of two randomized controlled trials within these programs to examine how program changes could increase college enrollment and persistence for students from low-income households. She also serves as the principal investigator on a set of rigorous studies examining how changes to federal student aid policy affect student enrollment and persistence in college.

Beyond her work with IES, Ms. Parsad has served as the director of analysis on several evaluations funded by the National Science Foundation (NSF), where she was responsible for designing quasi-experimental studies to determine the potential long-term effects of various NSF programs. These included the Faculty Early Career Development (CAREER) Program, the IGERT/GK-12 Traineeship programs, the International Research Fellowship Program (IRFP)/East Asia Pacific Study Institutes (EAPSI), the Robert Noyce Teacher Scholarship Program (Noyce), Scholarships in Science, Technology, Engineering, and Mathematics (S-STEM), and Partnerships for International Research and Education (PIRE), as well as the National Aeronautics and Space Administration’s (NASA) higher education portfolio. These studies involved constructing comparison groups using propensity score matching (PSM) and bibliometric analyses.

Joseph A. Taylor

Principal Researcher, AIR

Joseph A. Taylor is a principal researcher at AIR, where he directs intervention impact studies and syntheses of the effects of science education interventions and contributes to AIR’s work on the What Works Clearinghouse (WWC). Dr. Taylor is currently a co-principal investigator on a U.S. Department of Education-funded grant (Education Innovation and Research program) studying the effects of a promising professional development program for elementary science teachers. He is also overseeing impact analyses on a U.S. Department of Education-funded grant (Supporting Effective Educator Development program) that is likewise studying the effects of professional development on teacher and student outcomes. He is senior advisor on an NSF-funded meta-analysis of the effects of the 5E instructional model. Dr. Taylor also consults on AIR projects regarding the psychometric validation of measurement instruments. In his work, Dr. Taylor draws on his experience teaching statistics and measurement at the University of Colorado, Colorado Springs, as well as prior experience as a high school physics and math teacher.

Melissa A. Rodgers

Researcher, American Institutes for Research

Melissa A. Rodgers, PhD, is a quantitative researcher at American Institutes for Research (AIR). Rodgers is the project director of an IES-funded meta-analytic grant evaluating the impact of postsecondary nudging interventions on college enrollment, persistence, and achievement. Rodgers is also the quantitative lead on multiple educational research projects, including meta-analytic studies, evaluations of teacher preparation and training programs, and web-based student interventions to improve academic achievement in math, English, and civic education. In this role, her primary responsibilities include developing and implementing methodological strategy, managing large relational datasets, conducting analyses using advanced analytic methods, and supervising junior staff who assist with these efforts. In August 2026, she obtained her doctoral degree in Quantitative Methods in Educational Psychology from The University of Texas at Austin. Her methodological research focuses on research synthesis in education and psychology, specifically the evaluation and advancement of methodologies for handling the influence of design, selection, and publication bias.

Join the S-STEM REC Email List

Sign up to receive a monthly S-STEM REC newsletter containing resources and upcoming events!