2023 Annual Meeting

AAPE Business Meeting & Reception

Business Meeting: Accreditation, Assessment, and Program Evaluation in Education Preparation SIG

Thu, April 13, 8:00 to 9:30am CDT (9:00 to 10:30am EDT), InterContinental Chicago Magnificent Mile, Floor: Lobby Level, Avenue East

Learn why you might want to join the Accreditation, Assessment & Program Evaluation in Ed Prep SIG. Not only will you find valuable information on how to commit to a culture of continuous improvement, you will also make contacts and friendships with people interested in this work. This work can include valuable scholarship; come discuss how to combine continuous improvement study with scholarship.

Reception: Accreditation, Assessment, and Program Evaluation in Education Preparation SIG

Fri, April 14, 6:00 to 7:30pm CDT (7:00 to 8:30pm EDT), Chicago Marriott Downtown Magnificent Mile, Floor: 6th Floor, Michigan/Michigan State

Come join the Accreditation, Assessment, and Program Evaluation (AAPE) in Educator Preparation SIG for an open reception. All are welcome to attend.

Chicago Sessions in April

Session: Quality Data Informing Continuous Improvement

Sun, April 16, 2:50 to 4:20pm CDT (3:50 to 5:20pm EDT), Chicago Marriott Downtown Magnificent Mile, Floor: 4th Floor, Armitage - Avenue Ballroom

Evaluating programs is not just a “compliance” issue, but rather an opportunity to inform and provide guidance for continuous improvement. In this session, we examine three example mechanisms for continuous improvement, including tools for collecting data and examining what differences in data might really mean.

  • Chair: Barbara L. Bales, University of Wisconsin – Milwaukee

#1: A Community of Practice Approach to Inform Continuous Improvement

  • Todd McCardle, Eastern Kentucky University
  • Nicole Manson, Eastern Kentucky University
  • Stacey J. Korson, Eastern Kentucky University

Abstract: We examine how our community of practice ensures continuous improvement in our educator preparation program (EPP) and share results from our study that asks the following question: In what ways does our community of practice ensure continuous improvement across our EPP? Using Lave and Wenger’s (1991) Communities of Practice and socialization theory lenses (Austin & McDaniels, 2006), we found that our perspectives as recent faculty members turned administrators played a significant role in making substantial, necessary curricular, clinical, and data changes to improve our EPP.

#2: Rater Agreement on the Summative Intern Evaluation: Forge Ahead or Agree to Disagree?

  • Elayne Colon, University of Florida
  • Lori Dassa, University of Florida
  • Thomas M. Dana, University of Florida

Abstract: To meet accreditation expectations, TPPs must demonstrate their candidates are evaluated using assessments that yield sound, reliable, and valid data. Institutional beliefs regarding best practices and external stakeholder demands often influence the design, implementation, and evaluation of these tools. This study extends prior research to investigate the summative intern evaluation and discrepancies in the ratings between mentors and university supervisors. How important is interrater agreement on the intern evaluation? Should we simply dismiss these differences, forge ahead to minimize disagreement, or seek to better understand how and why limited agreement might occur? In this paper, we explore methods to evaluate interrater agreement between mentors and university supervisors and the ways in which these data can inform purposeful continuous programmatic improvement.
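As a rough illustration of the kind of interrater agreement analysis described above, the sketch below computes exact percent agreement and a weighted Cohen's kappa between hypothetical mentor and university supervisor ratings. The ratings, scale, and statistics shown are assumptions for illustration, not the authors' instrument, data, or chosen methods.

```python
# Minimal sketch: agreement between two raters on an ordinal evaluation scale.
# Ratings below are hypothetical (illustrative only; not the study's data).
from sklearn.metrics import cohen_kappa_score

mentor_ratings     = [4, 3, 4, 2, 3, 4, 4, 3, 2, 4]  # hypothetical 4-point mentor ratings
supervisor_ratings = [3, 3, 4, 2, 2, 4, 3, 3, 2, 3]  # hypothetical supervisor ratings

# Exact percent agreement: proportion of interns rated identically by both raters
exact_agreement = sum(m == s for m, s in zip(mentor_ratings, supervisor_ratings)) / len(mentor_ratings)

# Quadratic-weighted kappa gives partial credit for near-misses on an ordinal scale
# and corrects for agreement expected by chance
kappa = cohen_kappa_score(mentor_ratings, supervisor_ratings, weights="quadratic")

print(f"Exact agreement: {exact_agreement:.2f}")
print(f"Quadratic-weighted kappa: {kappa:.2f}")
```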

#3: Evaluating Candidate Proficiency and Program Effectiveness Through Danielson and Interstate Teacher Assessment and Support Consortium (InTASC)–Aligned Instrument

  • Norma J. Boakes, Stockton University
  • Stacey Culleny, Stockton University

Abstract: This study examined the relationship between preservice teachers’ content knowledge, as measured by an adapted Danielson Framework instrument, and edTPA scores. Analysis of 24 preservice secondary teachers during student teaching indicates positive correlations within items of the adapted instrument and edTPA, but not between the two tools. For the adapted instrument, original Danielson components are linked to program-created indicators aligned to InTASC Standards 4 and 5. Performance on edTPA rubrics 10-12 was strongly correlated. Findings validate the adapted evaluation instrument and serve as a way to address known weaknesses in Danielson’s framework in capturing content-specific performance. Conclusions support the need for educator preparation programs to use multiple instruments to validate candidate readiness and illustrate a need for instruments that more accurately measure overall candidate competency.
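To make the correlational approach above concrete, here is a minimal sketch relating a hypothetical adapted-instrument item to a hypothetical edTPA rubric score. The variable names, values, and choice of Spearman's rho are assumptions for illustration, not the study's actual data or analysis.

```python
# Minimal sketch: correlating scores from two instruments for the same candidates.
from scipy.stats import spearmanr

# Hypothetical scores for eight candidates (illustrative only)
adapted_item_scores = [3.0, 2.5, 3.5, 2.0, 3.0, 3.5, 2.5, 3.0]  # adapted Danielson-based item
edtpa_rubric_scores = [3, 2, 4, 2, 3, 4, 3, 3]                   # edTPA rubric score

# Spearman's rho is one reasonable choice for ordinal rubric data
rho, p_value = spearmanr(adapted_item_scores, edtpa_rubric_scores)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```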

Session: Using Research to Inform Program Evaluation

Thu, April 13, 11:40am to 1:10pm CDT (12:40 to 2:10pm EDT), Chicago Marriott Downtown Magnificent Mile, Floor: 4th Floor, Belmont - Avenue Ballroom

As educational researchers, we often have opportunities to examine the practice of our candidates and completers. In this session we share three examples of how research can inform both programs and program evaluation.

  • Chairperson: Ismail Dilek, University of Iowa
  • Discussant: Francesca T. Durand, Russell Sage College

#1: Inspiring, Supporting, and Propelling Urban Educators: Understanding the Effectiveness of a University-Based Induction Support Program

  • Candice Pettaway, Georgia State University
  • Claudia Hagan, Georgia State University
  • Tilifayea Griffin, Georgia State University
  • Jocelyn Belden, Georgia State University
  • Carla Tanguay, Georgia State University
  • Joyce Many, Georgia State University
  • Ruchi Bhatnagar, Georgia State University

Abstract: COVID-19 drastically altered the clinical teacher preparation experiences of individuals who graduated in 2020-2021. To provide support for graduates of our urban teacher education program, we developed and launched a university-led, virtual Induction Support Program (ISP) for these educators as they entered the profession. Our research is designed as a Participatory Action Research project (McIntyre, 2008) conducted by stakeholders involved in the ISP. Findings indicate the ISP impacted (a) learning and development by supporting these novice urban educators as they personalized the art of teaching within the constraints of the school or curriculum, (b) mentor and mentee professional identity by promoting a holistic view of self as teacher and reducing alienation, and (c) mentee criticality as they engaged in agency.

#2: Program Completers' Implementation of Culturally Responsive Pedagogy: A Measure of Program Continuous Improvement

  • Donna Wake, University of Central Arkansas
  • Julie Workman, Arch Ford Education Service Cooperative
  • Karen Norton, Arch Ford Education Service Cooperative

Abstract: This mixed-methods study examined how novice teachers implemented culturally responsive pedagogy (CRP) to support the learning of diverse student populations. Participants (n=177) provided impact data, descriptions of teaching context, and reflections on teaching practice in support of diverse students according to a provided CRP model. Findings indicate participants were comfortable linking content to students’ experiences and exploring multiple perspectives. CRP recommendations less prevalent in the data included evidence of participants’ abilities to connect content to real-world issues, communicate high expectations to diverse students, collaborate with families, and communicate in a linguistically and culturally responsive manner. Practical implications focus on alignment of findings to recommended best practices in support of program continuous improvement in accordance with current accreditation expectations.

#3: Supercharging Alternative High Schools: An Implementation and Impact Study of the Personalized, Relevant, and Engaged for Postsecondary Intervention

  • Manuel Vazquez Cano, Education Northwest & University of Oregon
  • Nettie Legters, University of Oregon
  • Angela Roccograndi, University of Oregon

Abstract: This study examines the implementation and impact of PREP (Personalized, Relevant, and Engaged for Postsecondary)—an innovative approach to improving engagement and academic outcomes for students in alternative secondary schools. To measure implementation and impact, the authors developed a fidelity of implementation matrix and used a quasi-experimental design to estimate the impact of PREP on measures of engagement and progression in high school. Results from the study found that three key components of the PREP intervention were implemented with adequate fidelity and suggestive evidence that PREP may have had positive effects on student outcomes by mitigating declining rates of attendance and retention in the PREP schools during the 2020–21 school year—a year impacted by the COVID-19 pandemic.
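The sketch below illustrates, with made-up attendance rates, one common way a quasi-experimental impact of this kind can be framed: a difference-in-differences comparison of the decline in an intervention group against a comparison group. It is a hedged illustration under that assumption, not the authors' actual estimation model or data.

```python
# Minimal difference-in-differences style sketch (hypothetical rates; illustrative only).
pre  = {"prep": 0.88, "comparison": 0.87}   # baseline-year average attendance rates
post = {"prep": 0.84, "comparison": 0.78}   # 2020-21 average attendance rates

change_prep       = post["prep"] - pre["prep"]
change_comparison = post["comparison"] - pre["comparison"]

# Difference-in-differences estimate: how much less attendance declined in PREP schools
did_estimate = change_prep - change_comparison
print(f"Estimated mitigation of attendance decline: {did_estimate:+.2f}")
```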

Virtual Session in May

Session: Adaptations due to COVID: Assessment, Accreditation, and Program Evaluation

Thu, May 4, 8:00am to Fri, May 5, 6:00pm CDT (Thu, May 4, 9:00am to Fri, May 5, 7:00pm EDT), SIG Virtual Rooms, Accreditation, Assessment, and Program Evaluation in Education Preparation Virtual Poster Room

We have learned many things during this pandemic. In this session, the first paper looks at how the shift to providing a safe space for learning impacted novice teachers. The second paper looks at how one adaptation, remote test taking, impacts test scores.

  • Chairperson: Mary E. Yakimowski, Samford University

#1: Lessons From the Pandemic: What Novice Teachers Need in Their Own Words

  • Robyn Thomas Pitts, University of Denver
  • Jessica Lerner, University of Denver
  • Anne Holland

Abstract: The onset of the COVID-19 crisis plunged the teaching profession into a new status quo. There has been little guidance on supporting teachers to meet the demands of these new conditions – especially for novice teachers – contributing to fears of a country-wide “mass exodus of teachers” (EdWeek, 2022). Abandoning our ongoing research, we conducted an evaluation study in February 2021 of the types of support novice teachers wanted during the pandemic. We surveyed 84 recent TEP graduates using three scales and open-ended items. Findings from this study illuminate teachers’ stress levels, coping strategies, and needs and suggest some ways administrators and policymakers might support teachers to remain in the profession.

#2: Outcome and Psychometric Comparability of Live Remote–Proctored and Test Center–Proctored Credentialing Examinations

  • Michelle Y. Chen, Prometric
  • Gemma M. Cherry, Dublin City University
  • Michael O'Leary, Dublin City University
  • Li-Ann Kuan, Prometric
  • Linda Waters, Prometric

Abstract: Many testing programs use both live remote proctoring (LRP) and test center proctoring (TCP) to deliver examinations. However, as a relatively recent phenomenon in the testing industry, there is a lack of empirical evidence to support the validity and comparability of high-stakes examinations delivered via LRP. This study employs a multimethod approach to evaluate the comparability of test outcomes and psychometric properties. We compare pass rates, decision accuracy and consistency, test time, and item performance between the two delivery modes. The analyses are replicated across 14 multiple-choice tests from eight credentialing programs in healthcare, security, and energy sectors. Using a variety of statistical and psychometric approaches, we provide strong evidence of comparability for high-stakes examinations administered through TCP and LRP.
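One simple comparability check of the kind the abstract mentions is comparing pass rates across delivery modes. The sketch below does this with a two-proportion z-test on hypothetical counts; the numbers are assumptions, and the study's full psychometric analyses (decision accuracy and consistency, test time, item performance) go well beyond this single check.

```python
# Minimal sketch: comparing pass rates between delivery modes (hypothetical counts).
from statsmodels.stats.proportion import proportions_ztest

passes   = [412, 835]    # hypothetical passing candidates: [LRP, TCP]
attempts = [500, 1000]   # hypothetical total candidates:   [LRP, TCP]

# Two-proportion z-test for a difference in pass rates between delivery modes
z_stat, p_value = proportions_ztest(passes, attempts)
print(f"LRP pass rate = {passes[0]/attempts[0]:.2%}, TCP pass rate = {passes[1]/attempts[1]:.2%}")
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
```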