RoSE Committee Applications Now Open!
DATE: Thursday, 31st July, 2025
TIME: 07:00 - 20:30 UTC
PROGRAMME: https://bit.ly/rose2025_programme
LOCATION: Online
COST: Free
REGISTER: https://forms.gle/ebM2kPyYAhbvTyHAA
CONTACT: Andrew Pua - Conference Chair
🔑 Keynote Session
⚡️ Lightning Talk
💡 Idea Development Session
📝 Journal-to-Conference Talk
💬 Panel Discussion
🌟 SIG Talks
07:15 - 07:45 UTC, Session A ⚡️
By Reymund Gonowon (De La Salle University - Manila)
Abstract: My current work is the next iteration in my action research. I believe that Grade 11 senior high school students have existing attitudes towards Statistics and Probability that influence their engagement, and these attitudes may change after exposure to a gamified class. My research aims to answer: a) What are these attitudes in terms of affect, cognitive competence, difficulty, interest, value, and effort? b) Do these attitudes change when the students take a gamified course in Statistics and Probability? c) How well are the students engaged in such a course? To address these, I designed a course for a blended flexible learning modality with gamification elements. These elements were choice, progress, experience points, badges, missions, and parties and guilds. Varied tasks were prepared based on the learning styles and player types of the participants. The learning style was determined using Honey and Mumford’s Learning Style Questionnaire, and the player type was identified with Marczewski’s User Type Test. My study follows a mixed methods approach in which the quantitative data come from the students’ attitudes, measured through the Survey of Attitudes Towards Statistics (SATS-36), while the qualitative data come from the students’ experiences in the gamified course, recorded through reflection entries and a focus group discussion. In the first iteration, students in the Accounting, Business, and Management strand exhibited attitudes that shifted towards the negative and did not indicate improvement. The differences between the pre- and post-SATS-36 scores were statistically significant in four components, with effect sizes ranging from small to large. The qualitative analysis brought out themes that supported the quantitative results. In the current iteration, two strands of students will participate in the gamified Statistics and Probability course: the Humanities and Social Sciences strand and the Arts and Design strand. This iteration will implement changes to the gamified course alongside adjustments to my teaching style, such as re-evaluating the delivery method of the content and continuing the use of choice in the learning activities.
07:15 - 07:45 UTC, Session A ⚡️
By Kieran Lyon (University of Nottingham)
Abstract: Across university courses - healthcare, engineering, economics, psychology - students must learn statistics. However, students often find this difficult, relying on rote memorisation rather than genuine understanding, and frequently requiring help revising statistics for their final-year projects. Therefore, both students and educators can benefit from resources helping students to learn and revise statistics. Statistics Launchpad is one such resource.
Statistics Launchpad is a free, publicly available teaching resource designed to help students at all levels learn statistics. Consisting of six sequential toolkits, it assumes no prior knowledge and guides students through statistical concepts, beginning with descriptive statistics and building up to common statistical tests such as ANOVA and regression. Statistics Launchpad was developed in line with empirical recommendations for statistics education, focusing on the “big ideas” of statistics rather than a series of equations, and each toolkit was designed with student feedback to ensure that the material was accessible and levelled appropriately.
Each toolkit includes a combination of video and text pages, downloadable datasets allowing students to perform calculations to understand where the numbers come from, and interactive apps to simulate and visualise statistical concepts. Additionally, multiple-choice questions provide immediate feedback to help students test their understanding.
Originally created for a Psychology Conversion MSc, Statistics Launchpad is now widely used across the University of Nottingham.
This presentation will include a demonstration of this resource. The presenter will also provide a link to Statistics Launchpad and encourage its use by statistics educators across disciplines.
07:15 - 07:45 UTC, Session A ⚡️
By Umberto Noe (University of Edinburgh)
Abstract: Statistics is a core course in many degrees across both STEM and non-STEM subjects. However, many students experience significant statistics anxiety, impacting the quality of their engagement, learning, and wellbeing at university. Research shows this anxiety disproportionately affects certain groups, such as female students and students from marginalised backgrounds, making it an equality, diversity, and inclusion issue.
A team of lecturers at the University of Edinburgh co-created with students a set of evidence-informed statistics teaching guidelines applicable across subject areas to promote student confidence and reduce statistics anxiety. These guidelines, available at https://uoepsy.github.io/statanx/, emerged from a collaborative process that included: (1) a narrative review of literature to identify intervention targets, (2) focus groups with undergraduate and postgraduate students from both STEM and non-STEM subjects across Scotland, and (3) a large-scale survey across UK institutions investigating student perceptions of identified intervention targets.
Students suggested six key areas for attention. First, instructors should avoid assuming uniform prior knowledge and instead conduct pre-course surveys to understand their cohort’s backgrounds, sharing results to help students feel less isolated. Second, setting clear expectations through explicit communication about course structure and providing active scaffolding is crucial to build student confidence and to help them see the bigger picture. Third, teaching approaches should incorporate interactive methods such as hands-on activities, real-world examples, and varied teaching styles to accommodate different learning preferences. Fourth, supporting students requires attention beyond the classroom, including clear signposting to university support services, facilitation of peer support networks, and connections to external communities of practice or networks. Fifth, assessment design is linked to anxiety levels. Suggestions included moving away from high-stakes examinations towards varied assessment methods that emphasise skill development, offering choice in formats, incorporating collaborative work, and providing clear marking rubrics. Finally, implementing regular feedback mechanisms throughout courses creates a responsive learning environment where students feel valued.
A recurring theme was the importance of community-building within statistics courses. Student forums, peer support systems, and external networks help students realise they’re not alone in their challenges while providing additional learning resources. This suggests that addressing statistics anxiety requires not only pedagogical changes but also cultural shifts in statistics education.
07:15 - 07:45 UTC, Session A ⚡️
By Dean Langan (University College London)
Abstract: The Centre for Applied Statistics Courses (CASC) at University College London (UCL) has delivered statistics training to students and professionals for nearly 20 years. Responding to the growing demand for flexible, self-directed learning, CASC has adapted by converting courses into a self-paced format, featuring short videos and interactive exercises. This initiative has been evaluated using demographic data, completion rates, participant feedback, and financial metrics. The findings highlight that self-paced learning serves as a valuable complement to traditional teaching methods, with many learners even expressing a preference for this format. Feedback has been overwhelmingly positive, with praise for the clarity and thoroughness of course design. Furthermore, the external market presents a sustainable funding model, enabling continued development and offering flexibility for professionals balancing career and learning commitments. This talk will present the evaluation findings and share practical insights on designing, implementing, and sustaining self-paced courses to empower educators and institutions in transforming statistics education.
07:45 - 08:30 UTC, Session B 💡
By Margaret MacDougall (University of Edinburgh)
Abstract: Are you teaching statistics to non-specialists in a context where there is considerable disparity in learner readiness for the required statistical learning? Perhaps you recognise a mismatch between students’ prior learning, or aptitude for learning statistics, and the learning outcomes required for competent professional practice. Widening participation routes to higher education (HE) programmes can present considerable challenges to empathetic educators who encounter varying capacities among students to grasp the learning content within their courses. The tension between seeking to address this diversity in learning needs while maintaining authenticity and academic integrity should not be taken lightly. A real concern in such contexts is the temptation to oversimplify the subject matter, thus reinforcing the potential for misconceptions to be perpetuated in later years when students seek to interpret their dissertation-based or extracurricular research findings. The health sciences and Medicine are paradigm cases of disciplinary areas where misconceptions have grave consequences for stakeholders. However, there are other disciplinary areas, including psychology and economics, where oversimplification of statistical learning is unethical for similar reasons.
What can statistical educators and researchers do to find optimal solutions for addressing diverse student prior learning needs for required statistical learning in our HE courses?
This session offers an opportunity to open discussion across the disciplines to address this question. A key focus will be the possibility of using within-course questionnaires to identify student learning challenges at a granular level. We will also consider, however, the importance of having a realisable plan for putting research findings into practice. Our discussion will therefore extend to consideration of curricular opportunities and challenges in delivering prior learning for students, including in contexts where students are following heavily loaded learning programmes in their main disciplinary areas.
Outcomes arising from the discussion and beyond should therefore include:
an action plan for identifying and addressing student prior learning needs in statistics;
specification of the foreseeable challenges in addressing such prior learning needs;
development of a collaborative research group to ensure accountability in reporting of research findings;
agreement on a strategy for contrasting and comparing findings across institutions, countries, and disciplinary areas.
If you would like to explore the above research question collaboratively and contribute to these concrete outcomes, please join the discussion. This session aims to attract strong international and cross-institutional representation to ensure the robustness of educational research questions and resultant analyses of respondent data.
07:55 - 08:20 UTC, Session A 📝
By Osmar Vera (Universidad de Cádiz)
Abstract: The aim of presenting the article "How do pre-service early childhood education teachers conceive randomness?" (https://thales.cica.es/epsilon/sites/default/files/2024-07/epsilon117_002.pdf) after its publication is to broaden its impact and foster professional collaboration. Its oral dissemination allows it to reach broader and more specific audiences, such as teacher trainers, researchers, and curriculum leaders. This presentation facilitates debate and feedback, enriching future research and generating opportunities for collaboration. Furthermore, the concept of randomness is a topic that is difficult to understand (Batanero & Álvarez-Arroyo, 2024) and little explored in the training of early childhood education teachers (Franco & Alsina, 2022). Therefore, its presentation can highlight the need to include probabilistic content from an early age. In educational contexts, the study provides evidence for reflection on the weaknesses and strengths of future teachers' stochastic knowledge (Alsina, 2017; Alsina & Vásquez, 2016), which may motivate changes in curricula. Presenting the article also consolidates an emerging line of research in statistics teaching and promotes the transfer of knowledge to real-life educational contexts. In short, the subsequent presentation serves the purpose of dissemination, curriculum improvement, and strengthening the field.
The relevant details of the article are described below, based on its objectives, theoretical framework, methodology, results, and conclusions. The objective was to analyze how future early childhood teachers define randomness, what examples they propose (games and everyday phenomena), and their ability to discriminate between random and deterministic events. The theoretical framework is based on the different historical meanings of randomness and on student conceptions: causality, equiprobability, the frequency approach, subjectivity, and the von Mises conception of disorder.
The methodology was qualitative, descriptive, and exploratory. The instrument consisted of a four-item questionnaire, adapted from Hernández-Salmerón (2015), administered in November 2023. The participants were 132 second-year students in the Bachelor's Degree in Early Childhood Education, with no prior university training in mathematics.
In the results, 57.6% offered definitions with correct elements (such as unpredictability or disorder), while 28.8% held outdated conceptions (e.g., defining randomness in opposition to cause and effect). Regarding games, the most frequently cited were bingo, cards, dice, and the lottery, which demonstrates a narrow view focused on recreational chance. Regarding everyday phenomena, although games were not mentioned, many still cited the lottery; examples also included meteorological events, accidents, and school situations. In the discrimination of events, the majority correctly identified the deterministic case (93.8%), but only 53.5% recognized the weather forecast as random.
In conclusion, the study reveals a partial and incomplete understanding of randomness in part of the sample, with a tendency to restrict it to games of chance and outdated intuitive conceptions. The study underscores the need to strengthen probability training for future preschool teachers to promote adequate probability teaching from the earliest levels of education. This reflects recent recommendations and enhances statistical literacy from an early age (Alsina, 2017; Batanero & Álvarez-Arroyo, 2024).
08:30 - 09:15 UTC, Session A 💬
Chaired by: Jenn Gaskell (University of Glasgow)
Panellists: Jenn Gaskell, Ozan Evkaya (University of Edinburgh), Vinny Davies (University of Glasgow), Carol Calvert (Open University)
Abstract: The rapid evolution of artificial intelligence tools presents both a profound opportunity and a pressing challenge for statistics education. This panel will explore how generative AI is reshaping the teaching and learning of statistics.
Panellists will discuss the implications of AI on multiple fronts: from how students use tools like ChatGPT and Copilot to support learning, to how educators can adapt their teaching to embrace, rather than resist, these changes. We will explore the potential of AI tutors tailored to course materials, capable of providing contextualised, always-available feedback and additional practice questions to students. At the same time, we will address concerns about overreliance, inequity in access, and the challenge of ensuring students do not bypass learning outcomes when AI is so adept at solving assessments.
Another key focus will be on assessment design. If AI is part of the modern professional landscape, how can we authentically assess student learning in ways that incorporate AI use? The panel will share examples of innovative assessments that ask students to critique AI outputs or integrate AI-generated content into their problem-solving.
The panellists have been selected to reflect a range of experience, from curriculum developers for online Masters courses to researchers in the field of machine learning, all with unique experience of AI usage. Their combined expertise will lead to interesting discussions of how we can responsibly and creatively integrate AI into statistics education, ensuring students are equipped for the realities of data-driven careers in an AI-enabled world.
This timely session will raise foundational questions about what we teach, how we teach, and what we value as evidence of learning - questions that lead to open discussions within the statistics education community. If time allows, the discussions can also include additional technology skills we deem relevant to the modern working world, such as version control and large-scale computing.
Jenn and Ozan have organised several events on AI and statistics education, including a TALMO workshop (April 2025) and the LMS/RSS/IMA Teaching and Learning Series (July 2024 and upcoming July 2025).
09:25 - 10:15 UTC, Session A 🌟 [Artificial Intelligence in Statistics Education SIG]
Chaired by: Chelsi Slotten (Sage Publishing)
Speaker: Alun Owen (Coventry University)
Abstract: A team of colleagues who support the learning of statistics at a number of different universities across the UK and Australia has undertaken a survey to explore students’ current levels of awareness, usage, and experiences of Generative AI in supporting their learning, and specifically its potential as an aid for learning statistics. Data were collected via an online questionnaire during 2024 and 2025, and some of the initial overall findings from that survey are being shared at other conferences. In this talk we present findings on the extent to which students are using Generative AI for tasks such as choosing the right method of statistical analysis, uploading a dataset for analysis, uploading computer output and asking for help with interpreting the results, and formulating conclusions from the results. We also explore students’ experiences of using Generative AI for these tasks and what guidance they feel we, as educators, need to give them to help them use these tools more effectively to support their learning of statistics.
09:25 - 10:15 UTC, Session A 🌟 [Artificial Intelligence in Statistics Education SIG]
Chaired by: Chelsi Slotten (Sage Publishing)
Speaker: Ralitza Soultanova (Université Catholique de Louvain Saint-Louis Bruxelles)
Abstract: Artificial Intelligence (AI) is increasingly utilised in student assessment, particularly for creating and grading tests. However, current applications largely focus on multiple-choice questionnaires. In this contribution, we present an innovative case study employing AI, specifically natural language processing techniques, for the evaluation and feedback of research papers submitted by students.
We designed and implemented a structured experiment involving second-year undergraduate students enrolled in an introductory statistics course for the social sciences. Students were required to submit a 20-page research paper, including a literature review, descriptive statistics, hypothesis formulation, and testing. The aim was not to fully automate evaluations but rather to create a pragmatic workflow that meaningfully complements human judgment while addressing institutional, ethical, and pedagogical considerations.
Through iterative testing with actual student assignments and previously graded submissions, we assessed AI’s ability to deliver motivational feedback, pinpoint areas for improvement, provide rubric-aligned scoring, and detect inconsistencies or possible academic misconduct. In an initial implementation, AI reliably delivered surface-level praise and basic rubric-driven evaluations but struggled with deeper contextual judgments and accurate fraud detection without explicit guidance.
To overcome these limitations, we established a four-step workflow: (1) defining red flags based on prior subject matter expert (SME) knowledge; (2) scanning submissions for red flags, inconsistencies, and suspicious data (AI, followed by SME verification); (3) rubric-based evaluation supported by justifications and specific quotes (AI); and (4) SME intervention to finalise feedback using the AI-generated insights.
Key insights include the effectiveness of “chain-of-verification” prompting, the necessity of developing domain-specific red-flag rubrics collaboratively with faculty, and the strategic balance between delivering substantial feedback and ensuring time efficiency. Properly guided, AI should help reduce experts’ workload, enhance efficiency, and foster student engagement.
Instead of replacing human expertise, our proposed model strategically leverages AI to highlight areas where expert intervention is most beneficial. In environments where students increasingly expect detailed feedback but faculty face time constraints, AI can offer a supportive, motivational, and pedagogically appropriate "light-touch" solution. This presentation provides practical examples, detailed workflow diagrams, and valuable insights for educators aiming to responsibly and effectively integrate AI into statistics education.
10:25 - 11:40 UTC, Session A 🌟 [Statistics Anxiety SIG]
Chaired by:
Speaker: Ellen Marshall (Sheffield Hallam University)
Abstract: Research into statistics anxiety frequently emphasises the relationships between statistics anxiety, attitudes, and performance rather than focusing on strategies to mitigate anxiety related to statistics. Although numerous studies propose methods to alleviate statistics anxiety (Onwuegbuzie and Wilson, 2003; Chew and Dillon, 2014), few assess the effectiveness of these strategies. When evaluations are conducted, they often involve instructional interventions rather than approaches that students can employ to reduce anxiety or improve performance.
This observational study utilises matched survey responses from undergraduate Mathematics and Psychology students at UK institutions, gathered during or after their introductory statistics courses, along with their respective module grades. The objective is to evaluate how attitudes and learning behaviours impact the reduction of statistics anxiety from the student’s perspective, in the absence of any specific intervention. Key attitudes investigated include students’ interest in learning statistics, their sense of control over learning, and the value they place on learning statistics. Aspects of learner behaviour include self-regulated learning, peer learning, help-seeking, passive and active engagement with course content, and addressing anxiety. Based on previous findings, two key aspects of interest are help-seeking anxiety and interest in learning statistics. The relationship between anxiety and performance was found to be moderated by a student’s level of interest in learning at the start of the course, in a similar way to the psychological flexibility model and willingness to learn discussed in Sandoz et al. (2017). Anxious students can reduce the impact of statistics anxiety on performance if they are interested in the topic, but this is likely to be due to employing more effective learning behaviours.
Key questions of interest to be addressed within the study include:
Do levels of statistics anxiety change during a course without interventions?
When and how do the different dimensions of statistics anxiety impact on performance?
How do attitudes to learning impact on learning behaviour and anxiety reduction?
Which learning and coping strategies impact on anxiety reduction?
This talk will discuss key findings from the study and invite discussion of the results, including suggestions for encouraging the effective learning behaviours most beneficial for anxiety reduction.
Other methods-based discussion could include whether or not to retain students with very low starting anxiety scores, whether to adjust for starting scores, and how to unravel the impact of multiple related independent variables.
10:25 - 11:40 UTC, Session A 🌟 [Statistics Anxiety SIG]
Chaired by:
Speakers: Mahmoud Elsherif (University of Birmingham), Sheeza Mahak (Loughborough University), Stephanie Malone (Griffith University), Christopher Hand (University of Glasgow), Kinga Morsanyi (Loughborough University)
Abstract: Anxiety is commonly experienced by neurodivergent individuals participating in Higher Education (HE). Among neurodivergent adults, anxiety is argued to relate to both cognitive performance and academic achievement. However, scant research considers how anxiety in mathematics and statistics - despite its clear importance - relates to neurodivergence. We examined associations between mathematics and statistics anxieties, cognitive reflection test (CRT) performance, and neurodivergence. In total, 9,009 neurotypical and 799 neurodivergent individuals completed measures assessing anxiety and social cognition, cognitive abilities, motivation, and statistics and mathematics anxieties. Results showed that neurodivergent adults exhibited greater intuitive CRT performance and higher mathematics and statistics anxieties than neurotypical adults, while neurotypical adults scored higher on deliberate CRT responses. In neurotypical individuals, better CRT performance was associated with higher fear of negative evaluation, self-efficacy, and attitude towards mathematics, but lower mathematics anxiety, test anxiety, and creativity anxiety. These variables did not predict CRT performance in neurodivergent adults. These findings underscore the complex relationship between neurodivergence and mathematics and statistics anxieties. While neurodivergent students experience heightened anxiety, it does not necessarily hinder their analytical thinking or performance in mathematics and statistics assessments. These findings have implications for instructors within HE (and beyond) with respect to supporting neurodivergent individuals and encouraging positive wellbeing.
10:25 - 11:40 UTC, Session A 🌟 [Statistics Anxiety SIG]
Chaired by:
Speakers: Megan Barnard (University of Nottingham), Mahmoud Elsherif (University of Birmingham), Jenny Terry (University of Sussex)
Abstract: The number of students declaring a specific learning difficulty (SpLD) within Higher Education is increasing. The number of UK students declaring an SpLD increased by over 30,000 from the 2014/15 academic year up to the 2023/24 academic year (HESA, 2025). Thus, an increasing number of students may need support for their studies. We know from the literature that students with SpLDs or neurodiverse conditions, such as dyslexia, ADHD, and dyscalculia, still feel the need for additional support from Higher Education as a whole (Ryan & Brown, 2005), which many feel they are not receiving (Griffin & Pollak, 2009).
One way in which we can support students with SpLDs in statistics-based content is by trying to alleviate any increased levels of anxiety associated with the subject. We know that in the student population, the thought of studying statistical content is associated with anxiety and negative behaviours such as avoidance or procrastination. However, less is known about statistics anxiety and performance in students with additional needs. The literature suggests that conditions such as ADHD and dyslexia are associated with higher levels of maths anxiety (Canu et al., 2017; Jordan et al., 2014). Only one recent paper has investigated the relationship between SpLDs and statistics anxiety. Based on a secondary analysis of the SMARVUS dataset (Terry et al., 2023), it suggested that only students with dyscalculia experienced higher levels of statistics anxiety than their neurotypical peers.
However, this analysis only examined statistics anxiety as an overall score from the Statistics Anxiety Rating Scale (STARS). It is possible that different dimensions of statistics anxiety, such as a fear of asking for help, may be present in some SpLDs but not others. Furthermore, there is little understanding in the literature as to whether SpLDs may impact levels of attainment on statistics courses. Understanding this is also important, as it may provide further justification for the provision of support tailored to those with additional needs.
The current study will conduct a secondary data analysis on the SMARVUS dataset, comparing neurotypical and SpLD samples on statistics anxiety across all three STARS anxiety subscales, as well as on levels of attainment. The study will also assess whether any differences are impacted by the year group students belong to, to understand if any differences in anxiety or attainment change the longer a student has been in Higher Education. This presentation will discuss these findings, as well as what needs to be done to further our understanding of the needs of SpLD students taking statistics modules in Higher Education.
10:25 - 11:05 UTC, Session B 💡
Chaired by:
Speakers: Madeleine Pownall (University of Leeds), Alyssa Counsell (Toronto Metropolitan University), and Richard Harris (University of Leeds)
Abstract: Decisions around which statistical software program to use in psychology statistics courses/modules continue to be a topic of debate among instructors and across programs and departments. One challenge is that research examining students’ experiences with, or attitudes toward, statistical software is generally confined to a single course/module or institution, making it difficult to know whether the results would generalize to other contexts. The authors of this session have discussed the possibility of engaging in a large research team project that parallels Pownall et al.’s (2023) review of the evidence base for current pedagogical methods for open science and student outcomes. Consequently, we hope to use the time to discuss ideas with potential collaborators interested in joining this research project and to brainstorm appropriate learning outcomes and target populations to start narrowing the scope of the review. We aim to leave the session with a shared document of preliminary research questions and outcomes of interest, as well as to have interested collaborators exchange contact information to facilitate planning the next research steps. Note: attending the session does not imply a definitive commitment to future involvement in the project.
12:00 - 12:45 UTC, Session A 💬
Chaired by: Florian Berens (Eberhard Karls Universität Tübingen, RoSE Network Research Co-director)
Speakers: Susan A. Peters (University of Louisville, SERJ Editor), Helen MacGillivray (Queensland University of Technology, Teaching Statistics Editor), Milan Stehlik (University of Valparaiso, Research in Statistics Editor)
Abstract:
12:55 - 13:55 UTC, Session A 🔑
By Iddo Gal (University of Haifa)
Abstract: Statistics education (broadly viewed, including data science and related fields) is a rapidly expanding research field. However, not all research, even research that is technically executed very well, necessarily contributes in the same way to the development of needed knowledge and effective educational practice. The talk aims to problematize some things we take for granted regarding the what (content) and why (motivation and purpose) of research in statistics education. I will offer personal reflections on needs and opportunities in studies on, e.g., attitude change and dispositional issues, learning in online environments, statistical practices outside the classroom, systemic factors, and related topics. My goal is to illustrate some knowledge gaps, as well as to point to issues that early career scholars, and perhaps also seasoned researchers, can take into account when interpreting current research, planning new research projects, or writing research articles related to statistics education.
14:05 - 15:20 UTC, Session A 🌟 [Statistics Pedagogy SIG]
Chaired by: Alyssa Counsell (Toronto Metropolitan University)
Speaker: Angel Tan (Aston University)
Abstract: Statistics literacy is a critical skill for 21st-century employability. However, teaching it remains challenging due to students’ diverse abilities, high anxiety levels, and generally negative attitudes toward the subject. This study implemented and evaluated a brief mindfulness intervention designed to support university students who experience stress or anxiety when learning statistics. The intervention integrated mindfulness practices with online instruction on statistical concepts. Situated within the field of psychology, the research addressed two key questions: 1) Does a brief mindfulness intervention improve students’ statistics attainment, attitudes, and anxiety levels? 2) What theoretical mechanisms explain the potential benefits of mindfulness in the context of statistics learning? We employed a between-participants design involving psychology students who completed pre- and post-intervention measures on statistics and psychological constructs. Preliminary analyses reveal significant improvements in specific aspects of students’ attitudes toward statistics following the intervention. Implications for teaching practice and directions for future research are discussed.
14:05 - 15:20 UTC, Session A 🌟 [Research and Scholarship SIG]
Chaired by: Alyssa Counsell (Toronto Metropolitan University)
Speaker: Laura Bandi (Toronto Metropolitan University)
Abstract: Statistical literacy is a crucial skill in today’s data-driven world, but effectively measuring it can be a challenge. The Basic Literacy In Statistics assessment (BLIS-3; Ziegler, 2014) is a valuable tool for evaluating statistical literacy in introductory statistics students. Although comprehensive, its length, with 37 items taking 45-60 minutes to administer, limits its practical use alongside other statistics education constructs. The aim of this study was to develop a short-form version of the BLIS-3 that would retain or improve its psychometric properties while extending its usability to a broader student population. Data were collected from 414 undergraduate and graduate students with varying levels of statistical experience. Most participants were psychology majors, but students from other majors requiring applied use of statistics (e.g., sociology, business, biomedical sciences) were also included. We used Exploratory Factor Analysis (EFA) and Item Response Theory (IRT) to select 14 items capturing a unidimensional statistical literacy construct. The resulting short form outperforms the original BLIS-3 on various psychometric properties, offering a more efficient and accurate tool for assessing statistical literacy. In addition to extending usability, our findings revealed patterns in student performance, with many students struggling with items on certain topics, such as confidence intervals. This highlights potential gaps in statistical literacy that may not currently be adequately addressed in statistics courses. The short-form BLIS-3 (S-BLIS) offers educators a practical and versatile tool to assess students’ statistical literacy skills. It can be used as a non-graded assessment, a pre- and post-course measure, or within statistics education research.
By identifying gaps in statistical literacy, educators can better target areas of difficulty, ultimately improving students’ ability to critically evaluate statistical information in both academic and real-world contexts.
14:05 - 15:20 UTC, Session A 🌟 [Research and Scholarship SIG]
Chaired by: Alyssa Counsell (Toronto Metropolitan University)
Speaker: Marjorie Bond (Penn State University)
Abstract: In 2020, the Motivational Attitudes in Statistics and Data Science Education Research (MASDER) team received an NSF grant (DUE-2013392) to develop a family of six instruments measuring students’ attitudes toward statistics or data science, instructors’ attitudes toward teaching statistics or data science, and the learning environment in which the two interact. The student and instructor instruments are based on an established psychological theory of motivation, Situated Expectancy-Value Theory (SEVT; Eccles et al., 1983; Eccles & Wigfield, 2020), and were developed using a rigorous process (Whitaker, Unfried, & Bond, 2019; Unfried et al., submitted). The environment inventories are based on a model developed by the team and measure institution and course characteristics, general and discipline-specific pedagogy, the student-instructor relationship, and the environment (physical classroom or online). The team has completed a national sample based on a stratified sample of colleges and universities in the United States. Beyond the lasting impact of this family of instruments, a website has been developed that will allow instructors and researchers to administer the instruments on their own and receive a generated report based on the student data.
The eight general constructs suggested by SEVT that we measure are Expectancy, Difficulty, Goal Orientation, Self-Concept, Utility, Interest-Enjoyment, Cost/Benefit, and Attainment. Currently, the student Survey of Motivational Attitudes toward Statistics (S-SOMAS) and toward Data Science (S-SOMADS) have nine constructs with 39 items and eight constructs with 54 items, respectively. The extra S-SOMAS construct emerged when the negatively worded items clustered together to form a construct we named “Aggregated Cost.” The instructor surveys, I-SOMAS and I-SOMADS, have ten constructs with 68 and 81 items, respectively. Since the instructor surveys measure attitudes toward teaching statistics or data science, two constructs, Goal Orientation and Cost/Benefit, needed to address both general teaching and teaching in the specific field, which is why there are two additional constructs.
To capture contextual factors beyond individual attitudes, the MASDER team developed the Environment, Pedagogy, Institution, and Course inventories (EPIC-S for Statistics or EPIC-DS for Data Science). The EPIC inventories have a portion that is filled out at the beginning of the term and another part that is filled out at the end of the term.
During Summer 2025, the MASDER team plans to write journal articles and to complete the website that will allow researchers to administer our surveys with their consent forms. Presenting at the RoSE conference will provide a valuable opportunity to share our family of instruments with the statistics education community as well as receive constructive feedback before we submit any articles.
14:05 - 14:50 UTC, Session B 💡
By Victoria Celio (York University)
Abstract: Introductory statistics courses are crucial to many undergraduate programs (e.g., Psychology). Unfortunately, many students do not look forward to completing introductory statistics courses (e.g., Bourne, 2018; Murtonen et al., 2008; Onwuegbuzie, 2004). Educational applications (EAs) created through the Shiny package in R (version 1.10; Chang et al., 2015) offer a promising approach to improving students' experiences in these vital courses. These applications are user-friendly tools, accessed via a phone or computer browser, that reinforce statistical concepts through interactive functions (Chance, 2007). They can be used for various purposes, including teaching general statistics concepts, inferential analyses, and coding (Wang et al., 2021). The versatility of these applications makes it easy for instructors to incorporate them into their curriculum, regardless of their specific learning outcomes for students. EAs also provide a user-friendly medium for incorporating interactivity, simulation, and data visualization into instruction, which can benefit students' learning of statistics concepts (Forbes et al., 2014; Hazudin et al., 2017; Iten et al., 2014; Wang et al., 2021). The integration of EAs aligns with best practice guidelines from the Guidelines for Assessment and Instruction in Statistics Education, which encourage instructors to foster statistical thinking and conceptual understanding in students and to incorporate technology, active learning, assessments, and real data into instruction (GAISE, 2016). Unlike traditional methods for creating online applications, Shiny provides instructors with a method for developing EAs based on the R coding language (Doi et al., 2016; Wang et al., 2021). Many instructors are familiar with coding in R, making creating EAs much easier than with traditional methods (Wang et al., 2021).
The benefit of creating applications (versus using pre-made applications) is that instructors can tailor these applications to their specific learning outcomes for students and incorporate student feedback into the learning process (Doi et al., 2016). During the idea development session, I will briefly discuss the benefits of incorporating EAs into instruction and demonstrate how these applications can be incorporated into statistical pedagogy through applications I created for different statistics topics. During this session, I will also facilitate conversation regarding whether individuals have incorporated EAs into their instruction, their experience with EAs (from the instructor and student perspective), potential positives and negatives of EAs and feedback regarding the applications I have created.
15:30 - 15:55 UTC, Session A 📝
Chaired by:
Speaker: Sarah Rhodes (University of Manchester)
Abstract: This talk will describe the findings of Martella et al. (1) on the methodological quality of research on active learning in STEM education. The authors examined articles comparing active learning to traditional lecturing: those included in a highly cited meta-analysis by Freeman et al. (2), plus a sample of more recent active learning research articles. They coded articles according to 12 ‘internal validity controls’, aspects felt to be critical threats to the reliability of conclusions. All 260 articles examined were judged not to have addressed at least one control aspect, and 62% of studies were judged to have addressed fewer than half of the control aspects. The majority of studies either did not control for key confounders such as instructor, class size, calendar time, content, attrition, outcome measurement, or dose, or did not report their design in sufficient detail to allow the reader to assess whether these aspects were controlled for.
As one of the Co-Leads of the Active Learning SIG, I am keen to promote collaborative research on active learning in statistics. When we are looking to evaluate active learning strategies, it is critical that we adopt and promote research methods that allow us to assess effectiveness in a robust way that minimises bias and allows replication. The brief overview of the Martella et al. paper will be followed by an open discussion on ideas and examples of robust methods for evaluating active learning methods in statistics education. We will use the list of controls used by Martella et al. as a guide when considering the strengths and weaknesses of any proposed designs. Well-designed, ambitious collaborative research via the RoSE network, involving multiple educators working at multiple sites and on multiple courses, may provide opportunities to overcome many of the challenges identified in the Martella et al. paper (e.g., via cluster-randomised, cluster-randomised crossover, or stepped-wedge trials). It is hoped that discussion and networking will produce ideas that can be developed and used in prospective research to evaluate and compare active learning strategies. Ultimately this will allow statistics educators to make choices based on robust evidence.
16:05 - 17:20 UTC, Session A 🌟 [Statistics Anxiety SIG]
Chaired by: Fareena Alladin (The University of West Indies)
Speaker: Maisha Ahasan (Toronto Metropolitan University)
Abstract: Negative attitudes and statistics anxiety predict poorer course outcomes. However, few studies have explored how students understand statistics in the social sciences before taking their first class. Research in statistics education has focused on expectancies—beliefs about success—rather than expectations, which refer to what students believe will happen in the future. In statistics education research, the two terms are often used interchangeably, failing to capture misunderstandings about course content. Attempting to address negative attitudes towards statistics and statistics anxiety without directly confronting misconceptions and misunderstandings may contribute to the little change in statistics attitudes or anxiety typically observed in research studies. This study, part of an ongoing Master’s thesis, explores first-year undergraduate psychology students’ expectations before taking their first statistics course. I conducted 15 qualitative interviews and am currently analyzing the data. Preliminary findings suggest participants often lack a clear understanding of what a statistics course entails or how statistics are used in psychology. Students who are anxious about the course equate statistics with math, sparking anxiety tied to prior negative experiences. In contrast, three students who had practical experience with statistics (e.g., conducting a research study as part of a high school course) reported feeling little to no anxiety about the course and held neutral or positive attitudes toward statistics. Most students described how learning about what is expected in a statistics course helps ease their anxiety and improves preparedness to some extent. This study will deepen our understanding of students' misconceptions of statistics and inform instructors on how to address mismatches that may influence anxiety and attitudes. The current study is expected to be completed before the conference date.
16:05 - 17:20 UTC, Session A 🌟 [Statistics Anxiety SIG]
Chaired by: Fareena Alladin (The University of West Indies)
Speaker: Johanna Loock (Toronto Metropolitan University), Alyssa Counsell (Toronto Metropolitan University)
Abstract: Statistics anxiety is a well-documented barrier to student success, associated with lower course grades and with learning challenges such as procrastination, class avoidance, and academic dishonesty. Understanding the extent to which statistics anxiety is an issue for students, as well as the factors driving this anxiety, is critical for developing effective interventions and supporting students experiencing it. In this presentation, I will share results from a large-scale meta-analysis examining statistics anxiety, as measured by subscales of the commonly used Statistics Anxiety Rating Scale (STARS). The six STARS subscales capture both statistics attitudes (Worth of Statistics, Computational Self-Concept, Fear of Statistics Teachers) and statistics anxiety (Interpretation Anxiety, Fear of Asking for Help, Test and Class Anxiety). Through a comprehensive search of the literature and openly available datasets, we identified 75 data sources, capturing over 15,000 participants, with sufficient data for analysis. The prevalence of statistics anxiety and negative attitudes was assessed by summarising descriptive statistics across each identified sample using random effects models. The results from this analysis provide no evidence that high levels of statistics anxiety and negative attitudes are pervasive, suggesting that statistics anxiety may not be as widespread as previously assumed, or that the STARS may not be completely capturing this construct. Significant between-study variability was detected. A meta-regression approach was therefore employed to attempt to capture some of this heterogeneity with commonly investigated and potentially important predictors. Participant sex, age, and degree level (undergraduate/graduate) were examined, as they have frequently been tested as correlates of statistics anxiety with mixed results.
The meta-regression models also included the year of study publication, which ranged from 1995 to 2025, to identify any changes in statistics anxiety over time. Finally, scale language (English/translated) was added to the model to capture differences between responses to translated versions of the scale and the original version. The meta-regression models revealed that, depending on the STARS subscale, the tested predictors explained 0% to 24.53% of between-study variability. Participant sex was a consistent predictor, with female participants tending to report higher anxiety and more negative attitudes than male participants. Higher anxiety was typically identified on translated versions of the STARS, suggesting that linguistic and cultural factors warrant further investigation. Graduate students also consistently had lower anxiety and more positive attitudes towards statistics compared to undergraduate students, but this difference was only statistically significant for the Worth of Statistics subscale. Participant age and the year of publication had no effect on any subscale means. Overall, although some predictors are influential, a significant proportion of heterogeneity remains unexplained. This project is complete and is currently being prepared for publication; however, feedback, questions, and discussions about future avenues for this line of research are more than welcome.
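For readers unfamiliar with the random effects models mentioned above, the pooling step can be sketched with a generic DerSimonian-Laird estimator; this is an illustrative textbook implementation, not the authors' analysis pipeline (which the abstract does not specify), and the effect values and variances below are invented for demonstration.

```python
def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooled estimate.

    effects   -- per-study effect estimates (e.g., mean subscale scores)
    variances -- per-study sampling variances
    Returns (pooled_estimate, tau_squared).
    """
    k = len(effects)
    # Fixed-effect weights are inverse sampling variances
    w = [1.0 / v for v in variances]
    sw = sum(w)
    y_fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    # Cochran's Q quantifies between-study heterogeneity
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, effects))
    # Method-of-moments estimate of between-study variance tau^2
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)
    # Random-effects weights fold tau^2 into each study's variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    return pooled, tau2

# Hypothetical per-study mean STARS subscale scores and sampling variances
pooled, tau2 = random_effects_pool([2.8, 3.1, 2.5, 3.4], [0.04, 0.06, 0.05, 0.08])
```

The key design point is that the random-effects weights shrink toward equality as the between-study variance tau^2 grows, so heterogeneous studies are not dominated by the largest sample.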
16:05 - 17:20 UTC, Session A 🌟 [Statistics Anxiety SIG]
Chaired by: Fareena Alladin (The University of West Indies)
Speaker: Kristof Csaba (California State University - San Bernardino), Miranda McIntyre (California State University - San Bernardino)
Abstract: This study investigated the impact of collaborative mock exams on test anxiety and self-efficacy in undergraduate statistics courses. Statistics education often evokes anxiety due to its complexity and high-stakes testing environment, which can undermine students' self-efficacy and academic performance. While collaborative learning and practice testing have shown independent benefits, few studies have examined their combined effects as an integrated intervention.
Participants (N=46) from an introductory undergraduate statistics course were assigned by laboratory section to either a collaborative or individual condition for a mock exam activity. Students completed pre-activity measures of statistics anxiety, test-specific anxiety, and academic self-efficacy within two days before the activity. Following the mock exam, post-activity measures were administered 2-4 days later, before their actual course exam.
Results showed that participants in the collaborative condition reported significantly lower post-activity statistics anxiety (p = .016, d = 0.86) and exam-specific anxiety (p = .033, d = 0.73) compared to the individual condition. No significant difference was found in post-activity self-efficacy between conditions. Within-group analyses revealed a significant decrease in exam-specific anxiety (p < .001, d = 0.67) and increase in self-efficacy (p = .004, d = 0.56) across both conditions from pre to post-activity. However, condition by time interactions were not significant for any outcome measures, suggesting that changes in anxiety and self-efficacy did not differ significantly between collaborative and individual conditions.
Qualitative feedback indicated that students in the collaborative condition particularly valued peer learning opportunities, noting how partners could mutually fill knowledge gaps. Correlational analyses confirmed significant relationships between exam performance, self-efficacy (r = .485), and anxiety (r = -.387).
Despite not all hypotheses being supported, this study contributes to the research on collaborative learning in statistics education and suggests that mock exam activities, whether completed collaboratively or individually, can reduce anxiety and enhance self-efficacy. Future research with larger samples should explore the implementation of collaborative exams in high-stakes assessments and investigate the role of technology in facilitating productive collaboration across various learning environments.
16:05 - 16:50 UTC, Session B 💡
By Kevin Peters (Trent University), Fergal O'Hagan (Trent University)
Abstract: Null hypothesis significance testing (NHST) has been the dominant approach to analyzing research data for decades. There has also been a growing movement for researchers to include effect sizes and confidence intervals in their work – what some have called the ‘New Statistics’. One of the main reasons behind this movement is the idea that NHST and p-values encourage ‘dichotomous thinking’: a finding is statistically significant, or it is not (Cumming, 2011). Focusing on effect sizes helps researchers and students move past dichotomous thinking, may help them gauge the practical or clinical significance of research findings (Kirk, 1996; Kraemer & Kupfer, 2006), and encourages more critical engagement with research. For their part, researchers report effect sizes more frequently, but their meaningful interpretation has not kept pace (Farmus et al., 2022). Part of the reason may be that researchers have difficulty providing meaningful interpretations of their effect sizes (Fritz et al., 2012). Indeed, many of us rely on conventional benchmarks, such as those provided by Cohen (1988), while at the same time recognizing their limitations. Another reason may be the emphasis placed on p-values in teaching statistics. In the undergraduate classroom, instructors struggle to get across concepts related to probability, sometimes leaving effect size behind (Unelli et al., 2024). Effect size can be a valuable tool to help students improve critical thinking and their ability to interpret research. Focusing on higher-order abilities, teaching students how to meaningfully interpret effect sizes would be consistent with improving their epistemic cognition (how they think about knowledge and justify what they know about the world; Hofer, 2020).
The purpose of this collaborative idea session is to develop a network of undergraduate statistics instructors who are interested in working on projects related to documenting and improving pedagogical practices around the meaningful interpretation of effect sizes. Examples of possible future projects include performing a scoping/systematic review of relevant literature, developing more effective ways to teach this material, and assessing the effectiveness of how we teach students this important topic.
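To make the quantity under discussion concrete, the standardized mean difference (Cohen's d) for two independent groups can be computed from the standard textbook formula; this is a generic sketch in Python, not material from the session, and the two groups of exam scores are invented for illustration.

```python
import statistics

def cohens_d(x, y):
    """Cohen's d: mean difference standardized by the pooled SD."""
    nx, ny = len(x), len(y)
    # Pool each group's sample variance, weighted by its degrees of freedom
    sp2 = ((nx - 1) * statistics.variance(x)
           + (ny - 1) * statistics.variance(y)) / (nx + ny - 2)
    return (statistics.mean(x) - statistics.mean(y)) / sp2 ** 0.5

# Hypothetical exam scores under two teaching conditions
treatment = [78, 82, 75, 88, 90, 84]
control = [70, 74, 68, 77, 72, 75]
d = cohens_d(treatment, control)
```

Cohen's (1988) benchmarks (roughly 0.2 small, 0.5 medium, 0.8 large) give a first anchor for d, but, as the abstract stresses, meaningful interpretation ultimately depends on the research context rather than on the benchmark alone.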
17:35 - 18:15 UTC, Session A ⚡️
Chaired by:
Speakers: Fareena Alladin (The University of West Indies), Keisha Samlal (The University of West Indies)
Abstract: Statistics anxiety in general, and anxiety with using statistical software in particular, are well-recognised challenges facing both educators and students in higher education. Improving statistical literacy has the potential to decrease the presence and impact of these anxieties among university students. This, however, calls for an instructional approach that builds from an existing conceptual foundation into more practical applications of statistical knowledge. Additionally, such an approach should foster familiarity and skill-building in the use of data analysis processes. Using collaborative reflection, this study explores the experiences, thoughts and feelings of both instructors and students who were involved in the teaching and learning of a newly introduced statistical software course at a Caribbean university. In this paper, we apply Gibbs’ reflective model three-fold: to describe our intention for introducing the course to undergraduate social sciences students; to evaluate both course delivery and the use of formative assessments; and finally, to narrate our experiences of using these strategies to decrease statistics anxiety and improve student learning. To do this, we employ a student feedback survey to capture student reflections, alongside collaborative reflections from the two course instructors. Preliminary findings indicate students appreciated the opportunities for student-directed learning, both inside and outside the classroom. Furthermore, the complementary focus of the formative assignments was positively viewed as strengthening self-efficacy in working with statistical software. For us as instructors, the value of a problem-based learning approach and authentic assessments as teaching and learning strategies has been reinforced. In combination, these findings lend support to the role of reflection as part of the learning and teaching process.
Moreover, they reinforce the need for teaching to extend beyond the physical classroom and for strategies that facilitate continuous learning as a means of reducing statistics anxiety and improving statistical literacy.
17:35 - 18:15 UTC, Session A ⚡️
Chaired by:
Speakers: M. Joseph Meyer (The University of Virginia), Karen Schmidt (The University of Virginia), Xin Cynthia Tong (The University of Virginia)
Abstract: Generative AI (GenAI) usage is becoming more prevalent in our everyday lives, both in and outside the classroom. Due to the rapidly evolving nature of current and upcoming GenAI, it has become increasingly difficult for teachers and students to keep up with the newest features and power of AI, and with the potential benefits and pitfalls of using it. While GenAI can be incredibly helpful, especially for those who can benefit from its dynamic nature to improve the accessibility of course content (such as creating additional practice problems for students, dynamically creating alt text for images, and making annotations of textbooks and slides easier), there is also a common sense of confusion and apprehension toward GenAI, especially in terms of privacy, ownership, and underlying ethical concerns.
As teachers of quantitative methods in the social sciences, we have heard from our peers, and have felt ourselves, these positive and negative thoughts and feelings about AI. While guidelines for GenAI usage have been previously proposed by academic administrations and organizations and can give excellent general feedback, these guidelines have traditionally been limited in depth in order to cater to breadth across fields, and they are difficult to follow when integrating GenAI at the classroom level. One potential solution would be a measurement tool that captures specific experiences of students and teachers and uses this information to create more customizable recommendations that can be more easily applied to individual courses; however, to our knowledge, no such tool exists. We thus propose creating a multi-domain questionnaire to capture and assess current sentiment about using AI in the classroom, potential pedagogical or learning gaps that could be minimized by the use of AI, and considerations regarding field-specific learning and research that should be addressed before AI can be used effectively, such as ethical issues around using real-world datasets with AI-powered tools. More specifically, we plan to explore four facets of AI usage: Behaviors, Ethics, Emotions, and Thoughts (the BEET^AI scale). After validating this scale and using it to measure responses in statistics education and similar fields, we plan to use the results to create clearer, yet easily applicable, guidelines for teachers and students to help them maximize pedagogy and learning through the use of generative AI and navigate the increasingly complex world of large language models. These guidelines will be invaluable for assisting and leading students, researchers, and teachers on the practical and ethical implementation of GenAI usage in and around the classroom.
Our current work is still in the development stages of the BEET^AI scale. Our hope in presenting this proposal is to get preliminary sentiment and thoughts about the project and to learn from those who have researched AI in their own fields or have firsthand experience using AI in their courses, so that we can gather more information about how other teachers and scholars use AI as we continue developing this scale.
17:35 - 18:15 UTC, Session A ⚡️
Chaired by:
Speakers: Sandra Cristina Martini Rostirola (Instituto Federal Catarinense), Elisa Henning (Universidade do Estado de Santa Catarina), Ivanete Zuchi Siple (Universidade do Estado de Santa Catarina)
Abstract: The ENADE (National Student Performance Exam, from the Portuguese Exame Nacional de Desempenho dos Estudantes) is part of the National Higher Education Evaluation System (SINAES) in Brazil. Its purpose is to assess the academic performance of graduating students in undergraduate programs in relation to the curriculum guidelines of their respective programs, as well as the competencies and skills required for professional practice. ENADE is a national exam held every three years and includes items that measure specific professional competencies. This study focuses on the ENADE editions of 2014, 2017, and 2021, specifically analyzing the statistical content present in the exams taken by graduating students in Mathematics Teacher Education programs. The aim is to identify the statistical concepts addressed in these assessments. A qualitative analysis was conducted of the exam reference matrices and the items related to statistics, combinatorics, and probability. The investigation considered the required competencies and content in each item, in light of the professional profiles outlined by each edition. A total of ten items related to statistics were identified in the subject-specific sections. The 2014 ENADE included two items classified as statistical knowledge, integrated with concepts from calculus, emphasizing data analysis skills and the use of various representations—graphical, symbolic, or numerical. The 2017 edition featured three questions involving statistics, focusing on combinatorial analysis and probability, assessing problem-solving and data interpretation skills. The 2021 edition included five items covering graph interpretation, combinatorial analysis, probability, measures of central tendency, and historical aspects of probability. These items align more closely with the Basic Education curriculum, which is significant for the training of future mathematics teachers. 
The competencies assessed range from the use of multiple representations of mathematical concepts to evaluating teaching methodologies, as well as critical data analysis and understanding the historical development of mathematical knowledge. These domains were associated with professional profiles such as problem-solving creativity, commitment to continuous education, critical thinking, collaboration, and proactiveness. This trend highlights the importance of aligning initial teacher education programs with ENADE’s evaluative frameworks, as they directly influence teaching practices. From this analysis, six domains of statistical knowledge emerged: understanding curricular documents, knowledge of learning processes and statistical reasoning development, civic statistics, inferential statistics, data analysis, and descriptive statistics. These domains represent the statistical knowledge expected of future mathematics teachers, as outlined in the exam's curricular matrices. It is therefore concluded that teacher education programs should be structured to promote the development of these statistical competencies, ensuring that pre-service teachers engage with a variety of resources that contribute to building their didactic and professional repertoire.
17:35 - 18:15 UTC, Session A ⚡️
Chaired by:
Speakers: Samantha Estrada (New Mexico State University), Meera Nair (University of Texas at Tyler), LadyByrd Wong (New Mexico State University)
Abstract: Biostatistics is a branch of applied statistics focused on the health sciences. In Master of Public Health (MPH) programs, the introductory biostatistics course may be the only exposure students have to statistics and research (Sullivan et al., 2014). Researchers have emphasized the need for careful, well-considered interpretation in biostatistics (Arreola et al., 2020). People are constantly presented with data, and the interpretation of statistics can shape everyday discourse; thus, statistical literacy is an important facet of education (Engel, 2017; Engel & Ridgway, 2022; ProCivicStat Partners, 2018). A well-rounded biostatistics education is essential, as future public health researchers will likely inform public policy (Lawrence, 2016). Researchers and educators have called for special attention to the core competencies of the biostatistics course in MPH programs, encouraging discourse and improvement in the field (Sullivan et al., 2014). There is therefore a need for additional studies examining how biostatistics is taught across MPH programs, to identify trends and areas for growth.
Previous research has examined syllabi in a range of contexts, including multicultural teacher education course design (Gorski, 2009); statistics instruction in doctoral counseling programs (Ord et al., 2016); feminist critical discourse analysis of Science, Technology, Engineering, and Mathematics (STEM) syllabi (Parson, 2016); technology ethics in computing education (Fiesler et al., 2020); and the inclusion of religion and spirituality in social work elective courses (Cole, 2022). We therefore focus our study on biostatistics education, aiming to describe current trends in graduate-level biostatistics education in MPH programs and to provide guidance on curriculum design.
This study analyzes preliminary data collected from N = 45 syllabi from MPH programs across the United States. Data were collected through snowball sampling. We contacted the CAUSE listserv, reached out to administrative assistants to request syllabi, and searched online for publicly available syllabi from 2023 to 2025. The data collection process is ongoing, and we anticipate presenting our findings in the future.
We will perform a content analysis of the preliminary data, conducting a comprehensive review of biostatistics syllabi to assess the emphasis and time allocated to each topic. Additionally, we will examine the inclusion of specialized topics (e.g., logistic regression) and explore whether different types of programs vary in their coverage of these topics. Furthermore, we are interested in different approaches to teaching statistics, such as a focus on data analysis versus mechanical computations and the use of active learning strategies.
This study will provide a snapshot of the content and pedagogy used in teaching biostatistics at the master's level in MPH programs across the United States. Our conference presentation will share current findings and propose directions for future research.
17:35 - 18:15 UTC, Session A ⚡️
Chaired by:
Speakers: Maria Tackett (Duke University), Sinem Demirci (California Polytechnic State University)
Abstract: In this study, we explore how the nine principles of Universal Design for Instruction (UDI) (Scott et al., 2023) are implemented in undergraduate statistics and data science courses. We examine how faculty incorporate the UDI principles in their course design and pedagogy, and explore the motivation for their instructional choices. We use Expectancy Value Theory (EVT) (Wigfield and Eccles, 2000) as a framework for understanding motivation, specifically focusing on faculty’s expectations of success in implementing UDI principles, their beliefs about the value of the principles, and the costs and barriers to implementation. We are in the preliminary stages of this research, so the talk will focus on presenting our theoretical framework and qualitative interview methodology. There will be an opportunity for attendees to provide feedback and indicate interest in participating in the study.
17:35 - 18:00 UTC, Session B 📝
Chaired by:
Speakers: Wilma Coetzee (North-West University)
Abstract: In this talk, I would like to share snippets of my PhD journey. I used a critical systems thinking approach to identify aspects that should be improved in the education of data analytics students to enhance their employability. This South African study covered a wide range of topics, and the various topics were published in accredited journals.
Each of the topics covered provides rich opportunities for worldwide collaboration, especially for those seeking to collaborate with the Global South.
Topics covered include:
A scale to measure aspects students are struggling with, such as statistical anxiety, academic procrastination, a negative attitude toward statistics, and a lack of motivation.
Students’ views on how they would like a flourishing statistics classroom to be.
How learning management systems can be used to track academic procrastination.
Employability competencies needed by data analytics graduates.
Data practitioners’ perspectives on the data science talent gap, with a focus on the soft skills lacking.
Publications featuring these topics are provided below.
Coetzee, W. 2021. Determining the needs of introductory statistics university students: A qualitative survey study. Perspectives in Education, 39(3):197-213. http://dx.doi.org/10.18820/2519593X/pie.v39.i3.15
Coetzee, W. 2022. Measuring risks associated with students of introductory statistics: Scale development and implementation. Perspectives in Education, 40:143-158. https://doi.org/10.18820/2519593X/pie.v40.i2.11
Coetzee, W. & Goede, R. 2022. Critical systems thinking in education: A literature perspective and demonstration. Paper presented at: 66th Annual Proceedings of the International Society for the Systems Sciences, Online. https://journals.isss.org/index.php/jisss/article/view/4042/1219
Coetzee, W. & Goede, R. 2023. Making sense of students’ procrastination habits: a combined approach incorporating systems thinking and learning analytics. Paper presented at: 16th Annual International Conference of Education, Research and Innovation, Seville, Spain.
Coetzee, W. & Goede, R. 2024a. A Strategy for Designing a Research Project Using Critical Systems Heuristics: A Research Design Addressing Data Analytics Students’ Employability. Systemic Practice and Action Research, https://doi.org/10.1007/s11213-024-09676-0
Coetzee, W. & Goede, R. 2024b. Combining critical systems heuristics, action research and Habermas’s worlds to guide interdisciplinary research: A demonstration to improve the employability of data science students. [Video] Talk presented at: ITD24, https://youtu.be/1e5psAu8yfg Date of access: 5 Nov. 2024.
Coetzee, W. & Goede, R. 2024. Employability competencies needed by data analytics graduates: An analysis of online job listings. South African Journal of Higher Education, 38, 33-55. https://doi.org/10.20853/38-6-5915
Coetzee, W. & Goede, R. 2025. Investigating the data science talent gap: Data practitioners’ perspectives. SA Journal of Human Resource Management, 23. https://doi.org/10.4102/sajhrm.v23i0.2983
18:25 - 19:15 UTC, Session A 🌟 [Research and Scholarship SIG]
Chaired by:
Speakers: Leigh Harrell-Williams (The University of Memphis), Charlotte Bolch (Midwestern University)
Abstract: This presentation highlights the process and outcomes of the work conducted by the Statistics Education Synthesis group within the National Science Foundation-funded Validity Evidence for Measurement in Mathematics Education (VM²Ed) project. The project’s primary objective was to create a searchable repository of mathematics and statistics education assessments and instruments through multiple rounds of literature searches. The project is grounded in the 2014 Standards for Educational and Psychological Testing framework (American Educational Research Association, American Psychological Association & National Council on Measurement in Education). This presentation will summarize the findings of the Statistics Education Synthesis group as of the repository’s launch in 2024 and provide a brief introduction to the repository.
The process undertaken by the Statistics Education Synthesis group consisted of three distinct rounds. In the first round, the team identified relevant statistics education journal articles and conference proceedings published between 2000 and 2020 that mentioned or utilized assessments and instruments, along with chapters from the 2018 International Handbook of Research in Statistics Education (Ben-Zvi, Makar, & Garfield). From these sources, a list of statistics education tests and instruments written in English was compiled. In the second round, the team further explored articles, proceedings, and dissertations that referenced or employed these identified instruments. In the final round, validity evidence for each instrument was gathered and classified for inclusion in the repository using the 2014 Standards framework.
Of the 91 statistics education instruments initially included in the repository, approximately 75% were classified as single-use or single-user, indicating either one-time project use or team-specific use. Approximately 41% of the instruments included explicit statements regarding score interpretation, while only 30% provided clear use statements. Furthermore, approximately 46% of the instruments contained explicit claims about their validity or reliability evidence. The most frequently identified type of evidence pertained to test content or internal structure, with very few instruments offering evidence related to response processes or the consequences of testing.
This initiative represents a substantial advancement in the field of Statistics Education by enabling researchers and educators to easily identify existing instruments and assess their validity evidence. Additionally, it promotes the development of research studies focused on addressing gaps in validity evidence. In addition to the instrument search feature, the repository (http://mathedmeasures.org/) offers virtual training modules on how to use the repository, guidelines for submitting new information (e.g., adding instruments or new validity evidence), and an overview of instrument validation based on the 2014 AERA/APA/NCME Standards for Educational and Psychological Testing framework.
18:25 - 19:15 UTC, Session A 🌟 [Research and Scholarship SIG]
Chaired by:
Speakers: Aishvien Nagendran (Toronto Metropolitan University)
Abstract: Cognitive appraisals, such as perceiving a situation as a challenge or a threat, have been linked to academic outcomes. However, the role of cognitive appraisals in perceived competence in the statistics classroom has not been investigated. Existing research often attributes low perceived competence to statistics anxiety, but threat appraisals may be a more accurate explanation. The present study investigates how threat and challenge appraisals at the start of a statistics course relate to perceived cognitive competence in statistics at both the beginning and end of the course. Additionally, we examine how gender moderates the relationship between each cognitive appraisal (threat and challenge) and cognitive competence. Data came from 487 undergraduate and graduate students who completed a pre-post survey assessing attitudes toward statistics and software while enrolled in a statistics course. The sample consisted of students from 38 unique statistics courses across two large Canadian urban universities and one medium-sized university in the United States. The Canadian participants were enrolled in a statistics course within a social science program, whereas the American participants were all graduate students in an analytics program. Challenge and threat appraisals were measured using the Stress Appraisal Measure (SAM), while perceived cognitive competence was assessed using the cognitive competence subscale of the Survey of Attitudes Toward Statistics (SATS-36). Results from multiple regression analyses indicate three key findings: i) challenge and threat appraisals significantly predicted cognitive competence, with threat being the stronger predictor; ii) threat appraisals were more salient for women, while challenge appraisals were more relevant for men; and iii) perceived cognitive competence did not meaningfully change over the duration of the course.
These findings have practical implications for teaching practices, suggesting that educators may benefit from framing statistics as useful and challenging rather than a fear-inducing subject. This contribution is part of a completed, yet unpublished, undergraduate thesis.
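A minimal sketch of the kind of moderation analysis described above, using synthetic data and an invented 0/1 gender coding; no real study variables, scale scores, or data are reproduced here:

```python
# Hypothetical sketch: cognitive competence regressed on threat appraisal,
# gender, and their interaction. A nonzero interaction coefficient is what
# "gender moderates the threat-competence relationship" means in regression form.
import numpy as np

rng = np.random.default_rng(42)
n = 487
gender = rng.integers(0, 2, n).astype(float)   # 0/1 coding is an assumption
threat = rng.normal(3.0, 1.0, n)               # synthetic threat-appraisal score
# Simulate a stronger negative threat effect for one group, mirroring finding (ii).
competence = 5.0 - 0.3 * threat - 0.4 * gender * threat + rng.normal(0.0, 1.0, n)

# Design matrix: intercept, threat, gender, threat x gender interaction.
X = np.column_stack([np.ones(n), threat, gender, threat * gender])
beta, *_ = np.linalg.lstsq(X, competence, rcond=None)
intercept, b_threat, b_gender, b_interaction = beta
print(round(b_interaction, 2))  # recovers a negative interaction (moderation) effect
```

In the actual study the interaction term would be tested for significance alongside the main effects; this sketch only shows where the moderation effect lives in the model.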
18:25 - 19:10 UTC, Session B 💡
By Milo Schield (Augsburg University)
Abstract: Confounder-based statistical literacy is a new and different subject. As a topic, it embraces the 2016 ASA GAISE guidelines' emphasis on multivariable thinking and confounding. As a course, it is designed for students in non-quantitative majors: it doesn't use or require algebra or computers, and it has less than a 30% overlap with classical introductory statistics. The course focuses heavily on using English to describe and compare ratios. It uses Schield's Kendall Hunt textbook, Statistical Literacy: Critical Thinking about Everyday Statistics. As students increase their use of AI, their need for critical thinking increases. Starting this fall, this intellectual-foundations course is required of all statistics majors at the University of New Mexico and of all incoming students at New College of Florida. Students value the course: 50% agree or strongly agree that it should be required of all students for graduation (40% are neutral). The textbook and the aforementioned topics and claims are based on 100+ research papers that have received over 1,400 Google Scholar citations. This session presents over a dozen research opportunities associated with these claims, topics, textbook, and course.
This course:
Emphasizes confounding in observational studies. Students learn what it means to take something into account quantitatively by working problems using weighted averages. Students can see Simpson's paradox using a simple graphical technique. Peter Holmes (RIP; organizer of the first ICOTS in Sheffield) said that seeing this graph was the first time he understood what caused Simpson's paradox.
Emphasizes that how statistics are assembled (defined, counted, measured, summarized and presented) can influence the numerical results.
Introduces important distinctions: hard vs. soft science; crude vs. adjusted statistics; likely vs. frequent; and attributable to vs. due to, because of, or caused by. Introduces new topics: the Cornfield conditions (necessary conditions for a binary confounder to nullify or reverse a two-group association), journalistic significance, Scanlan's paradox, and the diabolical denominator. Shows how a statistically significant comparison can become statistically insignificant after controlling for a measured confounder (and vice versa).
Makes statistically controversial claims: uses Bayes' rule to argue that if the research hypothesis is more likely than not to be true, then a statistically significant result gives at least a 95% chance that the null is false. Proposes an exponential distribution with mean RR = 2 as a useful model for the distribution of unknown confounders, and uses this distribution to argue that an effect size of at least 4 is needed for a crude two-group comparison to be confounder-significant (less than 1 chance in 20 of being nullified or reversed) in a cross-sectional study.
Makes socially controversial claims, such as those involving the male-female income gap and the black-white family income gap. Statistics educators, it is argued, have no expertise in determining whether a disparity is due to systemic discrimination, but they have considerable expertise in investigating how an observationally based association can be influenced. In both of the aforementioned cases, the income disparity decreases after controlling for plausible confounders. Cases in which the crude association increases after adjustment are also needed, to avoid implying that a crude association is always larger than its adjusted counterpart.
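Two of the numerical claims above can be made concrete with a short sketch. The recovery counts are the classic kidney-stone illustration of Simpson's paradox, not data from the course, and the alpha, power, and prior values in the Bayes'-rule part are assumptions of mine chosen to make the arithmetic visible:

```python
# Part 1: Simpson's paradox via weighted averages (classic kidney-stone
# numbers, used here purely as illustration).
treatment = {"mild": (81, 87), "severe": (192, 263)}   # (recovered, total)
control = {"mild": (234, 270), "severe": (55, 80)}

def rate(recovered, total):
    return recovered / total

# Within each severity group, the treatment has the higher recovery rate...
for group in ("mild", "severe"):
    assert rate(*treatment[group]) > rate(*control[group])

# ...yet the crude (pooled) rates reverse, because each crude rate is a
# weighted average and the two arms weight the severity groups differently.
crude_t = sum(r for r, _ in treatment.values()) / sum(n for _, n in treatment.values())
crude_c = sum(r for r, _ in control.values()) / sum(n for _, n in control.values())
assert crude_t < crude_c  # Simpson's paradox: the pooled comparison flips

# Part 2: the Bayes'-rule argument, under assumed inputs: alpha = 0.05,
# power = 0.95, and prior P(H1) = 0.5 ("more likely than not").
alpha, power, p_h1 = 0.05, 0.95, 0.5
p_h0 = 1 - p_h1
p_sig = alpha * p_h0 + power * p_h1      # total probability of a significant result
p_h0_given_sig = alpha * p_h0 / p_sig    # posterior probability the null is true
print(1 - p_h0_given_sig)                # chance the null is false: 0.95
```

Note that the "at least 95%" conclusion in Part 2 depends on the assumed power; with lower power or a weaker prior, the posterior probability that the null is false drops below 0.95.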
Related publications: www.StatLit.org/Schield-Pubs.htm
Proposed pre-conference reading list:
Schield, M. (2024). GAISE 2024 Proposal. ECOTS. http://www.statlit.org/pdf/2024-Schield-ECOTS.pdf
Schield, M. (2024). Statistical Literacy: A New Course. ISLP. http://www.statlit.org/pdf/2024-Schield-ISLP.pdf
Schield, M. (2018). Confounding and Cornfield: Back to the Future. www.statlit.org/pdf/2018-Schield-ICOTS.pdf
Textbook at: https://he.kendallhunt.com/product/statistical-literacy-2023-critical-thinking-about-everyday-statistics
19:25 - 20:15 UTC, Session A 🌟 [Statistics Pedagogy SIG]
Chaired by:
Speaker: Rose-Marie Gibeau (University of Ottawa)
Abstract: Statistics education is a major issue in today's society, which relies on “Big Data” and other statistical or probabilistic information (e.g., McMahon et al., 2020; Villarejo-Ramos et al., 2021). However, teaching and learning this subject is a major challenge (Collins & Onwuegbuzie, 2007; Cousineau & Harding, 2017). One avenue of research that attempts to improve statistics education concerns the assessment of post-secondary students' understanding. The tests that currently exist seem to measure students' ability to memorize information and their procedural skills rather than their understanding of statistical concepts (Gibeau & Cousineau, under review). The aim of the present study is to develop a test that captures this understanding. The test was constructed without any formulas and with as few numbers as possible in order to capture conceptual understanding. A total of 350 psychology students took this new test, answering 26 items measuring five main concepts: Population and Sample, Probability and Chance, Variability and Dispersion, Central Tendencies, and Correlation and Shared Variance. The results suggest a three-factor model with acceptable total reliability (alpha = 0.68; ICC = 0.65 [0.59, 0.70]). This version also shows a ceiling effect and acceptable differentiation between ability levels. This study has major implications for statistics teaching, by providing a measure of conceptual understanding that does not rely on memorization or procedures.
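For readers unfamiliar with the reported reliability figure: a total reliability such as alpha = 0.68 is conventionally Cronbach's alpha. A minimal sketch on invented item scores follows; only the 350 respondents and 26 items match the abstract, and the scores themselves are synthetic:

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

rng = np.random.default_rng(0)
ability = rng.normal(size=(350, 1))                       # shared latent trait
scores = ability + rng.normal(scale=1.5, size=(350, 26))  # 26 noisy items
print(round(cronbach_alpha(scores), 2))
```

Because the synthetic items share a single latent trait, this toy data yields a higher alpha than the 0.68 reported; the sketch only shows how the statistic is assembled.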
19:25 - 20:15 UTC, Session A 🌟 [Statistics Pedagogy SIG]
Chaired by:
Speakers: Lisa Dierker (Wesleyan University), Kristin Flaming (Wesleyan University)
Abstract: The Digital Intro initiative, a National Science Foundation-funded project, is transforming large general education courses into dynamic, project-based experiences that leverage digital tools and statistics skills. This initiative provides students with hands-on research opportunities while they master disciplinary content, fostering early development of data analytic competencies and building confidence to tackle advanced methods and applied statistics coursework. This presentation will describe the initiative's goals and our work to date, offering guidance for incorporating a variety of digital tools into curricula. We will share sample projects involving web scraping and qualitative analysis of jobs data from LinkedIn, Indeed, Google, and Glassdoor using Python and Excel, giving students experience in coding, text classification, keyword extraction, and word counting. The presentation will also provide opportunities for instructors to access two new educational technology platforms developed for this initiative: 1) OpenLab, a project-sharing space for students to build portfolios of their work, and 2) MasteryZone, a memory retrieval and assessment tool designed to optimize learning. https://digitalintro.wescreates.wesleyan.edu/