Glasgow Theses Service

Good practice for formative assessment and feedback in statistics courses

Paterson, Karina (2009) Good practice for formative assessment and feedback in statistics courses. MSc(R) thesis, University of Glasgow.

Full text available as: PDF (1074 KB)

Abstract

Feedback to learners about their work is an important part of the teaching and learning process for any subject. Feedback should ensure that students are clear about where they went wrong and what they can improve in the future. Without useful feedback, students continue to make similar mistakes. However, mathematical subjects such as Statistics appear to place less emphasis on feedback than other subjects. Statistics has made steady pedagogic progress and now uses a variety of assessment methods, but producing effective feedback for these methods has not progressed to the same degree. This thesis investigates the feedback currently given in some Statistics classes for students who are studying Statistics as part of a degree in another subject, proposes a set of usable guidelines for producing effective feedback, and reports on the creation and piloting of a multiple-choice, computer-aided assessment system that provides immediate feedback to learners in Statistics courses.

The first chapter discusses the background to the subject. Key features include the Quality Assurance Agency's code of practice, which institutions should follow with regard to assessment; the National Student Survey, in which results for assessment and feedback are generally not favourable; and the various models for Statistics assessment suggested by Gal and Garfield in their book The Assessment Challenge in Statistics Education. Notably, although the whole book focuses on assessment, it says little about how to give feedback for any of the models.

Chapter two reviews the literature on feedback. This reveals that feedback can improve or impair performance depending on various factors. A summary is given of the most frequently repeated guiding principles for constructing feedback. How students use feedback, including guidelines for receiving feedback, is also discussed. The final part of the chapter looks at the advice given for constructing multiple-choice tests and the lack of guidance on feedback for multiple-choice questions.

Chapter three describes student questionnaires that were implemented in Statistics courses at the University of Glasgow to survey student attitudes to the feedback they received. A questionnaire and a follow-up questionnaire based on the guiding principles were piloted with a small group of second-year Statistics students. Before issuing the follow-up questionnaire, the way feedback was produced was changed in line with the guiding principles. When the questionnaires were compared, students were more satisfied after the intervention with how quickly feedback was returned, the amount of feedback, its detail and its overall usefulness. The questionnaire was then adjusted to suit a larger first-year class; this included adding the Rosenberg self-esteem scale to measure students' self-esteem. These results showed that the detail of the feedback given needs to be improved more than the amount. The most common reasons given for the feedback not being detailed enough were that there was no suggestion for improvement, that it was unclear where marks were lost, and that the feedback was too vague. There may also be a relationship between students' self-esteem and the attention they pay to feedback: it appears that those with lower self-esteem pay less attention to it. At the end of chapter three, a briefing document is presented that can be used to help train markers. This is a summary of the guiding principles and includes good and bad examples of feedback.

Chapter four discusses the construction of a multiple-choice testing system and the creation of specific tests for use in a level-one Statistics course. First the chapter describes the piloting of another computer-aided assessment system called Model Choice. The results of this were very positive, with all students agreeing that the system was easy to use and appreciating the immediate feedback. Next, a similar system was created for use with the Statistics class for Psychologists and Social Scientists. Multiple-choice questions were constructed for four of this course's labs, on sampling and interval estimation, multiple regression, experimental design, and categorical data. For each question, three incorrect options and a correct option were produced. Feedback was also written for each option, explaining why the chosen answer was either correct or incorrect. Students who chose a wrong answer at the first attempt were given a second attempt. The literature on constructing multiple-choice assessments was consulted throughout this process.

Chapter five focuses on piloting the computer-aided assessment system. The system was initially trialled with postgraduates and staff. The program received an excellent response, and a group discussion produced plenty of constructive ideas for improving the system. The program was then trialled with new third-year Statistics students. The final chapter summarises and discusses the results obtained to date and makes suggestions for further work.
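The structure described in chapter four (one correct option, three distractors, option-specific feedback, and a second attempt after a wrong answer) could be modelled along the following lines. This is a minimal illustrative sketch only, not the thesis's actual software; all class, field, and function names here are hypothetical.

```python
# Illustrative sketch of a multiple-choice item with per-option feedback
# and a two-attempt answering flow. Names are hypothetical, not taken
# from the system described in the thesis.

from dataclasses import dataclass


@dataclass
class Option:
    text: str        # answer text shown to the student
    correct: bool    # True only for the single correct option
    feedback: str    # explains why this option is right or wrong


@dataclass
class Question:
    stem: str
    options: list[Option]  # one correct option plus three distractors

    def attempt(self, choice: int) -> tuple[bool, str]:
        """Return whether the chosen option is correct, with its feedback."""
        opt = self.options[choice]
        return opt.correct, opt.feedback


def run_question(question: Question, get_choice) -> bool:
    """Allow up to two attempts, showing option-specific feedback each time."""
    for attempt_number in (1, 2):
        correct, feedback = question.attempt(get_choice(question))
        print(feedback)
        if correct:
            return True
        if attempt_number == 1:
            print("Incorrect - have another try.")
    return False
```

In this sketch, get_choice is any function that presents the question and returns the index of the student's chosen option; attaching the explanatory feedback to each option (rather than to the question as a whole) is one simple way to deliver the immediate, answer-specific feedback the abstract describes.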

Item Type: Thesis (MSc(R))
Qualification Level: Masters
Keywords: Assessment, Feedback
Subjects: H Social Sciences > HA Statistics
Colleges/Schools: College of Science and Engineering > School of Mathematics and Statistics > Statistics
Supervisor's Name: McColl, Prof. John
Date of Award: 2009
Depositing User: Miss Karina Paterson
Unique ID: glathesis:2009-623
Copyright: Copyright of this thesis is held by the author.
Date Deposited: 13 Mar 2009
Last Modified: 10 Dec 2012 13:20
URI: http://theses.gla.ac.uk/id/eprint/623
