McKay, John (2009) Significant events in general practice: issues involved in grading, reporting, analyses and peer review. MD thesis, University of Glasgow.
Full text available as: PDF (2MB)
Abstract
General practitioners (GPs) and their teams in the United Kingdom (UK) are encouraged to identify and analyse significant health care events. Additionally, there is an expectation that specific significant events should be notified to reporting and learning systems where these exist. Policy initiatives – such as clinical governance, GP appraisal and the new General Medical Services (nGMS) contract – attempt to ensure that significant event analysis (SEA) is a frequent educational activity for GP teams. The presumption from policymakers and healthcare authorities is that GP teams are demonstrating a commitment to reflect on, learn from and resolve issues which impact on the quality and safety of patient care. However, there is minimal evidence to support these assumptions, and there is no uniform mechanism to ensure consistency in the quality assurance of SEA reports.
One potential method of enhancing both the learning from and the quality of SEA is peer review. In the west of Scotland an educational model to facilitate the peer review of SEA reports has existed since 1998. However, knowledge and understanding of the role and impact of this process are limited. Given the potential of peer review of SEA to contribute to GP appraisal and the nGMS contract, a more evidence-based approach to this form of peer review needed to be developed.
The main aims of this thesis therefore are:
• To identify and explore the issues involved if the identification, analysis and reporting of significant events are to be associated with quality improvement in general practice.
• To investigate whether a peer feedback model can enhance the value of SEA so that its potential as a reflective learning technique can be maximised within the current educational and contractual requirements for GPs.
To achieve these aims a series of mixed-methods research studies was undertaken:
To examine attitudes to the identification and reporting of significant events, a postal questionnaire survey of 617 GP principals in NHS Greater Glasgow was undertaken. Of the 466 (76%) individuals who responded, 81 (18%) agreed that the reporting of such events should be mandatory, while 317 (73%) indicated that they would be selective in what they notified to a potential reporting system. Any such system was likely to be limited by the difficulty many GPs (41%) reported in determining when an event was ‘significant.’
To examine levels of agreement on the grading, analysis and reporting of standardised significant event scenarios between different west of Scotland GP groups (e.g. GP appraisers, GP registrar trainers, SEA peer reviewers), a further postal questionnaire survey was conducted. 122 GPs (77%) responded. No difference was found between the groups in the severity grading of the significant event scenarios (range of p values = 0.30-0.79). Increased grading severity was linked to each group’s willingness to analyse and report that event (p<0.05). The strong levels of agreement suggest that GPs can prioritise relevant significant events for formal analysis and reporting.
To identify the range of patient safety issues addressed, learning needs raised and actions taken by GP teams, a sample of 191 SEA reports submitted to the west of Scotland peer review model was subjected to content analysis. 48 (25%) described incidents in which patients were harmed. A further 109 reports (57%) outlined circumstances which had the potential to cause patient harm. Learning opportunities were identified in 182 reports (95%) but were often non-specific professional issues, such as the general diagnosis and management of patients or communication issues within the practice team. 154 (80%) described actions taken to improve practice systems or professional behaviour. Overall, the study provided some proxy evidence of the potential of SEA to improve healthcare quality and safety.
To improve the quality of SEA peer review, a more detailed instrument was developed and tested for aspects of its validity and reliability. Content validity was quantified by applying a content validity index and was demonstrated, with at least 8 out of 10 experts endorsing each of the instrument's 10 proposed items. Reliability testing involved numerical marking exercises of 20 SEA reports by 20 trained SEA peer reviewers. Generalisability (G) theory was used to investigate the ability of the instrument to discriminate among SEA reports. The overall instrument G coefficient was moderate to good (G=0.73), indicating that it can provide consistent information on the standard achieved by individual reports. There was moderate inter-rater reliability (G=0.64) when four raters were used to judge SEA quality. After further training of reviewers, inter-rater reliability improved to G>0.8, with a decision study indicating that two reviewers analysing the same report would give the model sufficient reliability for the purposes of formative assessment.
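For readers unfamiliar with decision studies, the projection from single-rater reliability to the reliability of an averaged score in a persons-by-raters generalisability design typically follows a Spearman-Brown-like form. The sketch below is illustrative only: the single-rater coefficient G_1 = 0.67 is an assumed value chosen for the example, not a figure reported in the thesis.

% Decision-study projection for n' raters (illustrative sketch;
% G_1 = 0.67 is an assumed, not reported, single-rater coefficient)
\[
  G_{n'} \;=\; \frac{\sigma^2_p}{\sigma^2_p + \sigma^2_{pr,e}/n'}
         \;=\; \frac{n'\,G_1}{1 + (n'-1)\,G_1},
  \qquad
  G_2 \;=\; \frac{2 \times 0.67}{1 + 0.67} \;\approx\; 0.80 .
\]

Under this assumption, averaging the scores of two reviewers would lift reliability to roughly 0.8, which illustrates how a decision study can justify the conclusion that two reviewers per report are sufficient for formative assessment.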
In a pilot study to examine the potential of NHS clinical audit specialists to give feedback on SEA reports using the updated review instrument, a comparison of the numerical grading given to reports by this group and established peer reviewers was undertaken. Both groups gave similar feedback scores when judging the reports (p=0.14), implying that audit specialists could potentially support this system.
To investigate the acceptability and educational impact associated with a peer reviewed SEA report, semi-structured interviews were undertaken with nine GPs who had participated in the model. The findings suggested that external peer feedback is acceptable to participants and enhanced the appraisal process. The feedback also imparted technical knowledge on how to analyse significant events. Suggestions to enhance the educational gain from the process were given, such as prompting reviewers to offer advice on how they would address the specific significant event described. There was disagreement over whether this type of feedback could or should be used as supporting evidence of the quality of doctors’ work to educational and regulatory authorities.
In a focus group study to explore the experiences of GP peer reviewers it was found that acting as a reviewer was perceived to be an important professional duty. Consensus on the value of feedback in improving SEA attempts by colleagues was apparent but there was disagreement and discomfort about making a “satisfactory” or an “unsatisfactory” judgement. Some concern was expressed about professional and legal obligations to colleagues and to patients seriously harmed as a result of significant events. Regular training of peer reviewers was thought to be integral to maintaining their skills.
The findings presented contribute to the limited evidence on the analysis and reporting of significant events in UK general practice. Additionally, aspects of the utility of the peer review model outlined were investigated, and the findings support its potential to enhance the application of SEA. The issues identified and the interpretation of findings could inform GPs, professional bodies and healthcare organisations of some of the strengths and limitations of SEA and the aligned educational peer review model.
Item Type: Thesis (MD)
Qualification Level: Doctoral
Keywords: significant event analysis, significant event reporting, peer review
Subjects: R Medicine > R Medicine (General)
Colleges/Schools: College of Medical Veterinary and Life Sciences > School of Health & Wellbeing
Supervisor's Name: Lough, Dr. J.R. Murray
Date of Award: 2009
Depositing User: Dr John McKay
Unique ID: glathesis:2009-1176
Copyright: Copyright of this thesis is held by the author.
Date Deposited: 17 Nov 2009
Last Modified: 10 Dec 2012 13:35
URI: https://theses.gla.ac.uk/id/eprint/1176