Healthcare Complaints Analysis Tool: reliability testing on a sample of Danish patient compensation claims

Søren Bie Bogh, Jonas Harder Kerring, Katrine Prisak Jakobsen, Camilla Hagemann Hilsøe, Kim Lyngby Mikkelsen, Søren Fryd Birkeland*

*Corresponding author for this work

Research output: Contribution to journal › Journal article › Research › peer-review


Abstract

Objective: The study aim was to test the intra-assessor and inter-assessor reliability of the Healthcare Complaints Analysis Tool (HCAT) for categorising the information in claim letters in a sample of Danish patient compensation claims.

Design, setting and participants: We used a random sample of 140 compensation cases completed by the Danish Patient Compensation Association that had been filed in the field of acute medicine at Danish hospitals from 2007 to 2018. Four assessors were trained in using the HCAT manual before assessing the claim letters independently.

Main outcome measures: Intra-assessor and inter-assessor reliability was tested at the domain, problem-category and subcategory levels of the HCAT. We also investigated the reliability of ratings of the level of harm and of the descriptive details contained in the claim letters.

Results: The HCAT was reliable for identifying problem categories, with reliability scores ranging from 0.55 to 0.99. Reliability was lower when coding the 'severity' of a problem, and inter-assessor reliability was generally lower than intra-assessor reliability. The categories 'quality' and 'safety' were the least reliable of the seven HCAT problem categories. Reliability at the subcategory level was generally satisfactory, with only a few subcategories showing poor reliability. Reliability was at least moderate when coding the stage of care, the complainant and the staff group involved. However, coding of the 'level of harm' was unreliable (intra-assessor reliability 0.06; inter-assessor reliability 0.29).

Conclusion: Overall, the HCAT was found to be a reliable tool for categorising problem types in patient compensation claims.
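
For context on what the reliability scores in the Results refer to, the short Python sketch below illustrates how agreement between two assessors' problem-category codings can be quantified. It is a hypothetical illustration only: the data are made up, the category labels beyond 'quality' and 'safety' are placeholders, and the abstract does not state which agreement statistic the authors used (Cohen's kappa is shown here purely as a common example).

# Hypothetical sketch: quantify agreement between two assessors' HCAT
# problem-category codings with Cohen's kappa. Neither the data nor the
# choice of statistic comes from the study itself.
from sklearn.metrics import cohen_kappa_score

# Made-up category codes assigned by two assessors to the same ten claim letters.
assessor_a = ["quality", "safety", "quality", "category_C", "category_D",
              "quality", "category_E", "safety", "quality", "category_F"]
assessor_b = ["quality", "quality", "quality", "category_C", "category_D",
              "safety", "category_E", "safety", "quality", "category_F"]

kappa = cohen_kappa_score(assessor_a, assessor_b)
print(f"Inter-assessor agreement (Cohen's kappa): {kappa:.2f}")
# Values near 1 indicate near-perfect agreement; values near 0 indicate
# agreement no better than chance.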

Original language: English
Article number: e033638
Journal: BMJ Open
Volume: 9
Issue number: 11
Number of pages: 8
ISSN: 2044-6055
DOI: 10.1136/bmjopen-2019-033638
Publication status: Published - 25 Nov 2019

Fingerprint

  • Delivery of Health Care
  • Medicine
  • Outcome Assessment (Health Care)
  • Safety

Keywords

  • Acute care
  • Health service research
  • healthcare complaints
  • patient safety
  • reliability

Cite this

Healthcare Complaints Analysis Tool: reliability testing on a sample of Danish patient compensation claims. / Bie Bogh, Søren; Harder Kerring, Jonas; Jakobsen, Katrine Prisak; Hilsøe, Camilla Hagemann; Mikkelsen, Kim Lyngby; Fryd Birkeland, Søren.

In: BMJ Open, Vol. 9, No. 11, e033638, 25.11.2019.
