Using a new interrater reliability method to test the modified Oulu Patient Classification instrument in home health care

Jill Flo, Bjørg Landmark, Ove Edward Hatlevik, Lisbeth Fagerström

Research output: Contribution to journal › Article › Scientific › peer-review


Abstract

Aim
To test the interrater reliability of the modified Oulu Patient Classification (OPCq) instrument, using a multiple parallel classification method based on oral case presentations in home health care in Norway.

Design
Reliability study.

Methods
Data were collected at two municipal home healthcare units during 2013-2014. The reliability of the modified OPCq instrument was tested using a new multiple parallel classification method. The data material consisted of 2,010 parallel classifications, which were analysed using consensus in per cent and Cohen's kappa. Cronbach's alpha was used to measure internal consistency.

Results
For the parallel classifications, consensus varied between 64.78% and 77.61%. Interrater reliability (Cohen's kappa) varied between 0.49 and 0.69, and internal consistency (Cronbach's alpha) between 0.81 and 0.94. Analysis of the raw scores showed that 27.2% of the classifications had the same points, 39.1% differed by one point, 17.9% differed by two points and 16.5% differed by three or more points.
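As an illustration of the reliability measures named in the abstract, the sketch below shows one common way to compute consensus in per cent, Cohen's kappa and Cronbach's alpha in Python. It is not the authors' analysis code, and the rater classifications and sub-area scores are hypothetical placeholders, not the study's data.

import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical parallel classifications of the same patients by two raters
rater_a = np.array([1, 2, 2, 3, 4, 2, 3, 1, 2, 4])
rater_b = np.array([1, 2, 3, 3, 4, 2, 2, 1, 2, 3])

# Consensus in per cent: share of classifications with identical points
consensus = np.mean(rater_a == rater_b) * 100

# Interrater reliability: Cohen's kappa (chance-corrected agreement)
kappa = cohen_kappa_score(rater_a, rater_b)

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (observations x items) score matrix."""
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical item-level (sub-area) scores for a set of classifications
sub_area_scores = np.array([
    [2, 3, 2, 1, 2, 3],
    [3, 3, 2, 2, 2, 4],
    [1, 2, 1, 1, 2, 2],
    [4, 3, 3, 2, 3, 4],
    [2, 2, 2, 1, 1, 2],
])

print(f"Consensus: {consensus:.2f}%")
print(f"Cohen's kappa: {kappa:.2f}")
print(f"Cronbach's alpha: {cronbach_alpha(sub_area_scores):.2f}")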
Original language: Undefined/Unknown
Pages (from-to): 167–175
Journal: Nursing Open
Volume: 5
Issue number: 2
DOIs
Publication status: Published - 2018
MoE publication type: A1 Journal article-refereed
