BEST PRACTICES: HOW TO REVIEW THE LITERATURE
Karen E. Luh, PhD, CQE
Corporate Director, Clinical Information Analysis
Saint Joseph Health System
Objectives
‰ Understand definition and importance of EBPs
‰ Know how to identify two good sources for EBP
information
‰ Explain how quality and strength of evidence are
determined
‰ Know how to apply EBP methods to IP issues
‰ Understand how to evaluate studies
EVIDENCE-BASED
PRACTICE
Evidence-Based Practice
‰ What is it?
‰ What is driving EBP?
‰ What are the barriers?
‰ What are the steps?
Evidence-Based Practice - Definition
‰ Treatment decisions based on
‰ Best available, current evidence
‰ Clinical expertise
‰ Patient values (Source: IOM, 2001)
‰ In the absence of EBP, physicians and nurses rely on:
‰ What they learned in school
‰ Experience (what we’ve always done)
‰ Advice from co-workers
EBP helps us avoid this:
Evidence-Based Practice - Drivers
‰ IOM report Crossing the Quality Chasm (2001).
Ten rules to cross the chasm from the care patients
receive to the care patients should receive, including:
‰ Evidence-based decision making
‰ Demand for greater accountability
‰ Evidence that clinicians often not aware
of most current information
‰ Large variations in the care delivered
‰ IOM report: “illogical”
EBP – Barriers/ Strategies
‰ ENORMOUS increase in available evidence
‰ Strategies: Synopses/ Reviews
‰ Complexity of evaluating quality of
evidence
‰ Strategies: Synopses/ Reviews
‰ Knowledge gaps – no methods to identify and link clinical research
priorities to research being done
‰ Strategies: AHRQ supports research into quality of care,
guidelines, EBPs
‰ Strategies: “Smart” EMRs push info (but must be kept up-to-date)
EBP – Barriers/ Strategies
‰ Peer group - Practice patterns based on local standards
‰ Strategies: P4P, public reporting
‰ Knowledge/ skills
‰ Strategies: Continuing education
‰ Strategies: “Expert” caregiver leaders
‰ Attitudes - Resistance to “cookbook” medicine
‰ Strategies: EBP balances evidence with expertise
‰ Strategies: Clinician expertise is key to applying evidence to individual patients
‰ Patients’ preferences
‰ Strategies: EBP explicitly includes patient in decision-making
Evidence-Based Practice – Steps
Formulate the question - PICO
What is the Patient group or Problem?
What is the Intervention?
What is the Comparison?
What are the Outcomes?
Formulate the question - P
Patient group or Problem
How would I describe a group of patients similar to
mine?
To identify inpatients colonized with MRSA . . .
Formulate the question - I
Intervention
What is the intervention I am considering?
is it cost-effective to do 100% screening at
admission . . .
Formulate the question - C
Comparison
What alternative might I consider?
as compared to using risk factors to screen
selected patients . . .
Formulate the question - O
Outcome
What am I hoping to learn/ accomplish?
to identify need for extra precautions/ isolation?
PICO
P – To identify patients colonized with MRSA
I – is it cost-effective to do 100% screening at admission
C – as compared to using risk factors to screen selected patients
O – to identify need for extra precautions/ isolation?
Formulate the question – HH Compliance
Problem – Among healthcare workers in acute care facilities
Intervention – what is the best strategy
Comparison – as compared to current actions
Outcomes – to improve hand hygiene compliance?
SOURCES OF EVIDENCE
Hierarchy of Evidence
EBM Pyramid and EBM Page Generator.
(c) Copyright 2006-2011. Trustees of Dartmouth College and Yale University.
TOP OF EVIDENCE
PYRAMID (FILTERED)
Systematic Reviews
‰ Best source – pre-appraised/ prefiltered
‰ Performed by subject matter experts
‰ Overview of existing literature
‰ Unbiased search of literature
‰ Explicit rules for selecting/ evaluating studies
‰ Evidence is synthesized – may have meta-analysis
‰ Conclusion/ recommended action
‰ Not narrative reviews
Rigor of research method applied to review
Systematic Reviews - examples
‰ AHRQ National Guideline Clearinghouse
‰ http://www.guideline.gov/about/
‰ Cochrane Database of Systematic Reviews
‰ Cochrane.org
‰ Abstracts free – your library may have subscription
‰ Clinical Evidence
‰ ClinicalEvidence.bmj.com
‰ FIRSTConsult
‰ Database with summaries and reviews, including systematic reviews
‰ Centre for Evidence-Based Medicine
‰ Bandolier
‰ PubMed: Find Systematic Reviews
Appraise Systematic Reviews
‰ Well-defined topic; homogeneous set of studies?
‰ Thorough search strategy?
‰ Inclusion/ exclusion criteria clear/ unbiased?
‰ Reviewers blind to sources/ independent?
‰ Review included follow-up with study authors to fill in
missing information?
‰ Robustness of findings evaluated; possible publication bias considered?
‰ Conclusions firmly linked to evidence?
Systematic Review - HH Compliance
Interventions to improve hand hygiene compliance in patient care
(Review) (2011). Gould, D.J., Moralejo, D., Drey, N., & Chudleigh, J.H.
Key conclusions:
1. Quality of intervention studies disappointing;
only four studies met criteria
2. Alcohol-based hand rubs plus education insufficient
3. Multiple strategies, involvement of staff in planning
campaigns may be helpful
4. More research needed
Other Syntheses
‰ Databases
‰ TRIP Database (Turning Research into Practice)
‰ UpToDate
‰ DynaMed
‰ Journals
‰ Evidence-Based Nursing
‰ International Journal of Evidence-Based Healthcare
‰ Bandolier
These are just a sampling
Same evaluation rules apply
BOTTOM OF EVIDENCE
PYRAMID (UNFILTERED)
RCTs
‰ Gold standard
‰ Random assignment removes bias by randomly
distributing known and unknown variables
‰ Control group means we can be sure that outcome is
related to intervention
‰ Possible to make causal inferences
Hand Hygiene Compliance
‰ Huang (2002) from Cochrane review
‰ 100 nurses randomly assigned to educational vs control groups
‰ Direct observation of hand hygiene both before and after
intervention
‰ Findings – significant increase in compliance post
education compared to baseline and control group
Cohort Studies
‰ Observational longitudinal (over time)
‰ Compare two groups (cohorts) with differing condition/
treatment/ exposure
‰ Retrospective or prospective
‰ No random assignment – other variables related to the
group could have an effect on the outcome
‰ Not possible to make causal inferences
Case – Control Studies
‰ Compare two groups, one with vs without the outcome of
interest
‰ Retrospective – often rely on medical records, patient
recall
‰ No random assignment – other variables related to the
group could have an effect on the outcome
‰ Not possible to make causal inferences
Descriptive, Qualitative Studies or Expert Opinion
‰ Case reports or series
‰ Based on interviews, observations, focus groups
‰ No comparison groups, no hypothesis testing
‰ No generalizability
‰ Good for generating hypotheses (ideas)
Worst-case scenario
FINDING THE EVIDENCE
Finding the evidence - Medline
‰ Start with your reference librarian!
‰ Start with your reference librarian!!
‰ Database searches
‰ Boolean logic - and, or, not
‰ Truncation – use word stems (control$ vs controlled)
‰ MeSH (Medical Subject Headings) – key index words
‰ Standard filters – built into search engines
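Illustrative example only – a sketch, not the Cochrane strategy; the terms below are hypothetical choices and exact syntax varies by database (PubMed truncates with *, Ovid with $), so confirm with your librarian. A PubMed-style search history combining Boolean logic, truncation, and MeSH might look like:
  #1  "hand hygiene" OR handwash* OR "Hand Disinfection"[MeSH]
  #2  complian* OR adheren* OR "Guideline Adherence"[MeSH]
  #3  #1 AND #2
  #4  #3 NOT editorial[pt]
Line #4 shows NOT used to drop a publication type; apply such exclusions cautiously so relevant studies are not lost.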
Hand Hygiene Compliance Search
From Cochrane review:
EVALUATING THE
EVIDENCE
Evaluating the evidence
‰ Source – differentiate scientific evidence, opinion, and marketing
‰ Strength – hierarchy of evidence pyramid
‰ Quality – Is the study valid? Well done?
Even in a peer-reviewed journal, quality is not guaranteed
‰ Applicability to your population
Strength
‰ Hierarchy
‰ All things being equal, consider evidence hierarchy
‰ Design AND execution
‰ Well-done observational study trumps poorly executed RCT
‰ Replication or consistency
‰ Has this been found more than once?
Quality
‰ Randomization – best if offsite
‰ Attrition – are patients lost to follow-up? This could
undermine randomization
‰ Double blinding – group assignment unknown to both
patient and individual(s) assessing outcome
‰ Internal validity – can we be sure that intervention under
study is causing outcome and not some other variable?
Were groups treated identically EXCEPT for intervention?
‰ External validity – can these results be generalized
beyond study sample?
Integrating evidence with clinical
expertise, patient preference
‰ Efficacy versus effectiveness
‰ Experts working in controlled environment vs “real-world” application
‰ Do these results apply to my patient?
‰ Generalizability plus patient choice
‰ Does this intervention benefit this patient?
‰ Clinical expertise needed
‰ Is this intervention in keeping with this patient’s values
and preferences?
‰ Patient-centered care mandates centrality of patient choice
Summary
EBP is a shift:
From reliance on experts
To integration of evidence WITH clinical expertise WITH patient’s
circumstances and values
EBP Resources
Systematic reviews
EBP journals
Reference librarians
NOT CONVINCED WE
NEED EBP?
Semmelweis (1847)
Post-childbirth infection and handwashing
Current Practice in Infection Prevention –
What does the literature tell us?
‰ Surveillance and public reporting of central line-
associated bloodstream infections
‰ Surveillance vs administrative coding data
‰ Nursing and physician attire as possible source of
hospital-acquired infections
‰ Other examples
Quality of Surveillance for CLA-BSI
Quality of traditional surveillance for public reporting of
nosocomial bloodstream infection rates. Lin MY, Hota B, Khan
YM, Woeltje KF, Borlawsky TB et al. (2010). JAMA, 304(18): 2035-2041.
‰ Introduction
‰ lays out rationale
‰ closes with clear statement of research questions:
‰ measure relationship between traditional surveillance and computer
algorithm for identifying CLA-BSI
‰ determine if there is heterogeneity in this relationship among 4 institutions
studied
Research questions and rationale for those questions very clear.
Quality of Surveillance for CLA-BSI
Methods
‰ Sample clearly described – 20 ICUs in 4 academic medical centers
‰ Time frame, methods for obtaining sample described
‰ IPs blinded to participation
‰ Computer algorithm described; reference to publication
‰ Discloses power analysis and statistical tools and methods
Good, clear description; it would be possible to replicate
this study based on what we’ve learned in the methods section.
WHAT KIND OF STUDY IS
THIS?
Cohort Studies
‰ Observational longitudinal (over time)
‰ Compare two groups (cohorts) with differing condition/
treatment/ exposure
‰ Retrospective or prospective
‰ No random assignment – other variables related to the
group could have an effect on the outcome
‰ Not possible to make causal inferences
Quality of Surveillance for CLA-BSI
Results
‰ statistics with measures of spread
‰ analyses clearly relate to research questions:
‰ measure relationship between traditional surveillance and
computer algorithm for identifying CLA-BSI: weak relationship
Spearman ρ (rho) = 0.34
‰ determine if there is heterogeneity in this relationship among
4 institutions studied: ρ ranged from 0.83 to nonsignificant
0.10.
Results sufficiently detailed to allow us to determine
whether we agree with the analyses and the findings;
analyses address research questions.
Quality of Surveillance for CLA-BSI
Discussion or Comment
‰ Relates results to original research questions
‰ Relates findings to other research
‰ Discusses possible bases for findings
(Remember they cannot make causal statements as this isn’t RCT)
‰ Discusses limitations and ideas for future research
Authors avoid over-generalizing their results – stay within bounds of
their methods.
Administrative data compared to NHSN
criteria for identifying HAIs
Administrative coding data, compared with CDC/NHSN criteria,
are poor indicators of health care-associated infections. Stevenson
KB, Khan YM, Dickman J, Gillenwater T, Kulich P, et al. (2008). Amer J Infect
Control, 36: 155-164.
‰ Introduction
‰ Lays out rationale – embeds issue in current P4P/ public reporting
requirements
‰ Closes with clear statement of research question:
‰ “compare directly the accuracy of HAIs identified by ICD-9-CM . . . codes to those
identified by traditional epidemiologic methods . . . .”
‰ Clearly lays out concern that coding data are overly sensitive; does NOT
address issue of whether NHSN criteria are under-sensitive
Doesn’t adequately address arguments for use of coding data
No star
Administrative data compared to NHSN
Methods
‰ Sample clearly described – 1 academic complex
‰ Time frame, methods for obtaining sample described
‰ IPs blinded for initial surveillance since study done retrospectively
‰ Experience/ reliability of surveillance personnel not addressed
‰ Used codes identified by independent sources – avoids investigator
bias in code selection
‰ Methods for review of discordant cases unclear – IPs were NOT
blinded during this review – possible source of bias
Might be possible to replicate this study based on what we’ve
learned in the methods section, with some reservations
No star
Administrative data compared to NHSN
Results
‰ Statistics with measures of spread
‰ Analyses relate to research question:
‰ Discordant cases with coding-identified HAIs reviewed by IPs to determine if
infections also matched NHSN definitions
‰ Subsample of discordant CLA-BSIs reviewed (50/ 485) but results
extrapolated to total sample
‰ Accuracy computed AFTER adjustment
‰ Results poorly described – difficult to determine actual findings and
therefore difficult to evaluate
Results somewhat hard to follow – actual numbers of cases buried in
text – tables with counts laid out as described in methods would
have been helpful.
No star
Administrative data compared to NHSN
Discussion or Comment
‰ Relates results to original research questions
‰ Questionable evaluation of finding that ICD-9 codes flagged 59 HAIs
missed by IPs
“only a small number of these discordant cases (59/879, 7%) were subsequently
found to meet CDC/NHSN criteria, suggesting that the case-finding methods
outlined by the CDC/NHSN are reasonably robust.” (p. 161)
BUT initial surveillance identified only 318 HAIs, so the proportion of confirmed HAIs missed by IPs is 59/(318 + 59) = 15.7%
‰ Relates findings to other research
‰ Discusses possible bases for findings
(Remember they cannot make causal statements as this isn’t RCT)
‰ Discusses limitations and ideas for future research
Some appearance of bias in interpretation of results
No star
QUESTIONS?
Contact information:
Karen E. Luh
[email protected]