When citing guidelines begins to replace thinking
Modern clinical practice is saturated with guidance.
Clinical practice guidelines, consensus statements, and expert recommendations are now readily available at the point of care. They distill large bodies of literature into concise, actionable steps. They reduce variation. They improve outcomes.
Used well, they are indispensable.
For a time, I found myself taking quiet pride in how fluently I could cite them.
In teaching sessions, I could quickly list recommended tests, compare competing guidelines, and point out where they differed. I could recall which societies recommended screening for hepatitis C or hepatitis B in patients with suspected immune thrombocytopenia, and which suggested measuring immunoglobulins to evaluate for common variable immunodeficiency, often without a clear sense of how strongly those recommendations were grounded in evidence or how much they mattered in practice.
It felt like expertise.
But something about it was off.
The more easily I could repeat these recommendations, the less I found myself asking why they existed in the first place. The emphasis had shifted, almost imperceptibly, from understanding to recall.
In retrospect, what unsettles me is not that I relied on guidelines.
It is that I mistook the ability to repeat them for understanding.
The appeal of guidelines
There are good reasons to defer to guidelines.
They synthesize large and complex literatures.
They are developed by experts.
They reduce unwarranted variation in care.
Many are grounded in high-quality evidence, including randomized trials.
In environments shaped by time pressure, risk, and complexity, the appeal of ready-made recommendations is obvious.
They provide clarity.
They allow us to act.
None of this is trivial.
What guidelines represent
And yet the clarity that makes guidelines appealing also conceals the interpretive work they embody.
Even the strongest recommendations do not arrive untouched.
They represent evidence that has already been interpreted.
Populations have been defined.
Outcomes have been selected.
Tradeoffs between benefit and harm have been weighed.
Uncertainty has been managed, sometimes with data, sometimes with consensus.
The result is a recommendation.
That recommendation is valuable.
But it is not the same as the underlying evidence, and it is not the same as understanding.
Borrowed authority
At some point, I realized that I was no longer using guidelines as tools.
I was using them as a source of authority, borrowing the conclusions of others rather than reasoning toward my own.
I had stopped saying, “Here is how I understand this problem,” and begun saying, “This is what the guideline recommends.”
The difference was small in language, immense in posture.
Externally, the two sound identical.
Internally, they are very different.
One reflects a model of the problem.
The other reflects the adoption of someone else’s conclusion.
The illusion of certainty
Recommendations often appear more certain than the evidence beneath them warrants.
Even widely accepted guidance may rest on indirect data, plausible reasoning, or expert consensus.
That does not make such recommendations wrong.
But it can make uncertainty less visible.
When recited without interrogation, they create the impression that the decision has already been made.
That the work is finished.
The missing step
Beyond their evidentiary limits lies another, more practical one: fit.
Even when recommendations are grounded in high-quality trials, a different question remains.
Do the conditions of the trial meaningfully resemble the situation at hand?
The trial defines a population.
The patient in front of us is an individual.
Between the two lies a step that cannot be bypassed:
judging applicability.
Are the risks the same?
Do the patient’s comorbidities alter the balance of benefit and harm?
Are the endpoints studied the outcomes that matter most here?
That step is not contained within the guideline itself.
A moment of recognition
Looking back, what troubles me is not that I learned the guidelines well.
It is that I began to derive a sense of expertise from being able to cite them.
There was a kind of performative fluency to it.
A confidence that came not from understanding, but from alignment with an external authority.
It felt, at the time, like progress.
In retrospect, it feels like something else.
A displacement.
What guidelines can and cannot do
Guidelines are extraordinarily effective at organizing knowledge and guiding action.
They can tell us what is generally recommended.
They can summarize evidence.
They can help standardize care.
But they do not eliminate the need to understand the problem.
They do not determine whether a recommendation applies in a particular case.
And they cannot replace the reasoning that connects evidence to action.
Reclaiming the work
There is a difference between knowing what a guideline recommends and understanding why a recommendation exists.
The former can be acquired quickly.
The latter requires engagement with mechanism, uncertainty, and judgment.
The alternative is not idiosyncratic medicine, where each clinician reinvents practice from scratch. It is the slower work of understanding why a recommendation exists and judging whether it applies to the patient at hand.
Guidelines can support that work.
But they cannot substitute for it.
When they are used as substitutes, they do not strengthen clinical reasoning.
They quietly displace it.
Closing
The goal is not to reject guidelines.
It is to place them correctly.
They are the product of careful synthesis and interpretation.
They deserve respect.
But they are not the source of understanding.
They tell us what has been concluded.
Understanding requires deciding whether those conclusions belong here.