Best's Review - April 2015 Edition

New Risk: Side Effects

Possible Side Effects
Two experts contend that the way humans interact with risk models is a risk in itself.
by the Best's Review Staff and Lynna Goch
As risk modeling plays larger roles in insurance, investments, corporate risk management and elsewhere, a team of academics and insurers warns that over-reliance on risk models might present even bigger risks than the perils they are designed to analyze. These experts also raise the question: The industry has learned to use the models, but have we learned to manage the risk of being increasingly reliant on them?

The interconnected nature of systems, perception biases and an inclination to favor technology-based solutions over human judgment means catastrophic scenarios that, in earlier days, might have been identified or prevented through observation and experience, could escape detection until they are more complex and lethal.
The team sounding the warning is an alliance between Amlin plc, a global reinsurance and insurance business headquartered in London with its origins in the Lloyd's market, and the University of Oxford's Future of Humanity Institute, a part of the Oxford Martin School. In an interview and a multipart report, Dr. Anders Sandberg of Oxford's Future of Humanity Institute; JB Crozet, FIA, CFA and
Key Points
The Issue: Two experts suggest that insurers invest too much confidence in the outcome of risk models.
What It Means: The over-reliance on model outcomes has led to insurers becoming less observant and working on "autopilot."
What's Next: The underwriter of the future will manage the risk model and question its results.
head of group underwriting modeling at Amlin; and their team outline their concerns. Their message: Too many risk professionals give too much credence to the output from risk models and often cannot overcome their own assumptions in assessing and acting on those results.

The pair identified a host of issues, including:

- Organizations overly dependent on models run a greater risk of being overrun by risks that are outside of those models. These disasters are sometimes termed "black swan" events.
- Users of models often invest great confidence in their tools, even though model quality may vary widely. "One reason [models] are helpful is that we humans are very bad about reasoning about low-probability outcomes and biases," Crozet said.
- Users of models can remain committed to model results even after the output becomes questionable or the overall system is headed for trouble.
- Insurers, regulators, investors, analysts and others become dependent on a limited number of modeling firms and their models, which over time creates a barrier to new entrants. "The number of model vendors is not particularly huge, so there are going to be relatively few models," Crozet said.
- Modeling firms and their models may fall victim to political, regulatory and market expectations, making them more uniform and more likely to be wrong collectively, should it turn out they were wrong. "Even if we had the perfect model, they would be problematic if everyone used the same model," Sandberg said. "But we also get them as biased because people are wrong in the same way."

The problem grows larger because of the interconnected nature of systems and risks. A failure in one area could amplify risks elsewhere, contributing to what are known as systemic risks. One of the objectives of the Amlin/University of Oxford report was to investigate what systemic risks emerge from the use of modeling.

The Amlin/Oxford report offers many definitions of systemic risk, such as this one from George G. Kaufman, Ph.D., a professor of economics and finance at Loyola University: "[It's] the risk or probability of breakdowns (losses) in an entire system as opposed to breakdowns of individual parts or components, and is evidenced by co-movements (correlation) among most or all the parts."

"As we're adding more and more modeling into our decision-making process within the insurance industry, we are also at the same time bringing in a form of systemic risk," Crozet said. "It makes the industry more efficient overall but also exposes us to rare but extreme failures in models."

One possible trigger of systemic risk is that all model users could use the models in the same manner, meaning that everyone would be wrong about the same subject at the same time. "That creates the uniformity of modeling within each organization or each insurance company and also across different companies," Crozet said. "Regulators are only comfortable with a certain range of models because they're human and there's only so much they can accept and approve."

"It's not so much the models themselves. It's rather that people are using similar models in similar ways," Sandberg said. "You get systemic risk where every part works perfectly well but the whole system fails."

In the joint report, Systemic Risk of Modeling, Amlin and Oxford authors write: "Unfortunately, the more the industry tends to rely on a single source of knowledge, the smaller the upside when it gets things right and the greater the downside when it gets things wrong (as one day it inevitably will)."

Sandberg, Crozet and other authors believe the problem stems from a human propensity to follow automation decisions, sometimes labeled the "autopilot problem." Here are the four features of this type of thinking, and Crozet and Sandberg's proposed solutions to the problem:

Loss of Situational Awareness. People who use devices such as autopilot systems in airplanes or global positioning system technology in mobile phones or cars often become obedient to the machines rather than skeptical observers. In several incidents, pilots have been too slow to take back control of their distressed planes, or drivers have followed obviously dangerous routes because they were blindly following incorrect or
out-of-date output from their devices. This phenomenon has occurred for years in financial markets, in which traders will follow the dictates of a modeling system that is generating incorrect information, or apply models developed for one product to another financial product that may not have been the subject of the original modeling.

Solution: Insurers need to have a more complete risk picture, including physical understanding of many of the things they insure, by observation, mapping and investigation. They should also be on guard for developments that may not be directly related but ultimately could affect the risks that are being modeled.

Skills Degradation. Pilots who fly for long periods with the autopilot engaged run the risk of having their skills reduced and their habits atrophy from under-use. Even drivers of modern cars who engage adaptive cruise control have been shown to react more slowly to encountering fog banks and other road hazards, the authors report. Traders who rely on automated trading systems are less likely to recognize anomalies or incorrect developments. In all cases, the operators are at greater risk because they need time to adapt once they elect to retake control from a model or automated system.

Solution: Insurers should focus on maintaining institutional memory among underwriters and senior officers who make judgments about risk. Those closest to the process and others with the most seniority in an organization tend to retain their skills longer. Organizations should focus on keeping information and skills current among middle-level employees.

Human Error: Misplaced Trust and Complacency. People love to find shortcuts and develop habits. For those reasons, regular users of models quickly relax and cede control to automated systems. The longer an automated modeling or risk detection system operates without failure, the more confidence users place in it. Several studies have shown that traders of options have followed the directions of their models long after many independent observers would have recognized a problem.

Solution: The authors report that people tend to act on habit when faced with short deadlines or high levels of stress. One solution is to work further in advance and find ways to reduce stress. Another recommendation is to be skeptical of using systems developed to analyze one scenario on a different scenario without adequate adjustment. Their example: Earthquake models developed for California were applied in New Zealand, but did not adequately account for differences in geology. The result is that insurers were surprised by the high degree of liquefaction that occurred in the New Zealand quakes.

The Tendency to Adopt Findings Generated by Models in the Absence of Other Estimates. Automated findings and models are measurable and precise, and often the issues they are addressing are unknown by other means. That doesn't mean the modeled answer is correct, but at least it appears in concrete form, rather than as a guess. Psychology has shown that people are more inclined to trust a precise number, and it also provides "cover" to those who must present their recommendations to superiors.

Solution: Insurers should be more comfortable in reporting uncertainty. They should also be careful about applying models developed for one domain to other areas.

The ultimate goal of modeling is resiliency, the pair say. "You can make a resilient system by using a lot of different models or diversifying very strongly or having code that works in several different ways and knows what it should do," Sandberg said. "But that's really expensive."

Sandberg and Crozet are not advocating against the use of models. Both expect models to continue to evolve and become more widely adopted. Their concern is that the certainty of models can make insurers less observant.

"Black swans tend to show up all the time. So instead we might want to get better at looking into models and use them in a more transparent way," Sandberg said. "I think people are recognizing this more and more. It might be that the underwriter of the future is not as much of a model user but rather managing the model, and looking into it saying 'Why is it making this effect?'"
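Sandberg's resiliency argument, that many genuinely different models beat many copies of one model, can be illustrated with a toy simulation. All numbers below are hypothetical and not taken from the Amlin/Oxford report: five firms either license near-identical models (so their errors share one industry-wide bias and averaging cancels nothing) or use independently built models (so averaging shrinks the error by roughly the square root of five).

```python
import random
import statistics

random.seed(42)

TRUE_LOSS = 100.0  # hypothetical "true" annual loss for a peril, in $M


def biased_model(shared_bias: float) -> float:
    """A model whose error is dominated by an industry-wide shared bias."""
    return TRUE_LOSS + shared_bias + random.gauss(0, 1)


def diverse_model() -> float:
    """A model whose error is independent of every other model's."""
    return TRUE_LOSS + random.gauss(0, 10)


uniform_errors, diverse_errors = [], []
for _ in range(10_000):
    # Scenario 1: five firms run near-identical models -> one shared error.
    shared_bias = random.gauss(0, 10)
    uniform_avg = statistics.mean(biased_model(shared_bias) for _ in range(5))
    uniform_errors.append(abs(uniform_avg - TRUE_LOSS))

    # Scenario 2: five genuinely different models -> errors partly cancel.
    diverse_avg = statistics.mean(diverse_model() for _ in range(5))
    diverse_errors.append(abs(diverse_avg - TRUE_LOSS))

print(f"mean abs error, uniform market: {statistics.mean(uniform_errors):.1f}")
print(f"mean abs error, diverse market: {statistics.mean(diverse_errors):.1f}")
```

In this sketch the uniform market's average error stays roughly as large as a single model's, while the diverse market's shrinks substantially, which is the "every part works perfectly well but the whole system fails" dynamic in miniature, and also why Sandberg notes that real diversification is expensive: it requires building and maintaining several independent models.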