A career in research is riddled with failure.
The fact that there is a blog series about academia entitled #failtales
suggests that failure in research is common. If the idea of failure being part
of academia comes as a surprise to you, you are either too early on in your
career to have been beaten down by it – in which case my condolences – or you
are so hyper-successful that it has never impinged upon your exponential upward
trajectory – in which case we are now mortal enemies.
First things first: it is OK to fail. Science
isn’t easy. Science careers are not easy. We therefore need ways of dealing
with failure.
Black Swan
I’ve recently been reading Antifragile, a
book by Nassim Nicholas Taleb, a self-styled errant business philosopher. His
other book, The Black Swan, is about the surprising frequency of rare,
extreme events and the disproportionate impact they have. The book takes its
title from the fact that black swans exist even if you have never seen one:
absence of evidence is not the same as evidence of absence.
Antifragile
Antifragile takes this idea further, exploring how to deal
with these random events. The book is not an easy read, but the underpinning
message is revolutionary. In between the weird analogies about Fat Tony – a
prominent character who embodies the anti-intellectual trader who understands
risk without theories – and references to obscure Greek philosophers, Taleb
makes a case for changing our approach to extreme events.
The author splits the world into three categories:
fragile, resilient and antifragile. Fragile systems respond very poorly to
extreme events, and according to Taleb this includes high finance and research
that is directed with a particular result in mind. Resilient (or robust)
systems, including opportunistic research and privately owned businesses, can
cope with such extreme events and recover after they have occurred. Antifragile
systems actually benefit from these same extreme events. His examples of antifragile
systems include the net economic value of Silicon Valley tech start-ups and the
net democratic value of feudal city states, both of which are characterised by
agility in the face of changing conditions because of their relatively small
size. They also benefit from having a large number of starting options from
which selection can pick the fittest. Interestingly, a recent study in Nature
suggested that scientific disruption is driven by small rather than large
teams (https://www.nature.com/articles/s41586-019-0941-9).
So how does antifragility link back to scientific
careers?
Failure is the linchpin of
success.
Without some ideas failing, no ideas are going
to succeed. This is very much in line with what Jon Tennant says in his article about
the scientific record. Without better knowledge of what went before, we as a
community are doomed to repeat ourselves.
A nuance of this is that failure is required to
optimise an experiment: nothing works first time. One of my more useful
contributions to the scientific literature was a technical note about in vivo
imaging, a technically challenging process that is published as static,
beautiful images in a way that says, “this is easy, everyone can do this.” In
fact, it turns out that there are a number of ways to mess it up. We
managed to do all of them: fluorescent mouse turds, reflective ink in the pens
we used, mice that were, scientifically speaking, too hairy, not enough
anaesthetic, and so on. It would have been really helpful if someone had made
this clear earlier. Only through failing were we able to get the studies to
work and publish our own beautiful images, though it seems a shame that we failed to mention
the pain we had been through to get them.
There is of course a separate conversation about
failing fast, when it is right to drop a project if it isn’t working, and how
information about the failed project can then be shared. Alternative publishing
models and partial-paper repositories, where results from a group of studies
can be assembled, are appealing, but they aren’t fully viable yet.
Back to failure.
The aim of antifragility is to go one step
further and make setbacks beneficial. It is easier to see how this works at a
systems level. My failure and its publication help other people. Likewise, in
the context of grants, there have to be winners and losers – competition is
important. If the system is working properly, only the strongest ideas survive
and thus the field moves on. While that is great for research as a whole, what
about me and my career?
At an individual level, how do you become
antifragile? The aim is to make every situation a win-win. Taleb describes a
barbell – a bimodal distribution of risk in which most of your resources sit in
very safe bets while a small fraction goes into high-risk, high-payoff ones,
capping the cost of failure. He says, “If I have ‘nothing to lose’ then it is
all gain and I am antifragile.” For example, if you are going to invest in a
project, reduce the emotional cost and make sure the return is high. Likewise,
for antifragility in
terms of experimental design, plan your investigations so that the outcomes are
always interesting, rather than tightly focused on a hoped-for outcome.
We all fail. Grants are rejected. Papers bounce.
Experiments explode. It is clear that we want to avoid the extremely negative
impacts of failure. Part of this is moving away from fragility: not having our
happiness and mental well-being pinned to a single outcome outside of our
control. This means we need to move up the spectrum towards resilience. To
achieve this we can, for a start, cultivate a growth mindset, avoid taking
things personally, and build a good support network. Moving
beyond resilience, antifragility can be built into your career. Position
yourself so that, regardless of external events, you succeed (or at least fail
less).
This involves not tying yourself to one single idea, too narrow a research
field, a single funder: instead, hedge your bets strategically so that if one
area goes down, others may be coming up. This is not easy, but at least
thinking about it may protect us the next time failure calls.
This post first appeared on Digital Science.