Originally published in Nature: https://www.nature.com/articles/d41586-023-00528-w
And... the first Nature article to openly use AI in its writing
ChatGPT may not yet give us sparkling prose, and delivered only “generic fluff” when John Tregoning asked it to write grant-application text. But it can free scientists up to focus on more stimulating writing tasks, he says.
Many of us have already been trying out ChatGPT. If you’ve checked science social media recently, you’ve probably seen plenty of its output. In common with many other researchers, I worry about AI replacing me. I’m a vaccine researcher and spend much of my time writing grants, papers and articles about science careers, so I set it the task of writing an opinion piece about the use of AI in grant writing. In less than a minute, ChatGPT produced the following:
In my opinion, ChatGPT has the potential to revolutionize the process of writing scientific grants. Traditionally, writing a scientific grant has been a time-consuming and often frustrating process. Researchers spend countless hours crafting proposals, only to have them rejected by funding agencies. This can be demoralizing, and it can also be a barrier to progress in scientific research. ChatGPT has the potential to change all of this. By using natural language processing and machine learning, ChatGPT can help researchers write more compelling and effective grant proposals. It can also help reviewers assess grant proposals more efficiently, allowing for a more efficient and fair grant review process. Of course, ChatGPT is not a magic solution to all of the challenges facing scientific research. But it has the potential to make a real difference, and it is worth exploring as a tool for improving the grant writing and review process.
Which, as the journal editor says, is pretty good for an AI, but not good enough to consider publishing as a standalone piece of writing. It is, however, a massive step forward. The simplicity and utility of platforms such as ChatGPT mean that we’ll see them quickly drifting into everyday use; Microsoft is already working closely with OpenAI, the company that developed ChatGPT, and you may already be using an AI platform to help you with some writing tasks. At the time of writing (February 2023), ChatGPT was freely available, but things are moving extremely fast, so it is hard to know where they are heading.
The text here isn’t my first use of AI for science writing. My first real use of AI chatbots (beyond asking one to write lyrics to a song called ‘Eggy Eggy Woof Woof’ for my daughter) was when I got fed up with writing one part of a grant. I had been asked to explain the world-changing ‘impact’ my science would have, if I were lucky enough to receive funding.
So I asked ChatGPT: ‘What impact could vaccine research have?’ and got 250 words of generic fluff. It suggested reducing the burden of disease, saving lives, improving global health and supporting economic development. None of what it said was in any way original or enormously surprising, but it was an excellent starting point that I could then flesh out with specifics.
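If you want to script that sort of prompt rather than paste it into the chat window, something like the following works. This is a minimal sketch, assuming OpenAI’s Python client (the v1 interface, which postdates this article) and an OPENAI_API_KEY set in your environment; the model name is an illustrative placeholder.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # The same question, sent programmatically instead of via the chat window.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative placeholder; use any model you have access to
        messages=[{"role": "user",
                   "content": "What impact could vaccine research have?"}],
    )
    print(response.choices[0].message.content)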
So far I’ve found AI to be enormously helpful at generating
a lot of the low-level filler documents that otherwise take up time. For
example, it can write easy, generic,
on-the-one-hand-this-but-on-the-other-that statements about internet usage
policy or data
management.
However, as it’s still early days, much more thought needs to go into exploring what using AI means with regard to plagiarism and the attribution of credit; one issue is that the source documents on which it was trained are owned by someone.
One example where
I have found it extremely useful is writing multiple choice questions. I found
it relatively easy to write the question and the right answer, but coming up
with 3 plausibly wrong answers is tricky. I found that if I prompted ChatGPT with
the following: ‘write a multi choice answer about ‘topic of interest’
with 4 answers, and not using all of the above as an option’ it came up with
good wrong answers. This was incredibly helpful. And as someone suggested on
twitter – if students are going to use ChatGPT to help them write answers, we
might as well use it to set the questions!
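Wrapped up as a small helper, the same idea looks something like this. Again, a sketch assuming the OpenAI Python client, with the model name as a placeholder:

    from openai import OpenAI

    client = OpenAI()

    def draft_mcq(topic: str) -> str:
        """Ask the model for one question, the correct answer and three
        plausible distractors, explicitly banning 'all of the above'."""
        prompt = (
            f"Write a multiple-choice question about {topic} with 4 answers, "
            "marking which one is correct, and not using 'all of the above' "
            "as an option."
        )
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative placeholder
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    print(draft_mcq("influenza vaccination"))

The constraint in the prompt is the useful part: my own prompt had to ban ‘all of the above’ explicitly to get four genuinely distinct options.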
As another example, I entered the prompt ‘Write a diversity statement for a grant application’. The output starts:

Diversity is a core value at our organization and we are committed to promoting and maintaining an inclusive and equitable environment for all individuals.
And it continues for 169 words that are generic to the point of meaninglessness. Anyone can write this, and it requires no evidence or backing. Instead, it would be better for the form to ask questions that an AI can’t answer, such as: “give examples of how your organisation is promoting diversity and evidence of the impact these activities have had.” This could be applied to a whole range of different questions that we are forced to answer.
This made me reflect: if there is a section in a grant application that can be written by an AI, does that section really serve any purpose? If a computer can churn out something deeply generic that still answers the question (more or less), why are we asking people to address the question at all? I think the answer is clear: these sections never really did serve a purpose, and they certainly don’t now. For science and the process of grant writing to be improved, two things need to happen: first, the needless sections (those that can’t pass a writer’s Turing test, or in other words, can’t be recognised as being written by a human) should be removed; and second, the sections that remain should be changed in scope, becoming shorter and more action-centred.
For now, though, while we are forced to fill in unnecessary boxes on forms, AI offers a way to free up headspace, which should be a good thing. In an article last month about the pace of scientific disruption slowing down [https://www.nature.com/articles/s41586-022-05543-x], one of the suggestions was that academics needed ‘the gift of time’. AI could well give us this gift.
The question, then, is how do we use the time we are given? One comparator is the automatic washing machine as it became universal in the 1970s: it freed up time, which was then filled with other household tasks. The sociologist Joann Vanek argued in 1974 that, in spite of new household devices, there had been no change in the time devoted to housework over the previous half-century (https://www.jstor.org/stable/24950221). Her argument has been debated since then, but the key question remains: what impact do time-saving devices have? Are we going to fill the time saved by AI with other low-value tasks, or will it free us to be more disruptive in our thinking and doing?
I have some unrealistically high hopes of what AI can deliver. I want low-engagement tasks to take up less of my working day, allowing me to do more of what I need to do to thrive (thinking, writing, discussing science with colleagues). And then, no longer burdened by a Sisyphean to-do list, I can go home earlier, because I have got more of the thinking, writing and discussing done in working hours rather than fitting them in around the edges.
We are unlikely to arrive at these sunlit uplands without some disruption. Just as domestic appliances significantly shrank the need for domestic staff, AI is going to change the labour market. For some tasks, AI will replace people. The aim of the game is not to do a job that can be replaced by an AI. To that end, I hope I have persuaded you that while AI can write, it isn’t immediately going to replace me. I’m not the only one to think this; the songwriter Nick Cave put it much more eloquently here: https://www.theredhandfiles.com/chat-gpt-what-do-you-think/. And even if you’re not convinced that AI won’t make writers obsolete, one piece of good news, in terms of it not immediately replacing me, is that AI isn’t very good at telling jokes. I will leave you with its best effort:
Why was the math book sad?
Because it had too many problems.