Physiology News Magazine

The impact of artificial intelligence on teaching writing skills to life science students

https://doi.org/10.36866/pn.132.38

Dr Matthew Hardy
University of Bradford, UK


On 28 November 2022, writing for The Guardian, Rob Reich published an article with the headline "Now AI can write students' essays for them, will everyone become a cheat?" (Reich, 2022). I cannot know whether this was written with prior knowledge, or whether Rob Reich simply demonstrated remarkable insight; it appeared just two days before the company OpenAI released a version of ChatGPT for free use by anyone with an internet-connected device. Since then, the use of ChatGPT, along with other large language models (LLMs), has come under intense scrutiny within the higher education sector. One reason for this, as Rob Reich's article indicates, is the potential of LLMs to "write" responses to academic questions.

An LLM can be defined as an artificial intelligence (AI) algorithm that has been trained on a large dataset to summarise and predict content. A closely related term is generative AI: AI that has been designed to create text-based content, of which the ChatGPT algorithms are examples. Since the release of that early version of ChatGPT, a number of LLM/generative AI tools have become available, either by subscription or for free. The best known include not only the differing versions of ChatGPT, but also Google's chatbot Bard (powered by the LLM LaMDA) and Microsoft 365 Copilot.

For me, the sudden widespread awareness of LLMs, and their potential for (mis)use in writing, is perhaps of greater interest than it is for most academics. I don't just teach my students about the intricacies of physiology and pharmacology; I am also responsible for delivering academic writing skills to students across programmes within the Faculty of Life Sciences at the University of Bradford. Even if only a minority of students are likely to use AI to cheat in their assessments, this is something I have to address.

I am of the belief that, when it comes to using AI for writing academic content, "the cat is out of the bag", so to speak. In the US, a recent survey of 1,000 students found that more than a fifth had used an AI application to help complete academic assignments or exams (Welding, 2023). There is no reason to think the numbers would be much different within the UK. Therefore, if students are using these tools anyway, perhaps we should be showing them how to use them in an ethical and effective manner. In this regard, I am not alone, as can be seen from the following principles outlined by the Russell Group universities in July 2023 (Russell Group, 2023):

  • Universities will support students and staff to become AI-literate.
  • Staff should be equipped to support students to use generative AI tools effectively and appropriately in their learning experience.
  • Universities will adapt teaching and assessment to incorporate the ethical use of generative AI and support equal access.
  • Universities will ensure academic rigour and integrity is upheld.
  • Universities will work collaboratively to share best practice as the technology and its application in education evolves.

Thus, my response to the widespread use of AI has been to adapt and rewrite the lectures and workshops I use to deliver writing skills to my students; I now include not only more traditional approaches to developing written content, but also discussion of alternative means: namely, the use of AI. This is not limited to the use of generative AI for creating content; I have also developed lessons on using AI to develop assignment plans; to search and map literature (using tools such as Elicit); to transcribe and write from the spoken word (using tools such as Audiopen); and to refine language using either generative AI or alternative AI tools (e.g. Grammarly). There are also ethical and legal considerations: for example, when submitting papers to applications that "summarise" articles (e.g. ChatPDF), there are risks of breaching copyright (Chatpdf.com, 2023), as well as of inaccurate reporting and cognitive dissonance. These are concerns that students need to be aware of.

At the time of writing, I have already delivered the first of these workshops to a group of Year 1 students. Students were not told when they could and could not use AI. Instead, we looked at a variety of approaches together, and they could make up their own minds as to whether these were beneficial. The content included examples of an essay prepared using generative AI. Whilst on the surface it seemed quite impressive, the class identified a number of flaws. These included LLMs' tendency to "hallucinate" facts and, in some instances, references. Even when the references were real, they were often not the most appropriate for the topic and were frequently outdated. Additionally, we looked at different types of writing, including a reflective essay. Initially, students were surprised that the reflection had been written by an algorithm, as it presented as a very personal piece of writing. However, they soon realised that this was immaterial: when assessed against a rubric, the essay was not of passable quality. It was also noted that Turnitin's AI checker identified the writing under discussion as AI-generated; had these been real assignments, the work would have been flagged for academic misconduct.

We didn't just look at the negatives. Whilst it was acknowledged that everything would need to be fact-checked, asking a generative AI about an unfamiliar topic wasn't necessarily a bad place to start. Similarly, repeated adjustment of the prompts (or the inclusion of specific parameters) used to elicit a response could generate sequential improvements to a plan or outline for an assignment. By this stage, it was accepted that simply using generative AI to write an assignment was not good academic practice, but that other approaches could be used to help generate content without compromising the final piece of work.

My closing remarks are as follows: I am not teaching writers; I am teaching scientists. The rapid uptake and evolution of accessible AI means it is likely that at least some of my students will use the same or similar technology in their future careers. On this basis, it can be argued that we have a responsibility to teach students to use AI tools in a responsible and ethical manner. However, the following is worth noting: the last exercise in the workshop was to prepare a plan for a drug monograph using whatever means students felt appropriate. Despite having identified some useful approaches to using AI, not a single student opted to use them.

This article was prepared without the use of AI.
