Some thoughts on the role of generative AI in research and writing.
While generative AI is a hot topic in K-12 education and undergraduate programs, there seems to be less frenzy about how graduate students might think about or use tools such as ChatGPT. In an extremely unscientific bit of survey research, I texted my son, who is a new Ph.D. student; emailed my capstone chair; and spoke to a faculty member with whom I share an office, who is almost finished with his dissertation. All three said their departments are providing no guidance to graduate students on how to use ChatGPT.
If your grad program has been so silent on large language models that you aren’t even sure what I’m talking about, here is a thorough and helpful explanation from the folks at Understanding AI.
ChatGPT and Graduate School in the News
Not a whole lot has been written on ChatGPT use in graduate school in either the popular or scholarly press. I thought I had caught a cheating scam when I found “Using ChatGPT for Graduate School Applications,” but it turns out the author was merely suggesting that prospective graduate students could research programs and cost of living using ChatGPT. I’m not sure such research would be maximally effective, given the bot’s lack of post-2021 information, but it isn’t as sinister as it first appeared.
Of course, there are all the stories about ChatGPT passing business, law, and medical exams. Good analyses of those claims are here and here. But what I’m interested in – and probably you are too – is how a large language model such as ChatGPT could be used by graduate students in their academic work.
I hit pay dirt with a YouTube channel called Science Grad School Coach. Dr. Alana Rister has several videos on using ChatGPT for graduate-level research and writing. In one called “Using Chat GPT for Scientific Writing: The Dos and Don’ts,” she walks us through writing an introduction, literature review, and results section of a scientific article with ChatGPT. It’s interesting to see the process, and you should watch it for yourself.
My bottom-line takeaway is that ChatGPT is minimally useful in this capacity. Dr. Rister kept responding to the AI-generated text with comments along the lines of, “Well, I’m not really talking about that,” and “As a chemist, I need to change these things.” She also notes, both in the video and in a written comment: “Please note that citations created by ChatGPT are often not real. As mentioned in this video, you should never directly use anything ChatGPT provides without your own research.”
What Is Your Goal?
All of this kind of ignores the elephant in the room – you still have to do your own reading and conduct your own research. Dr. Rister makes the excellent point that AI is only going to put us out of a job if we try to get it to do our job. If you want to trick an AI detector, you can probably do it. But what will you have accomplished? If your goal is to actually be a chemist (or a teacher or a literary theorist or a theologian or an attorney or an aerospace engineer or . . .) you’ll want to do the thinking, researching, and writing yourself. So, at best, ChatGPT can be a kind of thinking partner – it can give you a skeleton for a research paper that you will then need to correct, amend, and flesh out using what you have actually read, done, and thought.
ChatGPT and Learning Goals
With my undergraduate writing class, I approached ChatGPT through the lens of goals. We talked about the Learning Goals of the class, discussed their experience with ChatGPT, did some reading and experimentation with the tool, and determined from there how we should – and should not – use it.
My capstone chair said that he thinks about ChatGPT and other tools in a similar way. In his email, he wrote, “I teach adults who typically have their own learning agendas, and I also don’t want to restrict potential learning with a preconceived notion that places my values above the student’s. So, as long as we are fulfilling the learning contract regarding subject matter, I’m typically good with the situation.” He said that, at this point, he expects students to take credit only for work that is their own; he hasn’t given guidelines for using ChatGPT; he is confident he can spot AI-generated work; and so far, he has not seen students try to pass off computer-written papers as their own.
Drawing from my experience with undergrads and my chair’s experience with graduates, I’m going to suggest a goal-informed approach to your use of ChatGPT in your graduate work. What are you here to do? Can ChatGPT or other AI help you move toward your goals?
But What About Ethics?
Focusing on your goals, though, raises another question. Let’s say your goal is to get through graduate school with as little actual work as possible. You want the degree, and you don’t care if you actually learn anything. I hope this isn’t you, but if it is, and if you are smart and crafty, you probably can use ChatGPT to achieve that goal. Is that okay?
Point One: Of course not. But you know that.
Point Two: Why is it not okay to use ChatGPT to do as much of your work as you can get away with? This is a huge question. Ethical issues surrounding AI include not only truthfulness, but also what it means to be human, what it means to think, the role of trust in society, and what counts as your own work in the first place. These are not new questions. It’s always been possible to get someone else to do your work and then take the credit (as, perhaps, too many graduate students know firsthand, having been on the “doing someone else’s work” end of that arrangement). It’s always been possible to fabricate research, as recent fraud accusations directed at a Harvard professor remind us.
A Bit of Philosophy and Theology
The office mate I mentioned earlier is getting his Ph.D. in philosophy. He said that while there has been no guidance on the use of ChatGPT by graduate students, people in his program are interested in ChatGPT on a higher level. I can’t compete with a graduate program in philosophy, so I won’t try. But I do think we can’t live, much less pursue a degree, without considering ethics. For my contribution to the discussion, I will promote Matthew 7:12, a.k.a., the Golden Rule: “So whatever you wish that others would do to you, do also to them.”
Part of John Calvin’s commentary on this passage is instructive here: “Where our own advantage is concerned, there is not one of us, who cannot explain minutely and ingeniously what ought to be done. And since every man shows himself to be a skillful teacher of justice for his own advantage, how comes it, that the same knowledge does not readily occur to him, when the profit or loss of another is at stake, but because we wish to be wise for ourselves only, and no man cares about his neighbors.”
The Importance of Trust
Like I said, we know it’s not okay to use a tool, however sophisticated, to bypass learning and lie to others. I think it’s not stretching this point too much to say that a good place to start thinking about ethics with ChatGPT, as with other things, is to consider what kind of a world we want to live in. I’d like to be able to trust that my students are writing their own papers. And I think they would like to be able to trust that I, and not a machine, am providing the feedback.
I have some thoughts on ethical issues specific to ChatGPT that are less comprehensive than trying to define human nature, so I’ll revisit this topic in a future blog. For now, here are my tips for thinking about and using ChatGPT in graduate school.
- At the most basic level: Ask your department chair, program director, or advisor for guidelines.
- Zooming out: Think about your goals for graduate school, and how this tool might help you (or not).
- The biggest picture: Consider the Golden Rule.