In a previous blog, I shared some guidelines for using ChatGPT in graduate school. I mentioned then that I had some general thoughts about the tool. So, in case you don’t have enough on your mind, here are four things I worry about with ChatGPT.
Voice
ChatGPT has the potential to rob students of their voice and flatten diversity in written communication.
In 1974, the Conference on College Composition and Communication issued a statement called Students’ Right to Their Own Language. This document affirms that there is not one correct English, but rather many dialects and forms of English. The authors argue that students should be free to write in a way that reflects the English they speak. In my mind, this is not only a matter of equity, but also a matter of beauty. The diverse and unique ways in which people speak and write reflect the unique contribution of each individual and culture to humanity as a whole.
This has been widely celebrated in poetry. Gerard Manley Hopkins was driven to make up not only his own rhythm, but sometimes his own words to express his wonder at creation (see “The Windhover” for a great example). James Whitcomb Riley used his “Hoosier dialect” in his popular poetry, such as “The Raggedy Man.” More recently, the question has moved to the area of academic writing. University of Waterloo professor Vershawn Ashanti Young, for example, published an article in 2010 in the Iowa Journal of Cultural Studies that asked “Should Writers Use They Own English?” However one answers that question, it’s clear that the English produced by ChatGPT is no one’s own English, but a standard, common denominator version of the language.
The way the AI writes makes me think of George Orwell’s 1946 essay “Politics and the English Language.” Way back then, Orwell wrote that someone who strings together tired stock phrases and calls it speech “has gone some distance toward turning himself into a machine.” This seems relevant – does machine-made communication have the potential to make us more machine-like?
Bias
ChatGPT has the potential to exacerbate common stereotypes.
There has been a lot written about how ChatGPT can reproduce false or biased information. But here, I’m thinking more about the way the tool works. ChatGPT operates by pattern recognition: it generates the most statistically likely response based on patterns in its training data. (If you want to learn more, this is a good, though long, article.) Guess what else operates by pattern recognition? Stereotypes, prejudices, and implicit biases. It’s a good idea to think before trusting what is (to put it in an overly simplistic way) the most expected response to a question.
Community
It’s one more way to avoid interacting with others.
If ChatGPT can answer our questions, we don’t have to ask an actual person for help. This isn’t always a bad thing. Sometimes others don’t have time, or they are otherwise unavailable when we need our questions answered. Also, as an introvert, I totally get the appeal. However, with Covid and social media and self-checkout lanes at the grocery and everything else leading to the atrophy of our ability to interact face-to-face with other humans, we should at least think a bit before we rush to add one more isolating technology to the portfolio.
On a related topic, ChatGPT and other AI tools can offer personalized instruction, meeting each student at their point of need. This is, in some ways, very good news. No more waiting for a few students to catch on. No more struggling to keep up with the rest of the class. But might there not be some value in having students who quickly learn concepts also learn patience while they help their classmates along? And couldn’t those students who take longer to grasp an idea contribute – with their perhaps divergent way of thinking – to the learning of the whole class?
There is obviously a balance here, and I do see the benefits of efficiency. At some level, though, learning in community has to be worth preserving. Also, as a 2023 report from the U.S. Department of Education’s Office of Educational Technology points out, there is “the possibility that some students could be assigned a ‘personalized’ but inadequate learning resource” (p. 21).
Learning Through Struggle
Learning is growth. Growth is hard. And as author Abbie Halberstadt says in a very different context, Hard Is Not the Same Thing as Bad.
Learning tends to come through struggle and failure, rather than through rote completion of assignments. If we rush to ChatGPT to save us from our struggles, we will certainly miss out on learning. When was the last time you got stronger by watching someone else lift weights?
On the other hand, struggle should not be overwhelming; challenges should be appropriate to a student’s developmental level and prior learning. There is room for help from various sources as we struggle to learn, as Vygotsky argues with his concept of the Zone of Proximal Development. The challenge is to figure out when ChatGPT and its AI cousins should be among these sources.
The Importance of Thinking
The most helpful thing about being on a diet is that it forces you to think about what you are eating and make conscious decisions instead of mindlessly munching. In a similar way, I think the most helpful thing to do with ChatGPT is to think about its uses and potential – both good and bad. Then, whatever we do, we can do on purpose.