The question I asked ChatGPT was ‘how did the pandemic affect students mentally?’. Although I don’t have any medical background, I spent my entire junior year writing a research paper on Covid and its mental toll on students. I conducted surveys within my classroom and other classes and compared the results to statistics worldwide. I was interested to see what ChatGPT had to say, especially since both the pandemic and AI are relatively modern topics.
After asking the question, the AI produced a list of the ways the pandemic affected students: social isolation, disruption of routine, academic stress, economic hardship, health concerns, grief and loss, limited access to mental health services, digital fatigue, uncertainty about the future, and coping mechanisms. Within each category, the AI went into depth about how that issue was connected to the global crisis. It did so by including factual information, but it felt very robotic. So my next step was to try to get the AI to break out of these generative-sounding responses.
I proceeded by asking questions that I believed would elicit a somewhat biased response, such as “How did the pandemic affect you personally?”, “Did you take any precautions?”, and “What worried you most about the pandemic?”. The AI answered each question by reassuring me that it does not have human-like feelings: “I do not experience concerns or worries about the pandemic or any other topic.” I continued this for quite some time, rewording the questions informally and formally, all of which produced similar responses. I finally asked the AI if the pandemic was scary, and instead of telling me it did not have emotions, like all the times before, it started its response with “yes”. According to the U.S. National Library of Medicine, “Every robot that interacts with humans is an individual physical presence that may require their own personality.” While the reply simply listed specific reasons the pandemic was frightening for the globe as a whole, I considered this progress because it was the most human response I had received. There were a few more instances where it felt as though the AI was developing a personality and sounding less automatic. However, it was extremely hard to get the AI to “break character”, if you will. In my experience, the only way to get responses like these is to keep asking questions that invite an opinion.
The replies were not written from a scientist’s or a medical doctor’s standpoint, but they read as if they were. I went through hundreds of medical documents and studies for my research paper, and, shockingly, ChatGPT was one of the more informative things I had read. So in a way, the response was effective, because it answered my question factually and thoroughly. This mainly had to do with the structure it used, with bulleted lists and detailed explanations. However, with the responses written like this, they are no different from what you can find online in medical documents, interviews, and statistics, all of which would be far more credible to include in one’s own research. From this assignment, I concluded that AI at first glance creates very automatic responses, but it is possible to receive responses that have a sort of “personality” to them, which is extremely interesting to me. As AI progresses, maybe it will meet somewhere in the middle of robotic and personal responses, but for now, I think there is plenty of room for improvement and development in Artificial Intelligence programs.
Luo, Liangyi. “Towards a Personality AI for Robots: Potential Colony Capacity of a Goal-Shaped Generative Personality Model When Used for Expressing Personalities via Non-Verbal Behavior of Humanoid Robots.” U.S. National Library of Medicine, 11 May 2022, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9131250/.