ChatGPT on the Philosophy of BioShock

BioShock, published by 2K Games in 2007, follows a complex storyline and introduces the idea of individual choice in gameplay, involving both the influence the player supposedly has on the story and the game mechanics …


Last night, I was debating with one of my suitemates about whether LeBron James or Michael Jordan was the best player to ever play basketball, and it prompted me to ask ChatGPT about the 2016 NBA Finals. In my opinion, that was the greatest playoff series ever, featuring the best single-player performance ever demonstrated in basketball history. So I asked it a general question, simply, “Can you tell me about the 2016 NBA finals?” Not to my surprise, it responded very generally, telling me about the key moments, the key players, and why the series was so special.

So, I asked more questions, hoping to get it to sound more human and less like someone who had just looked the answer up on Google and read it to me. I asked, “Who was the best player in the series?” and it responded, “Lebron James of the Cleveland Cavaliers was widely regarded as the best player in the series.” This statement was interesting because it showed how the bot tries to be as objective as it possibly can, using phrases such as “widely regarded as” to avoid sounding one-sided while still stating the facts. It was also very specific and precise the entire time, giving me statistics, specific time frames, and player evaluations.

Clevelandddddd this is for youuuu.

These things made it feel like it knew more than me, but I started to realize it seemed not to understand one thing. The one thing it wasn’t giving me was the bigger connections, such as how LeBron James’s legacy drastically improved after this series, putting him in the debate for the greatest player of all time. It also didn’t make the connection that the opposing team had the greatest regular-season record ever recorded going into that series.

In other words, most of what ChatGPT was telling me was about the series itself, and not the many important aspects surrounding it that aren’t necessarily directly related. On top of this, it still sounded very robotic in the text. I don’t know if the creators are trying to make it sound human, but from my experience, it has no emotion at all and still chats like a robot. I would definitely respond in a different way if a person asked me about the NBA Finals, because I would tell them about the Warriors dynasty, the Splash Brothers, the fact that Kevin Durant would join Golden State the next season, and that LeBron James had almost no help against them. The bot was very specific in its facts but very general in its perspective, and for this reason, I believe it was ineffective at its job. In an article written about the weaknesses of this AI, Sharon Aschaiek states that it “can’t interpret the implications of your faculty’s latest research breakthrough or connect the dots between different trends” (64). If I asked about something I did not know a lot about, I would get a response without much depth. I believe experts still have an advantage over this AI.

Aschaiek, Sharon. “Promises and Pitfalls of ChatGPT.” Inside Higher Ed, www.insidehighered.com/opinion/blogs/call-action/2023/01/31/promises-and-pitfalls-chatgpt. Accessed 22 Sept. 2023.


ChatGPT and Conversation: Can It Really Keep Up?

For the last couple of years, generative AI has slowly become more prominent in everyday life. Its uses range from writing entire essays to answering simple questions. Some people even believe that generative AI can be used in place of a human in a conversation. A question many people have been asking is, “Can AI take over human writing and conversation?”

I decided to attempt to answer this question by asking one of the more popular generative AIs, ChatGPT, about a topic I would like to think I know well and could talk a lot about: Matthew Gray Gubler’s character in Criminal Minds, Dr. Spencer Reid. As someone who has seen all 12 seasons of the series (and rewatched it more times than I’d care to admit), I was feeling very confident that ChatGPT would not know anything I didn’t, and would not be able to engage in a more interesting conversation about the character than another human.

To gauge its level of knowledge, I began by asking ChatGPT what it knew of the character, a generally broad question, to which it provided a lengthy and detailed response. At first glance, the answer seemed very complex, but after reading it I found that it didn’t say anything about Reid that I (unsurprisingly) didn’t already know. The AI touched on Dr. Reid’s “intellectual prowess, deep passion for reading and learning, and somewhat socially awkward demeanor,” as well as his development over the course of the series, his relationships with other characters, and even his background and education, all of which you’d know if you simply watched the series or read an article about the character.

I then asked it a usually conversation-engaging question, “What do you think of the character Spencer Reid?”, to which it responded, “I don’t have personal opinions or feelings, but I can provide information based on the general perception of the character Spencer Reid in ‘Criminal Minds.’” It then repeated its original analysis of the character, providing further evidence that it is, in fact, not at all engaging to ‘conversate’ with. In speaking with ChatGPT, I noticed that its responses are very analytical (obviously), as well as easy to read and understand. Interestingly, the bot did not sound human to me in its responses.
While most people claim that a lot of generative AI sounds human-like in conversation, in this particular situation, I knew I was speaking to a robot. 

 

Although some claim that AIs are becoming more and more like humans, in this particular conversation the lack of enthusiasm or favoritism for any character was very un-human-like. If I asked a fellow Criminal Minds watcher about the series, they would almost certainly respond with either dislike or enthusiasm for Reid, or any other character, instead of providing an academic analysis. This point, I think, is pretty important when discussing the future of generative AI. Some people are worried about AI taking over writing, that it will replace human authors entirely. Erin Kelly addresses this in her article “Warming Up to the Power of ChatGPT,” where she calls the writing of ChatGPT “the equivalent of a microwaved chicken breast. It’s cooked, but not what anyone would point to as ideal.” This point really rings true when thinking about ChatGPT’s responses to my questions about Spencer Reid. The bot lacked enthusiasm, favoritism, or any emotion at all when discussing a (typically) emotive topic, making it very different from a conversation about Spencer Reid with another human being, Criminal Minds fan or not.

 

Kelly, Erin E. “Warming Up to the Power of ChatGPT.” Inside Higher Ed, 13 July 2023, www.insidehighered.com/opinion/views/2023/07/13/warming-power-chatgpt-opinion.


ChatGPT’s response on how to make a PB&J

I asked ChatGPT how to make a peanut butter and jelly sandwich. I am an avid PB&J lover and I have mastered how to make it perfectly. It’s pretty hard to give someone the wrong directions on how to make a PB&J, so I’d say its instructions are pretty accurate.

PB&J sandwich

There were some questionable instructions. For example, ChatGPT said, “Take a butter knife or a spoon and scoop out some peanut butter.” I have never used a spoon to scoop out the peanut butter, nor have I ever met anyone who has. It also suggested, “Feel free to customize your PB&J sandwich with additional ingredients like banana slices, honey, or even a sprinkle of cinnamon if you’d like to experiment with flavors.” I believe that if you’re customizing your PB&J with other ingredients, it’s no longer a PB&J. A peanut butter and jelly sandwich is simply just peanut butter and jelly.

On the other hand, I’d say that most of its instructions were very accurate, for example, it said, “the amount you use is a matter of personal preference.” and, “This step is optional but can make it easier to handle and eat.” It recognizes that making a PB&J has a lot to do with personal preference, such as the amount of each ingredient that you want to use or if you want to cut it.

In the article “Promises and Pitfalls of ChatGPT,” Sharon Aschaiek says, “And when it’s time to inspire new graduates with a president’s convocation speech filled with personal stories, reflections about the academic year and tips for future success, it’s a human, not a bot, who will deliver.” It’s really important to recognize that although AI is a powerful and useful tool, we need to use it with caution. It doesn’t have human emotions or personal stories; we need to make our writing unique with our own experiences.

We can ask ChatGPT something as simple as how to make a peanut butter and jelly sandwich and it will give us almost perfect instructions, but we should always be careful not to abuse it. We need to remember that it’s not human and therefore won’t always give us human responses. 

Works Cited 

Aschaiek, Sharon. “Promises and Pitfalls of ChatGPT.” Inside Higher Ed, 31 January 2023, https://www.insidehighered.com/opinion/blogs/call-action/2023/01/31/promises-and-pitfalls-chatgpt. Accessed 21 September 2023.


Supernatural: Religious or Superstitious

If you were to ask me any question about Supernatural, the hit TV show created by Eric Kripke that premiered in 2005, about two brothers who hunt monsters, chances are I’d be able to answer it. However, I would like to test ChatGPT’s knowledge of this 15-season show by going on a deep dive into the religious subtext it has to offer. While this sounds subjective, I can promise you it is not. Supernatural plays host to a whole array of religions, ranging from Hinduism to Christianity, and even Greek and Norse gods.

So when asking ChatGPT about the show, I thought about past conversations I had with friends, and how our emotions and connection to the show go above and beyond the basic understanding ChatGPT would have. Which raises the question: would ChatGPT be able to replicate the way a fan talks about the show? Or would it simply be a man-made algorithm talking at me? My first question was as follows:

“Tell me about the religious subtext of Supernatural.”

“‘Supernatural’ is a form of entertainment, and while it may incorporate elements of religious and mythological themes, it is primarily a work of fiction designed for entertainment purposes. It should not be equated with religious texts like the Bible.”

Okay, already we’ve hit a roadblock. That’s not a very human thing to say; never in any of my conversations have my friends disputed the fact that Supernatural is religious. However, after asking ChatGPT to write me a lengthy essay about the various religious themes in Supernatural, I was given the answer I was looking for.

“Write me an essay about how there is religious subtext in Supernatural.”

“At the core of religious experiences is the theme of faith and belief. In Supernatural, the characters, particularly the Winchester brothers, Sam and Dean, exhibit unwavering faith in their mission to protect humanity from supernatural threats.”

Finally! We are getting somewhere. I had finally been given an answer that, don’t get me wrong, is correct, but there is no mention of any specific characters (besides Sam and Dean), and no reference to episodes, seasons, or any other specific points in the show. The responses I was given were accurate, and I was even reminded of certain themes I had forgotten were in the show, such as destiny and divine intervention. Unfortunately, everything ChatGPT gave me was base knowledge; there was nothing more in-depth than that. This is very similar to how Bhatia was teaching BabyGPT to write.

“Language models are a kind of universal mimic: They imitate whatever they’ve been trained on. With enough data and rounds of training, this imitation can become fairly uncanny, as ChatGPT and its peers have shown us.”

While Supernatural isn’t precisely akin to language modeling, it has a unique language of its own: the language of the fan. One aspect I appreciate in Bhatia’s article is his acknowledgement that BabyGPT is essentially an algorithm that learns patterns. If I manually kept feeding ChatGPT information about Supernatural, I bet we would see a similar outcome to what we saw with BabyGPT. ChatGPT will never fully be able to replicate how a fan talks about the show, which is proven by its rather ineffective communication of the information. In this case, ChatGPT is ineffective due to its inability to delve deeper. Exploring nuances and intricacies is a vital component of discussing pop culture, such as characters in their respective TV shows. Nevertheless, ChatGPT effectively provided sufficient information about the show to engage in a decent conversation. However, the exchange felt mechanical and rigid, lacking the typical enthusiasm of a fan.
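Bhatia’s point about language models being pattern-learning mimics can be sketched in a few lines of Python. What follows is a toy character-level bigram model, a far simpler cousin of the BabyGPT in his article; the training sentence is an invented stand-in (not his Jane Austen corpus), and the function names are mine, not anything from the article.

```python
# A minimal sketch of pattern-learning: count which character tends to
# follow which, then generate new text by sampling from those counts.
import random
from collections import defaultdict

def train_bigrams(text):
    """For each character, count which characters follow it."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, length=40, seed=0):
    """Sample new text one character at a time from the learned counts."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        follow = counts.get(out[-1])
        if not follow:          # character never seen mid-text: stop
            break
        chars, weights = zip(*follow.items())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

corpus = "the winchester brothers hunt the things that hunt them"
model = train_bigrams(corpus)
print(generate(model, "t"))
```

Feed it more text and the output starts to mimic that text’s spelling habits, which is the “universal mimic” behavior Bhatia describes, just at a vastly smaller scale than ChatGPT.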

 

 

Bhatia, Aatish. “Let Us Show You How GPT Works – Using Jane Austen.” The New York Times, 27 Apr. 2023, www.nytimes.com/interactive/2023/04/26/upshot/gpt-from-scratch.html.


ChatGPT on the Pandemic

The question I asked ChatGPT was, “How did the pandemic affect students mentally?” Although I don’t have any sort of medical background, I spent my entire junior year writing a research paper on Covid and its mental toll on students. I conducted surveys within my classroom and other classes and compared the results to statistics worldwide. I was interested to see what ChatGPT had to say, especially with both the pandemic and AI being relatively modern topics.

 

After I asked the question, the AI made a list of the ways the pandemic affected students. ChatGPT included social isolation, disruption of routine, academic stress, economic hardship, health concerns, grief and loss, limited access to mental health services, digital fatigue, uncertainty about the future, and coping mechanisms. Within each category, the AI went into depth about how each one related to the global crisis. It did this by including factual information, but it felt very robotic. So my next step was trying to get the AI to break out of these generic-sounding responses.

 

I proceeded to ask questions that I believed would result in a somewhat biased response. I asked things like “How did the pandemic affect you personally?”, “Did you take any precautions?”, and “What worried you most about the pandemic?” The AI answered each question by reassuring me that it does not have human-like feelings: “I do not experience concerns or worries about the pandemic or any other topic.” I continued this for quite some time, rewording the questions informally and formally, all with similar results. I finally asked the AI if the pandemic was scary, and instead of telling me it did not have emotions, like all the times before, it started the response off with “yes.” According to the U.S. National Library of Medicine, “Every robot that interacts with humans is an individual physical presence that may require their own personality.” While the reply went on to list specific reasons why the pandemic was frightening for the globe as a whole, I thought this was progress because it was the most human response I had received. There were a few more instances where it felt like the AI was developing a personality and sounding less automatic. However, it was extremely hard to get the AI to “break character,” if you will. In my experience, the only way to get responses like these is to continuously ask questions that invite opinions.

 

The replies were not written by a scientist or a medical doctor, but they were written as if they were. I went through hundreds of medical documents and studies for my research paper, and shockingly, ChatGPT was one of the more informative things I have read. So in a way, it was effective, because it answered my question factually and thoroughly. This mainly had to do with the structures it used, such as bulleted lists, and its detailed responses. However, with its responses written like this, it is no different from what you can find online in medical documents, interviews, and statistics, all of which would be far more credible to include in one’s own research. From this assignment, I concluded that at first glance AI creates very automatic responses, but it is possible to receive responses that have a sort of “personality” to them, which is extremely interesting to me. As AI progresses, maybe it will meet somewhere in the middle between robotic and personal responses, but for now, I think there is plenty of room for improvement and development in artificial intelligence programs.

 

Luo, Liangyi. “Towards a Personality AI for Robots: Potential Colony Capacity of a Goal-Shaped Generative Personality Model When Used for Expressing Personalities via Non-Verbal Behavior of Humanoid Robots.” U.S. National Library of Medicine, 11 May 2022, www.ncbi.nlm.nih.gov/pmc/articles/PMC9131250/.

 


ChatGPT Analyzes The Hunger Games

In my conversation with ChatGPT, we talked about a subject I know well: The Hunger Games series. This series was my absolute favorite in middle school; I read the books and watched the movies at least a dozen times. I started off by asking, “Do you know the book The Hunger Games?” The AI responded that it is familiar with it, gave a little description of the book, listed all three books in the series with their publication dates, and then added that the series was also made into popular movies. Trying to get a human reaction out of it, I asked what its favorite character was. The bot replied that it had no “personal feelings or preferences” and therefore does not have a favorite character. I then asked the bot, “Wasn’t it so sad when Rue died?” The AI replied, “Yes, Rue’s death in ‘The Hunger Games’ was a particularly poignant and emotional moment in the series,” and then began to list some basic facts about her character. The AI finished off its response saying, “Many readers found Rue’s death to be a heartbreaking moment in the series, and it contributed to the emotional depth of the narrative.” Once again, it did not imply that it itself felt sad about the death, but rather that it was an objectively sad event. I tried to push a little further, asking if it cried when Rue died, and it replied, “I don’t have the ability to cry or experience emotions, as I am just a computer program.” In my conversation with ChatGPT, I was unable to get any human-like response from it; its style was very much like talking to a robot or reading a list of facts.

Giving up on getting an emotional response out of the bot, I began asking it to analyze the book. In my junior year of high school, we had to pick a book and analyze it through a feminist theoretical lens, so I asked the bot to do the same thing. Two years ago, I did my paper on this same book, so I had an idea of what the AI was supposed to be telling me, or what the “correct” answer was. The bot gave me an accurate and very in-depth analysis. It touched on all of the main points, speaking about gender roles and stereotypes, patriarchy and oppression, female solidarity, reproductive rights, and more. Not only did the bot list all of these key points, but it also gave in-depth information about how The Hunger Games touched on them and why they were important. All of the AI’s information was correct and detailed. Honestly, if I had copied and pasted this response and submitted it to my teacher two years ago, I probably would have gotten the same grade, if not a better one.

This raises the question: why wouldn’t people use AI to cheat? Using AI is significantly less work for virtually the same result. Susan D’Agostino brings up some opposing arguments to using AI in her article, one of which states, “‘Even when the outputs are decent, they’re shortcutting the students’ process of thinking through the issue,’ said Anna Mills, English instructor at the College of Marin. ‘They might be taking the student in a different direction than they would have gone if they were following the germ of their own thought.’” I agree with this argument, because if I had had this tool my junior year, I wouldn’t have actually taken the time to analyze the book and how the feminist lens applied to it; I would have just looked it up. The use of AI would have almost completely cut off my thinking process, and instead I would have just copied whatever it said. Granted, before AI people could still cheat, but it is much easier now, since all the information is in the same place and the answers are very direct.

D’Agostino, Susan. “ChatGPT Sparks Debate on How to Design Student Assignments Now.” Inside Higher Ed | Higher Education News, Events and Jobs, www.insidehighered.com/news/2023/01/31/chatgpt-sparks-debate-how-design-student-assignments-now. Accessed 19 Sept. 2023.


Harry Potter and the Sorcerer’s Stone

I asked ChatGPT to tell me about Harry Potter and the Sorcerer’s Stone.

I would have said it is about Harry, the main character, trying to figure out his life. His parents were powerful wizards, and they both died because of Voldemort, a very powerful evil sorcerer who wanted all the power for himself. So Harry has to go on an adventure to save his own life and his school of magic, Hogwarts, alongside his friends Ron and Hermione, who have been with him from the beginning. Harry also has really painful thoughts about his past life with his parents and how he is all alone, while the foster family who raised him is cruel to him and makes him feel poorly and unwanted.

Below is ChatGPT’s response.

It seems like you’re referring to the book “Harry Potter and the Sorcerer’s Stone” by J.K. Rowling. This is the first book in the popular Harry Potter series. It was originally published in the United Kingdom as “Harry Potter and the Philosopher’s Stone” in 1997 and later released as “Sorcerer’s Stone” in the United States. The story follows the young wizard Harry Potter as he discovers his magical abilities, attends Hogwarts School of Witchcraft and Wizardry, and embarks on a journey to uncover the mysteries surrounding his past and the dark wizard Voldemort. The book introduces readers to the magical world, its characters, and the beginning of Harry’s adventures. It has since become a beloved classic in the fantasy genre, spawning a successful film series and a dedicated fanbase.  

Well, most of the response is correct, though I was mostly talking about the movie and not the book.

 

 


ChatGPT vs. the Famous Sitcom ‘Friends’

The sitcom Friends, according to ChatGPT

After grilling ChatGPT with multiple questions on the famous sitcom Friends, I now have a better understanding of what ChatGPT is, as someone who had never used it before. ChatGPT seems to be one of the more professional AIs I’ve used. I tried to lead it toward being more humanistic by asking personal questions about its own opinions, but the AI was quick to inform me, “I don’t have personal preferences or feelings as I am just a computer program, so I don’t have a favorite character on the show Friends.”

Moreover, I asked multiple basic questions for starters: “Who was the funniest character?”, “Who was the most attractive character?”, “Who had the saddest childhood?” Each of these subjective questions got the same exact formatted answer: an opening paragraph, followed by a list, followed by a conclusion paragraph. It was extremely clear to me that each question I asked was answered with what felt like an almost immediate Google search at inhuman speed. I will not bore you by reciting every subjective answer I received, but as an example: “ “Friends” is a popular sitcom known for its ensemble cast, and humor is subjective, so different viewers may have different opinions on who the funniest character is. However, over the years, several characters have been considered particularly funny by fans:

  • Chandler Bing (Matthew Perry): Chandler is known for his sarcastic and self-deprecating sense of humor. His witty one-liners and awkward social interactions often lead to comedic moments.
  • Joey Tribbiani (Matt LeBlanc): Joey’s simple-mindedness and his catchphrase “How you doin’?” make him a source of physical and situational comedy. His lack of general knowledge and quirky behavior also contribute to his humor.
  • Phoebe Buffay (Lisa Kudrow): Phoebe’s eccentric and offbeat personality, as well as her strange songs and bizarre life experiences, add a unique comedic element to the show.
  • Ross Geller (David Schwimmer): Ross’s neurotic tendencies and awkward situations, especially regarding his failed marriages and relationships, provide plenty of humor.
  • Rachel Green (Jennifer Aniston): Rachel’s character grows and evolves throughout the series, but her comedic moments often come from her misadventures and quirky behavior as she navigates her way through adulthood.

Ultimately, the funniest character on “Friends” can vary depending on personal preferences, and many fans have their own favorite moments and characters that they find the most amusing.”

Furthermore, I was looking for errors among the responses to my questions… and one occurred. I’m not sure how common it is for ChatGPT to state incorrect information, but I am a die-hard Friends fan, so I knew an error the moment I spotted it.

For context, I was asking which character had the saddest childhood. I spotted the mistake right away when ChatGPT responded with this: “Ross had to contend with being the middle child in his family, and he often felt overshadowed by his younger sister, Monica. His parents’ favoritism toward Ross’s sister caused him emotional distress.” If any of you are fans of this sitcom, you would know the mistake right away.

To elaborate, Monica is the sibling who felt overshadowed by her older brother, Ross. Throughout the entire series, whenever the Gellers’ parents appeared in an episode, the favoritism was extremely clear, and it was toward Ross. The effect of being overlooked was apparent throughout Monica’s relationship with her parents.

Equally important, I am fully aware of the potential of ChatGPT and AI in general. As for what this means for people like you and me? We can only hope that it falls into the hands of people with good intentions. On the other hand, the flaw I found in my conversation with ChatGPT makes me think that maybe there are flaws AI needs to work out, just like us humans.

One article I find rather comforting is “Warming Up to the Power of ChatGPT” by Erin E. Kelly. Kelly goes on about the power AI has, but more importantly, she ends her article with, “Through thoughtful experimentation and skepticism about both the most optimistic and most doom-filled predictions about ChatGPT’s likely impacts, we are likely to wind up finding this is one more tool we can live with.”

To conclude, it is important to remember the crucial benefits at our fingertips, but more important, as a society, to get AI into the hands of the right people.

Kelly, Erin E. “Warming Up to the Power of ChatGPT: Opinion.” Perusall, app.perusall.com/courses/2023-fall-introduction-to-academic-writing-engl-110-25/warming-up-to-the-power-of-chatgpt-opinion.

 




Why a Blog?

One of the main purposes of this course is to help you adapt your writing for various audiences. So in addition to writing essays directed primarily to your professor (me), I want you to have the opportunity to write for each other, and to practice writing as if your audience were considerably larger and more diverse. So we’re doing two shorter assignments in the form of blog posts, where the form and tone of your writing might be different than it is for an assignment that’s basically “a paper for an English class.”
