Last night, I was debating one of my suitemates about whether LeBron James or Michael Jordan was the best player to ever play basketball, and it prompted me to ask ChatGPT about the 2016 NBA Finals. In my opinion, that was the greatest playoff series ever and featured the best single-player performance in basketball history. So, I asked it a general question, simply, “can you tell me about the 2016 NBA finals.” Not to my surprise, it responded just as generally, telling me about the key moments, the key players, and why the series was so special.
So, I asked more questions, hoping to get it to sound more human and less like someone who had just looked the answer up on Google and read it to me. I asked “who was the best player in the series,” and it responded with “LeBron James of the Cleveland Cavaliers was widely regarded as the best player in the series.” This answer was interesting because it showed how ChatGPT tries to be as objective as it possibly can, using phrases such as “widely regarded as” to avoid taking sides while still stating the facts. It had also been very specific and precise the entire time, giving me statistics, specific time frames, and player evaluations.
These things made it feel like it knew more than me, but I started to realize it seemed to not understand one thing. The one thing it wasn’t giving me was the broader connections, such as how LeBron James’s legacy drastically improved after this series, putting him in the debate for the greatest player of all time. It also didn’t make the connection that the opposing team had the greatest regular-season record ever recorded going into that series.
In other words, most of what ChatGPT was telling me was about the series itself, and not about the important aspects surrounding it that aren’t directly related. On top of that, it still sounded very robotic in its text. I don’t know if the creators are even trying to make it sound human, but from my experience, it has no emotion at all and still chats like a robot. I would definitely respond in a different way if a person were to ask me about the 2016 NBA Finals, because I would tell them about the Warriors dynasty, the Splash Brothers, the fact that Kevin Durant would join the team the next season, and that LeBron James had almost no help against them. The chatbot was very specific in some ways but very general in others, and for this reason, I believe it was ineffective at its job. In an article written about the weaknesses of this AI, Sharon Aschaiek states that it “can’t interpret the implications of your faculty’s latest research breakthrough or connect the dots between different trends” (64). If I were to ask about something I did not know much about, I would get a response without much detail. I believe experts still have an advantage over this AI.
Aschaiek, Sharon. “Promises and Pitfalls of ChatGPT.” Inside Higher Ed, www.insidehighered.com/opinion/blogs/call-action/2023/01/31/promises-and-pitfalls-chatgpt. Accessed 22 Sept. 2023.