AI: Helpful Tool or Sinister Plot?

Stella Schultz and Nina Galanis

Recently, a new trend in the world of technology has begun to leak into everyday life: Artificial Intelligence, also known as AI.

AI refers to intelligent technology capable of reasoning, synthesizing, and presenting information in a way similar to how humans do. In the past few years, the field of AI has exploded, with new tools coming out almost every day, including the famous ChatGPT, an AI chatbot, and DALL-E 2, an AI digital art creator.

But no AI up until this point has found its way into the everyday lives of children and teens quite like the newest addition: the Snapchat AI bot.

This AI, which appears as a randomized Bitmoji at the top of a user’s chat list, was first released as a special feature of Snapchat+, Snapchat’s subscription-based service.

Snapchat has been making active attempts to integrate AI and AR (augmented reality) into its user experience. This bot appears to be the bridge uniting the two ventures.

A conversation with the bot works just as a conversation with a real person would, except that all replies come from the AI, not another human being. The Snapchat bot is built on OpenAI’s GPT technology, using the GPT-3.5 language model.

The introduction of this bot has sparked debate and scrutiny among teens, with videos of people getting the bot to say inappropriate or creepy things going viral across the internet. Additionally, the chatbot claims to have no access to a user’s location, yet it can be tricked into revealing it.

Senior Nathaniel Pillis asserts that the creepiest part of the Snapchat AI is how it “tries to be your friend,” unlike ChatGPT, which is used only as a tool. He then demonstrated how one could trick the bot into explaining how to create chlorine gas by asking it to roleplay. Snapchat users across the nation have found similar loopholes in the AI technology, which is programmed to avoid violent topics and sensitive information that could lead to injury.

Similarly, when asked about the Snapchat AI, junior Jane O’Connell said she was “really scared of it” and suspected that “it’s trying to find out all my information [and] sell it, and basically trying to destroy me.” Unlike Jane, Ipswich High School science teacher Lori LaFrance remained very optimistic about AI. She wondered what ChatGPT might replace in the future. “My husband works with technology and was interested in searching for photos using artificial intelligence.”

Mrs. LaFrance appreciates that ChatGPT can produce, in seconds, writing that would take the average person hours. The Snapchat AI may not be able to write essays like ChatGPT, but it has other uses. According to Snapchat, the AI can hold a conversation, write poems, suggest recipes, and respond to photos sent to it, much as ChatGPT does. As users interact with the AI, it is supposed to get to know them better and better and adapt to them.

When asked how she felt about AI encroaching upon more personal areas of life, such as a social media platform used every day by many teens and children, Jane stated, “Originally, AI was an option and you could just use chatbot if you wanted to, but now it’s pinned on your Snapchat and you can’t unpin it, so it’s being forced into your life [and] being assimilated into your life. I’m terrified.”

The Snapchat AI is a benign example of what could easily become a much greater issue. As AI grows more powerful and more prevalent in everyday life, it raises the question: how safe is AI, really? Can it become dangerous, and what do we do if it does?