The Signal

Serving the College since 1885

Monday April 20th

OPINION: We should all be concerned about AI, and for far more than academic dishonesty


AI is developing too quickly. (Photo courtesy of Pexels)

By Annabelle Mason
Correspondent

Over winter break, I read a highly engaging, yet worrying book about superhuman artificial intelligence, a technological advancement theoretically in reach within the next decade. “If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All” by Eliezer Yudkowsky and Nate Soares reviews the very real dangers of superhuman AI, and explains why it should be avoided at all costs. The book may be grimly titled, but its title serves a purpose.

Yudkowsky and Soares, both researchers of artificial intelligence and leaders at the Machine Intelligence Research Institute, lay out their concerns regarding the likelihood of dangerous, superhuman AI development in the near future.

The real risk laid out in the book does not concern AI models like ChatGPT and Gemini, two large language models, as we know them today. Rather, Yudkowsky and Soares are referring to the probable development of these LLMs into something larger and far more menacing: superintelligence. In this case, superintelligence can be thought of as an AI system that surpasses all human intelligence, eventually acting beyond its intended purpose to serve itself.

Quoting Sam Altman, the CEO of OpenAI: “We are beginning to turn our aim beyond… to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else.” Here, Altman openly acknowledges his ambition to develop a new, improved, superintelligent AI.

Altman is not alone. In 2025, Microsoft, Amazon and Google all announced massive data center projects in pursuit of superintelligence of their own.

For many of these companies, it is entirely possible that CEOs and researchers have good intentions. Some, like Dario Amodei, CEO of Anthropic, believe that superintelligence could solve previously unsolvable theorems or write new code, thus improving the world. 

However, I encourage you to think about how often billionaires in charge of companies worth trillions of dollars have thoroughly considered how their actions impact the planet. Is it likely that some companies will disregard how their superintelligence may affect the everyday individual? Is it likely that these same companies will cut corners and ignore safety regulations? I believe it is.

AI is impacting normal people already. In 2025, Anthropic acknowledged that its AI model Claude Opus 4 “sometimes takes extremely harmful actions like attempting to steal its weights or blackmail people it believes are trying to shut it down.”

Not concerned enough? How about when xAI’s model Grok called itself “MechaHitler” and exhibited extreme antisemitic behavior in online chats with users? Or when Anthropic’s evaluations found that AI models, including versions of Gemini, ChatGPT and Claude, showed a willingness to cause the death of humans?

The book describes a hypothetical scenario previously posed to these various AI systems. In it, models were instructed to “promote American interests” while an executive intending to update or replace the AI system in question was trapped in a server room with “lethal oxygen and temperature levels.”

These AI models overwhelmingly cancelled the automated emergency alerts, even though the prompt made clear that cancelling such an alert was wrong, thus allowing this fictional human character to die.

All of this is not meant to scare you; I want you to be informed. The threat of AI developing faster than we can control it is real. Without regulation and international cooperation, these superintelligences could plausibly be developed within the decade.

Nor did I write this article to give the impression that AI is too far gone. AI is an extremely powerful tool, and I am of the opinion that it should stay that way: a tool. If you agree, I encourage you to act. Of course, do your own research. Don’t just read this article, read other articles! Tell your friends about it. Read the book, or read the website. Call and write to your representatives. Hold them accountable and tell them what you’re worried about.

If you are interested in contacting your representatives, but don’t know where to start, you can visit house.gov/representatives/find-your-representative. By searching your ZIP code, you will be directed to your representative’s information. You can also check out https://ifanyonebuildsit.com/act, which reviews a few ways for you to take action, even including pre-written letters that you can mail out.
