Jessica Jones

behavioral neuroscientist | hhmi gilliam fellow | uw


JessGPT--How LLMs and AI Are Becoming Our Academic Overlord

February 05, 2025

by Jessica Jones


As the title might suggest, no, I haven't made my own GPT... yet. But once upon a time, artificial intelligence was a dream whispered in the halls of academia and scribbled in science fiction novels. Today, it speaks to us in fully formed paragraphs, spits out pictures of dogs riding bikes in space on planet Banana (in the style of Dalí, of course), spins poetry on demand, and drafts research papers at the press of a button. Welcome to the age of Large Language Models (LLMs): the engines of modern AI, the force behind the explosion of ethically minded platforms like Anthropic's Claude, and the infamous House of OpenAI.

What Are LLMs, Anyway?

Large Language Models are, at their core, vast neural networks trained on billions—sometimes trillions—of words (aside: image generators, like DALL-E, are trained on jillions of text and image pairs). They analyze patterns, predict the next bit of text, and generate human-like responses, almost like they're your friend (lovely!). Imagine feeding a machine the entire internet, seasoning it with deep learning, and letting it cook up answers, essays, and explanations. That's an LLM. These models, like OpenAI's GPT series and Anthropic's Claude, thrive on context, mimicking the art of conversation, argumentation, and storytelling with astonishing ease.

But what makes them so powerful? It’s not just the volume of data but the sophistication of their architecture. By employing deep neural networks and transformer models, LLMs understand context like never before, improving their ability to generate text that feels coherent, insightful, and, at times, eerily human. As they continue to improve, their potential applications multiply—from automating customer service to drafting complex legal documents.
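To make "predict text" a little more concrete, here's a toy sketch of the core idea. This is emphatically not how a transformer works internally (real models learn dense representations over billions of tokens); it's just a bigram counter over a made-up mini-corpus, showing the basic objective of guessing the next word from what came before:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus standing in for "the entire internet."
corpus = "the model predicts the next word and the model learns patterns"
tokens = corpus.split()

# Count which word follows each word (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the corpus."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "model" follows "the" more often than "next"
```

Scale that intuition up a few billion-fold, swap the lookup table for a deep transformer that actually understands context, and you're in the neighborhood of an LLM.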

The GPT Boom: Why Now?

The rise of LLMs wasn’t an accident; it was an inevitability. With computing power surging, Nancy Pelosi seemingly having her finger on NVIDIA’s pulse, data flowing endlessly, and machine learning architectures evolving, AI has become more than a curiosity—it’s a necessity. Businesses automate tasks, writers draft novels, programmers debug code, and yes, students lean on these tools for academic work. But beyond the obvious, this explosion is fueled by accessibility. What was once the realm of tech giants is now available to the everyday user, shaping how we interact with information and creativity itself. Hell, even I use Claude at least due to my senioritis–as an undergrad substitute to debug code I don’t feel like debugging (even though I do) its fricken awesome, and even as an advisor substitute when they’re out on academic vacation I’ve asked it questions about my own research–no shame in utilizing tools.

HOWEVER, these models blur the line between tool and co-creator, forcing us to rethink our relationship with knowledge and how we process, interpret, and generate it. With great power comes great responsibility, and the expansion of LLMs raises questions about dependency, originality, and the ethics of letting machines assist in human thought…

AI Revolution or a Crutch?

Academia has always been about pushing boundaries, but the presence of AI has sparked a debate that stretches from lecture halls to faculty meetings and publication reviews. How are we as academics using LLMs?

The Pros (from my experience):

The Cons:

Claude, ChatGPT, and the Ethics of AI

Not all AI is created equal. While ChatGPT leads the mainstream, ethical AI platforms like Anthropic’s Claude emphasize safety, alignment, and responsible AI use. These systems are designed to avoid bias, reject harmful prompts, and encourage thoughtful engagement rather than blind automation. The goal? AI that augments human intelligence rather than replacing it.

Ethical AI also seeks to counteract misinformation and ensure transparency in how these models operate. Researchers and developers are working to create explainable AI models that clarify their reasoning rather than producing black-box responses. This push toward accountability is crucial as AI continues to integrate into education, healthcare, and governance.

So what now? Learning With AI, Not From It

The real magic of AI in academia isn’t about outsourcing thought—it’s about collaboration. AI can be a mentor, a sounding board, an amplifier of human intellect. The key is balance: using these tools to enhance learning without diminishing personal effort. As AI continues to evolve, so too must our understanding of what it means to learn, to think, and to create.

In the near future, we might see AI-integrated coursework where students learn how to work alongside LLMs, critically assess AI-generated content, and refine their own understanding in tandem with these tools. Rather than banning AI, institutions may lean toward AI literacy—teaching students when and how to use these models effectively. I'm skeptical of this, but seeing how academia is already handling the uptick in LLMs, it isn't far-fetched…

We stand at the crossroads of an intellectual revolution… Will we use these tools wisely, or let them think for us? The choice, as always, is ours (hopefully). The best path forward isn’t resistance—it’s adaptation, ensuring that AI remains a catalyst for deeper learning rather than a shortcut to superficial understanding.