After ChatGPT—the powerful, easy-to-use AI chatbot released last November—exploded in popularity, Edward Tian spent three days building an app of his own in an Etobicoke Second Cup. The 22-year-old Princeton student designed GPTZero, a tool that detects AI-generated text. “I was expecting, at best, a few dozen people to try it out,” says the fourth-year computer science major. Instead, it crashed the day after it went online, in early January, due to overwhelming traffic. Since then, Tian has been fielding calls from venture capitalists, media outlets and educators—all while juggling his classes, working to improve GPTZero and, you know, trying to graduate this spring. His very human response to the situation? “It’s been wild.”
First things first: Are you actually John Connor, and have you been sent back to save us from the chatbots?
Boy, I hope not. Fingers crossed that a future doesn’t exist where the chatbots take over.
At Princeton, you’ve been studying GPT-3, the text-generating neural-network language model that paved the way for ChatGPT, its more conversational, user-friendly cousin. Why is this new tool such a big deal?
Technologically, it isn’t—we’ve been there for a while. I took a natural language processing course last year, and in the first lecture, the professor put up two texts, one written by a human and one by AI, and quizzed students on the authorship of each. Lots of people couldn’t tell them apart. But ChatGPT is unique because of how widely available it is to the public. It’s free and accessible, and anyone can go on the website and put their requests in.
While the rest of us were using it to generate limericks about otters or write a Christmas rom-com starring Jennifer Coolidge, you were turning the tech on its head. Why dive in so fast?
I’m really interested in misinformation and bot detection. I actually took a year off from school to work on that, looking at Facebook bots that already had AI-generated faces. ChatGPT made me think, What would it be like if those bots could talk like they were human? That’s kind of scary. These technologies are brilliant, but we also need to build safeguards so that they’re adopted responsibly. And that’s not something we can do months or years after they’re released.
At one point, 33,000 teachers were on the wait list for the version of GPTZero you built specifically for educators. What are they saying about this tech?
I was on a panel at the Conference of Independent Schools Ontario last week talking to 200 teachers. They have a fairly progressive attitude: This technology is here. How do we integrate it responsibly? Because you can’t just ban it outright.
But that’s exactly what a lot of school boards in the US have done.
I don’t think it’s the right approach. Students will always find ways around that. This is the future, and we can’t avoid it. But we shouldn’t enter it blindly. The teachers I spoke to want our detector tool to help start conversations with students. Instead of being a black-and-white detector—decisively determining whether something is written by AI, which is how our beta version worked and what some other inventors are putting out—it now highlights portions of essays that were likely generated by AI.
On top of being a computer science major and a founder, you’re also a journalism minor and a journalist. Is there a place for ChatGPT in writing and reporting?
It can’t fact-check. It’s never going to be able to interview people or gather new information or turn the facts into a compelling story. But it’s great for getting started and generating ideas, and that’s what I use it for. It’s also great at writing fake news articles that look like real news, so that’s a cause for concern.
You’ve spoken about your appreciation for human writing and your desire to protect it. Where does that come from?
John McPhee, the New Yorker writer. He taught this writing class at Princeton for more than 40 years. He’s over 90 now, and I took his class the last year he taught it, which was definitely an honour. He has this great quote: “No one will ever write in just the way that you do.” Imagine a world where everyone writes with ChatGPT: that’s a sad, grey world, in my mind, because none of that writing would be as beautiful or original as John McPhee’s, for instance, or Alice Munro’s. If we had let ChatGPT take over writing for us several decades ago, we might not have writers like them. And, if it had taken over in the 16th century, we might not have Shakespeare.
Speaking of which, we now have Bard, Google’s new ChatGPT competitor. What’s next for GPTZero?
We’re partnering with course-management systems like Canvas and Blackboard to integrate GPTZero into teachers’ workflows. And another big step for us—by which I mean me; my co-founder, Alex Cui; our founding engineer, Yazan Mimi; and a team of about seven researchers—is moving beyond text detection to videos and images. Detecting misinformation and keeping our internet safe is a huge concern.
Okay, pop quiz. I have a joke of sorts, and I’d like you to tell me whether it was written by a human or a machine. Here we go: “Why did the chicken cross the road in Etobicoke? To get to the other side dish.”
Interestingly, I’ve actually experimented with this. I wanted to see how much content ChatGPT copies directly from the internet versus how much it produces by itself. I found that, when you ask it to tell jokes, it’s really good at copying directly from, for instance, Reddit. So I’d say that it could be either—maybe it’s a human joke that a machine stole.
Maybe! I heard it from ChatGPT. I’m not sure if a human told it originally, but a cursory Google search says no. Follow-up: Can you explain the joke to me? Because I don’t get it.
I don’t either. Maybe there are a lot of side dishes in Etobicoke?
This interview has been edited for length and clarity.