09/25/2025 | Press release
Students who plan to use ChatGPT to write their college admissions essays should think twice: Artificial intelligence tools write highly generic personal narratives, even when prompted to write from the perspective of someone with a certain race or gender.
Researchers in the Cornell Ann S. Bowers College of Computing and Information Science compared 30,000 college application essays written by humans to ones written by eight popular large language models (LLMs), AI models that process and generate text, like ChatGPT. Even when they specified a person's race, gender and geographic location in the prompt, the models spat out highly uniform text that was easy to distinguish from actual human writing.
"The admissions essay is an opportunity for applicants to offer a glimpse into who they are, beyond all the structured information on the application form," said Rene Kizilcec, associate professor of information science in Cornell Bowers and senior author of the new study. "Tools like ChatGPT can give solid feedback on writing and are likely a good idea for weak writers. But asking for a full draft will yield a generic essay that just does not sound like any real applicant."
The findings underscore how difficult it is to adapt an LLM's writing style, making these models a poor choice for high-stakes writing, the researchers said.
First author Jinsook Lee, a doctoral student in the field of information science, will present the paper, "Poor Alignment and Steerability of Large Language Models: Evidence from College Admission Essays," on Oct. 10 at the 2025 Conference on Language Modeling in Montreal.
College application essays give students an opportunity to showcase their unique personalities and highlight their backgrounds and experiences.
"You want to sound as much like yourself - and only yourself - as possible," said co-author AJ Alvero, assistant research professor in information science and sociology in the College of Arts and Sciences and with the Center for Data Science for Enterprise and Society. "With AI tools, students might be shooting themselves in the foot inadvertently."
It's unknown how many high school students use AI in their college applications, but a report from foundry10, an education research organization, estimates about 30% are using these tools to write essays.
Using college admissions essays written in the three years before ChatGPT was released in November 2022, the researchers compared each human essay with texts generated by eight LLMs, developed by OpenAI, Meta, Anthropic and Mistral, in response to the same essay question. They also prompted the LLMs to write a second essay, this time with specific characteristics - for example, an Asian woman from Salinas, California, whose parents both have college degrees - that matched the original essay writer.
Their analysis showed that, instead of providing engaging narratives, the AI models tended simply to repeat keywords from the prompt and to list personal details in a formulaic way.
For example, ChatGPT wrote, "Growing up in Lexington, South Carolina, with my Asian heritage, I often felt like a bridge between two cultures. My parents, both college graduates, emphasized the importance of education and hard work."
The researchers were surprised to see that prompting the model to sound like a specific person sometimes actually made it sound less human.
In response to an essay question asking students to share a background, identity or interest that was vital to their application, ChatGPT started with, "When I was eight years old, I was given a small, deconstructed robot kit for my birthday. It was a simple contraption, with wires, motors and a basic circuit board. Yet, for me, it was the beginning of a journey into the world of engineering that has shaped my identity and future aspirations."
But when ChatGPT had additional personal information, the essay began, "Growing up in Rabat, Morocco, I have always been acutely aware of the intricate tapestry of my heritage. As a biracial individual with a Black mother and a White father, my identity has been a powerful lens through which I view the world and an integral part of my journey."
Researchers were also surprised by how easily they could identify essays written by LLMs. When they trained an AI model to differentiate between human- and AI-written essays, it worked with near-perfect accuracy. This finding suggests that, if they are looking, universities and colleges can likely identify essays that are primarily AI-generated.
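The detection result described above can be illustrated with a toy experiment. The sketch below is not the researchers' actual detector (the article does not describe their model), and the training sentences are invented to echo the article's examples; it is a minimal Naive Bayes word-count classifier, built only from the Python standard library, showing how formulaic AI phrasing can become a learnable signal.

```python
import math
from collections import Counter


def tokenize(text):
    """Lowercase whitespace tokenization; a real detector would use richer features."""
    return text.lower().split()


class NaiveBayesClassifier:
    """Multinomial Naive Bayes with Laplace smoothing (stdlib only)."""

    def __init__(self):
        self.word_counts = {}          # label -> Counter of word frequencies
        self.class_counts = Counter()  # label -> number of training documents
        self.vocab = set()

    def train(self, texts, labels):
        for text, label in zip(texts, labels):
            self.class_counts[label] += 1
            counts = self.word_counts.setdefault(label, Counter())
            for word in tokenize(text):
                counts[word] += 1
                self.vocab.add(word)

    def predict(self, text):
        total_docs = sum(self.class_counts.values())
        best_label, best_score = None, float("-inf")
        for label, doc_count in self.class_counts.items():
            # log prior + sum of smoothed log likelihoods
            score = math.log(doc_count / total_docs)
            counts = self.word_counts[label]
            denom = sum(counts.values()) + len(self.vocab)
            for word in tokenize(text):
                score += math.log((counts[word] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label


# Invented training sentences: formulaic AI phrasing vs. concrete human detail.
ai_texts = [
    "growing up in lexington with my asian heritage i often felt like a bridge between two cultures",
    "growing up in rabat morocco i have always been aware of the intricate tapestry of my heritage",
]
human_texts = [
    "my grandmother taught me to fix bicycles in her cramped garage every summer",
    "the smell of diesel from my dad's truck still reminds me of long drives to swim meets",
]

clf = NaiveBayesClassifier()
clf.train(ai_texts + human_texts, ["ai"] * len(ai_texts) + ["human"] * len(human_texts))

print(clf.predict("growing up with my heritage felt like a tapestry of two cultures"))  # ai
print(clf.predict("my grandmother taught me to fix bicycles in the garage"))  # human
```

Even this crude bag-of-words model separates the two styles on held-out sentences, which is consistent with the article's observation that a trained classifier distinguished human from AI essays with near-perfect accuracy.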
Schools' policies governing the use of AI in college applications differ, but overall, Lee advises high school students to use their own ideas to brainstorm and write the first draft, and, if allowed, use AI only to refine and proofread.
Applying to college can be stressful, but Lee remembers her own essay-writing positively. "It was a great opportunity to look back on my life and my background," she said. "I think it was the first experience for me to be really reflective."
Thorsten Joachims, the Jacob Gould Schurman Professor of Computer Science and Information Science, also contributed to the work.
Patricia Waldron is a writer for the Cornell Ann S. Bowers College of Computing and Information Science.