Will artificial intelligence kill college writing?

As a child, my favorite poem was Shel Silverstein’s “Homework Machine,” which summed up my childhood fantasy: a machine that could do my homework at the push of a button. Decades later, that technology, innocuously dubbed GPT-3, has arrived. It threatens many aspects of college education – above all, college writing.

The web-based GPT-3 software, developed by an Elon Musk-backed nonprofit called OpenAI, is a kind of Siri or Alexa that can turn almost any prompt into prose. You type a query, say, a list of ingredients (what can I make with eggs, garlic, mushrooms, butter, and feta cheese?) or a genre and topic (write an inspiring TED Talk on the ways true leaders can change the world), and GPT-3 spits out a written response. The outputs can be remarkably specific and polished. Asked to write a “song protesting the inhumane treatment of animals in the style of Bob Dylan,” the program clearly drew on themes from Dylan’s “Blowin’ in the Wind”:

How many creatures must suffer?
How many have to die?
Before we open our eyes
And see the damage we’re doing?

When asked to tackle the same subject in the style of Shakespeare, it duly produces passages of iambic tetrameter:

With all the gods who rule this earth
With all the stars that fill the sky
I swear to end this miserable rarity
This scourge of blood and travail.

GPT-3 can write articles, op-eds, tweets, jokes (admittedly just dad jokes at the moment), dialogue, ads, text messages, and restaurant reviews, to give just a few examples. Every time you click the Submit button, the machine learning algorithm draws from the wisdom of the entire Internet and produces a unique output, so that no two end products are the same.
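For readers curious what one of these queries looks like under the hood, here is a minimal sketch in Python, assuming the pre-1.0 openai package and a GPT-3-era completion model; the model name, prompt, and placeholder API key are illustrative assumptions rather than details taken from the examples above.

# Minimal sketch of a GPT-3 query, assuming the pre-1.0 "openai" Python
# package and a GPT-3-era completion model; all names are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

response = openai.Completion.create(
    model="text-davinci-002",  # a GPT-3 completion model of that period
    prompt=("Write a song protesting the inhumane treatment of animals, "
            "in the style of Bob Dylan."),
    max_tokens=200,
    temperature=0.7,  # nonzero temperature, so repeated runs differ
)

print(response["choices"][0]["text"])

The nonzero temperature setting is what makes each submission come back different, which is why, as noted above, no two end products are the same.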

GPT-3’s writing quality is often amazing. I asked the AI to “discuss how free speech threatens dictatorships, drawing on the free-speech battles in China and Russia and how they relate to the First Amendment to the United States Constitution.” The resulting text begins, “Freedom of speech is vital to the success of any democracy, but it can also be a thorn in the side of autocrats who seek to control the flow of information and crush dissent.” Hardly an embarrassment.

If anyone can produce a high-quality essay using an AI system, what’s the point of spending four years (and often a lot of money) to get a college degree?

From an article written by GPT-3

To be sure, the current iteration of GPT-3 has its quirks and limitations. For one thing, it will write absolutely anything: it will happily generate an entire article on “How George Washington Invented the Internet” or a disturbingly thoughtful response to “10 Steps a Serial Killer Can Take to Get Away With Murder.” It also gets bogged down in complex writing tasks. It cannot craft a novel or even a decent short story. Its attempts at scholarly writing – I asked it for an essay on social-role theory and negotiation outcomes – are laughable. But how long before that capability arrives? Six months ago, GPT-3 struggled with rudimentary queries; today it can write a plausible blog post on “ways an employee can get a promotion from a reluctant boss.”

Because the output of every query is original, GPT-3’s products cannot be flagged by plagiarism-detection software. Anyone can create a GPT-3 account. Each query costs money, but usually less than a penny, and the turnaround is instantaneous. By contrast, hiring someone to write a college-level essay currently runs $15 to $35 per page. GPT-3’s nearly free price point is likely to entice many students who would balk at paying for an essay-writing service.

It won’t be long before GPT-3, and the inevitable imitators, make their way onto campus. The technology is simply too good and too cheap not to find its way into the hands of students who would rather not spend an evening perfecting the essay I routinely assign on Elon Musk’s leadership style. Ironically, Musk helped fund the very technology that makes this evasion possible.

To help me think through what the collision of AI and higher education might mean, I naturally asked GPT-3 to write an opinion piece “exploring the implications of GPT-3 threatening the integrity of the college essay.” The program noted, with machine-like detachment, that it threatens to “undermine the value of a college education.” It continued, “If anyone can produce a high-quality essay using an AI system, what’s the point of spending four years (and often a lot of money) to get a college degree? College degrees would become little more than pieces of paper if they could be easily replicated by machines.”

The algorithm wrote that the effects on students themselves would be mixed: “On the positive side, students will be able to focus on other aspects of their studies and won’t have to spend time worrying about writing essays. On the negative side, however, they won’t learn to communicate effectively and they will struggle in their future careers.” Here GPT-3 may itself be lowering the stakes of writing: given the rapid advance of artificial intelligence, what percentage of today’s college freshmen will hold jobs that require any writing at all by the time they graduate? Some who once sought writing-centered jobs will instead find themselves managing the inputs and outputs of an AI. Once AI can automate that as well, even those employees may become redundant. In this new world, the argument for writing as a practical necessity looks decidedly weaker. Even business schools may soon take a page from the liberal arts, framing writing not as professional preparation but as the foundation of a rich and meaningful life.

So what is a college professor to do? I put the question to GPT-3, which acknowledged that “there is no easy answer to this question.” Still, I think we can take some reasonable steps to discourage the use of GPT-3 – or at least slow its adoption by students. Professors can ask students to draw on in-class material in their essays and to revise their work in response to instructor feedback. We can insist that students cite their sources fully and accurately (something GPT-3 currently does poorly). We can ask students to produce work in formats that AI cannot (yet) create effectively, such as podcasts, PowerPoint decks, and oral presentations. And we can design writing prompts that GPT-3 cannot address well, such as those focused on local or campus-specific issues that are not widely discussed online. If necessary, we can even require students to complete written assignments in a monitored, offline computer lab.

Eventually, we may enter an “if you can’t beat them, join them” phase, in which professors ask students to use AI as a tool and then assess their ability to analyze and improve its output. (I’m currently experimenting with an assignment along these lines.) A recent project involving Beethoven’s Tenth Symphony suggests how such assignments might work. When Beethoven died, he had completed only about 5 percent of his Tenth Symphony. A group of Beethoven scholars fed the short finished section into an AI, which generated thousands of possible versions of the rest of the symphony. The scholars then sifted through the AI-generated material, identified the best parts, and pieced them together into a complete symphony. To my admittedly limited ear, it sounds just like Beethoven.
