“Will Artificial Intelligence Kill College Writing?” asks a professor in The Chronicle of Higher Education. Micah Mattix gives what I think is a thoughtful response:
“This is one of the problems with teaching writing exclusively as a tool. Most tools are replaceable.
“But I don’t think writing—real writing—is in any danger of being replaced by a computer program. Because much college writing isn’t real writing—that is, it isn’t composed of words chosen to express an idea in the mind of a human being to the mind of another human being interested in understanding it—AI seems more dangerous. In fact, the simulation of AI may replace the simulation of college writing, which isn’t necessarily a bad thing. At least it will help instructors develop writing assignments where meaning is required (I’m probably being too hard on college writing—there are a lot of very good writing assignments, I am sure, across universities and a lot of real writing). But, again, I don’t think there is any real danger of real writing ever being replaced by AI. Real contexts are too specific to simulate. Ideas are too complex. And people are too strange.”
I believe the same can be said of technical writing. Some types of technical writing are highly structured and predictable, such as API documentation. There are well-defined patterns and conventions for describing the components of an API, and so I can imagine a world where those components could be automatically generated from the source code. In theory, all you should need to do is feed in the terms and parameters, and the AI could produce a reasonably accurate draft.
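In fact, the pattern-driven part of that generation doesn’t even require AI; it is essentially template filling. As a rough sketch (the `rate_limit` function and the output format here are invented for illustration, not from any real tool), a few lines of Python can mechanically build a reference entry from a function’s signature and docstring:

```python
import inspect

def rate_limit(requests: int, window_seconds: float = 60.0) -> dict:
    """Configure a rate limit of `requests` calls per `window_seconds`."""
    return {"requests": requests, "window": window_seconds}

def doc_stub(func) -> str:
    """Build a reference-style doc entry from a function's signature and docstring."""
    sig = inspect.signature(func)
    lines = [f"{func.__name__}{sig}", "", inspect.getdoc(func) or "", "", "Parameters:"]
    for name, param in sig.parameters.items():
        # Fall back to "any" when a parameter has no type annotation
        annot = (param.annotation.__name__
                 if param.annotation is not inspect.Parameter.empty else "any")
        default = (f" (default: {param.default})"
                   if param.default is not inspect.Parameter.empty else "")
        lines.append(f"- {name} ({annot}){default}")
    return "\n".join(lines)

print(doc_stub(rate_limit))
```

Everything this script produces is exactly the kind of content an AI handles well: names, types, defaults. What it cannot produce is the sentence a human writer adds next, the one explaining why you would rarely want the default, or what happens when the limit is exceeded.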
But I can also imagine so many ways this could go terribly wrong. A programmer could read the documentation and find that the AI-generated description doesn’t quite fit the situation, doesn’t provide enough context (e.g., by failing to define obscure terms or feature names), or doesn’t quite anticipate the idiosyncratic ways that a human reader might interpret the content and run into problems or errors.
And what about something a little less structured, like instructions? Could AI automate that sort of thing? I am in the middle of writing instructions for a new product right now, and there are parts that seem like they could be programmable, like repetitive steps, menu names, and formatting rules. But one thing I keep finding is that I need to add a lot of little details and examples to explain software behavior that is by no means obvious, behavior I only discovered by trying the software myself and running into unexpected scenarios. I am sure that if someone were to recruit participants for a usability test, even more questions would emerge, which I would later address in a revision. I think this is just one example of what Mattix means when he says that AI probably won’t replace “real writing”: “Real contexts are too specific to simulate. Ideas are too complex. And people are too strange.”
But of course I’m biased. I want humans and the act of writing to be too strange to be replaced by AI. I like my job security.