When I tell someone about my new business, I usually get two reactions. First, they think it is an interesting idea. Second, they tell me I’m bonkers because ChatGPT will do it for free. Let me be clear – I worry about many things when starting my business. I worry about the economy, about international taxation, about hiring the right people for the job, about having enough coffee to survive another 12-hour shift of hustling. But I don’t worry about ChatGPT.
Our company Red Knot helps users navigate cloud infrastructure. We provide various materials, such as guides, sales brochures and help sections. When I recently spoke to the owner of an up-and-coming tech company, he told me the same thing: “Why would I pay for your service when I can just open ChatGPT?” Alright, I said, who writes the docs and tutorials at your company? The businessman hesitated, but eventually admitted that he himself fiddles with the documentation pages in the evenings.
This surprised me, because I expected the head of a multimillion-dollar startup to delegate such things to other people. But I also realized that it is symptomatic of the whole AI hype. People postpone rational decisions today in the hope that tomorrow will bring a completely different reality.

AI is correct, but fails to pinpoint what’s important 

I then asked the gentleman why, in that case, he didn’t let ChatGPT write his docs. “I did try,” he answered in an ominous tone, “but the AI failed to pinpoint what I really considered important. I mean, the information was technically correct, but I thought it didn’t explain to our users what they should really focus on.”

Then I understood. You see, in our business of tech tutorials and manuals, you have to be a teacher. And to teach anybody anything effectively, you have to be able to empathize with them. It’s no different from a kindergarten teacher teaching kids the names of forest animals. Even with highly technical topics, a good technical writer has to be able to imagine what it is like to be an inexperienced user who is just learning the ropes. And the only way to really understand what’s happening in a human’s mind is to be another human.

Replicating correctly doesn’t mean understanding

There is a famous philosophical argument called the Chinese room, by the American philosopher John Searle. It presents the following thought experiment:

Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output).

It struck me (as it has others) as an apt expression of the current limitations of AI. It produces text that has correct syntax and covers the desired topic extensively. But it can’t really fathom the meaning, because to understand the meaning and value of something for a human, you have to have the experience of living as one.

Magic tools that write manuals automatically

There are several companies on the market that promise exactly that. Their bold marketing makes you believe the app will scan your product and spit out all the documentation. Nothing could be further from the truth. There are apps that can take automatic screenshots or finish a line of code for you. And they come in handy.
But explaining how a new product works requires understanding what it is used for. And to do that, again, you have to be able to empathize with the human – to understand the whole mental world that precedes the first use. We’re very far from that.

Let’s work with what’s possible today

In practical terms, I can prompt AI to deliver thousands of pages of documentation, but it’s terrible at teaching users with little to no experience with the given technology.

Let me appeal to savvy startups now – we can tell if your website was written by ChatGPT. Every seasoned engineer can see the page is generic, then simply closes it and goes to your competition. In fact, every intelligent person can tell just by the way it is written.

Let me be clear – I stare in awe when dozens and dozens of AI-generated lines of code pop up on my screen. AI is amazing, and we use it often, as everyone should.

I just think it’s madness when everyone pretends that current AI can handle more than it really can. We can either live in a fictional marketing bubble, or we can admit that most tasks still require elbow grease.
