Last of a Kind: I Finished My Book About Trauma Just Before the ChatGPT Era
It took me years to write and finally "completely" finish my first book, Traumatization and Its Aftermath. While my publisher offered more time, I soon recognized that the more I revisited the pages, the more tweaks I felt compelled to make. I could have refined it endlessly, never truly certain it was complete. This predicament, I've heard, is common among writers, and among anyone who crafts something deeply personal.
So, I set a deadline on my calendar to submit the manuscript. I was warned that once I sent the files to my editor, there was no way back. Checking my calendar, I can see that it says: Manuscript SENT!!!!! on December 13th, 2022. The following day, December 14th, I noticed that there were Christmas sale ads everywhere. I had not given any thought to the time of year.
On Christmas Day, I had a nice dinner and an animated conversation with my family. My son, full of excitement, wanted to share with us something that had ignited his curiosity like few things had before: ChatGPT. He tried to explain to us what it was, its capabilities, how it could make our lives easier, and how it could eventually change the world. I enjoyed his excitement but was not able to really understand what he was saying. A couple of months later, however, I was as excited as my son. I started testing the AI chat's knowledge and accuracy. Yet my initial enthusiasm gave way to disappointment: the references were wrong or fictitious, the explanations were ordinary and lacked depth, and, of course, it lacked the capacity to generate anything resembling critical thinking.
Still, not long after, I learned to use its conversational interface to draft emails or find very specific information. I used it as if it were an encyclopedia, a quick source of factual information.
And then, it happened. After I had completed an article on abuse, the thought crossed my mind to feed the piece to the chat and inquire if it could offer corrections. That’s when the unexpected occurred. This artificial intelligence took the liberty of crafting an entirely different article, altering my phrasing, my tone, my style, and even the information I had diligently provided! In just one interaction, I found myself stripped of ownership over my own work. Naturally, I dismissed its version, yet this encounter led me to a profound realization — how fortunate I had been to finalize my book before ChatGPT could have assumed the role of my “editor” or at least “copy-editor.”
My book would have been different. To what extent? Considerably. Once you have an "assistant" capable of improving or "refining" your language and suggesting "corrections" to your ideas, it's only natural to opt for the "richer" and more accurate version of what you were intending to say. Unfortunately, by the time the chat gives you its version, you may believe that's exactly what you wanted to say, inadvertently losing some of the sentiment you had infused into your words, those feelings that may not have used the most accurate, correct, or precise words and grammar. Our emotions show up in our punctuation, our repetitions, our examples. The chat often eliminates whatever seems obvious or repetitive, and flattens inflections like these.
I can picture some writers arguing that they will re-edit what the chat edited to add their feelings back in. I have done that. I never deliver anything edited by the chat without re-editing it, even if it's a casual email. Yet I can't help but imagine how long it would have taken me to finish my book with this back-and-forth between the chat and my brain. Ideas come and go really fast, and the time that editing and rethinking would take could kill the imagination and the freshness of the ideas.
If you have used any of these chats, you may have noticed that they favor baroque words and ornate sentences, or make them extremely casual if you ask them to tone it down. Consistently, the chat makes all its suggestions longer.
I calculate that ChatGPT's paragraphs are around 15% longer than mine, which would have left my publisher either frustrated or forced to ask me to cut 15% of the book. Beyond the length, the chat strips out the feeling that I, as the author, wanted to convey, substituting pompous or exaggerated words; I think it does that trying to imitate emotion.
Still, for those who have embraced the use of AI, including myself, I believe it's nearly impossible to go back to living without it, much like our dependence on smartphones, smart TVs, and all the technology that has become part of our lives.
The conclusion I arrived at, at some point, is that my book will belong to a preceding generation. Nonetheless, I can't help but feel appreciative, and perhaps even fortunate, to have finalized my manuscript before my son introduced me to ChatGPT. Having handed in my manuscript almost at the moment of ChatGPT's birth feels somewhat like welcoming a child into a world where all future newborns will be genetically engineered to be perfect.
In that context, my book will be imperfect, marked by its own distinctive flaws even after rounds of meticulous editing by a dedicated team, not by a chatbot. I want to think that it will be one of the last books to carry the real essence of all my shortcomings and constraints. My hope remains that readers will wholeheartedly embrace my book in its "humanness," written and edited before OpenAI's language models became the norm.
This article originally appeared on Illumination Magazine.