AI and the Meaningless End of Meaninglessness

A cure for the professional illness of our times

While ChatGPT might be a "bullshit generator," it has unique usefulness in handling mundane or meaningless tasks, providing relief to those burdened by such work, writes Alberto Romero.


ChatGPT is a bullshit generator. You’ve heard that before, haven’t you? Well, unlike probably all the other times someone’s said it, I don’t mean it as a put-down but as a compliment. “Bullshit generator” suits ChatGPT because its output holds no “regard for truth.” ChatGPT doesn’t lie; a lie is an intentional attempt to conceal the truth. It generates text that happens to be either right or wrong—but only incidentally. Whatever you need it to say, it will oblige with the appropriate prompt. I used to think that was bad.

Why would we want such a shapeless, opinion-less tool, you may ask? A tool that can’t differentiate between real stuff and the vast array of imaginary alternatives. A tool that can learn anything, possible and impossible, and will inadvertently and spontaneously confabulate facts about the world. Well, I no longer think that’s bad. It turns out this handy property—automated bullshitting—is singularly useful nowadays. ChatGPT may not be the end of meaning, as I’ve often wondered, but quite the opposite: the end of meaninglessness. Let’s see why.


ChatGPT as a means to recapture sanity

Douglas Hofstadter has been sending mixed signals about ChatGPT and GPT-4 recently. Two weeks ago he publicly admitted to being terrified that AI will eventually eclipse us at everything (a fear that apparently goes back a decade). Yet a few days ago he wrote:

“I frankly am baffled by the allure, for so many unquestionably insightful people (including many friends of mine), of letting opaque computational systems perform intellectual tasks for them. Of course it makes sense to let a computer do obviously mechanical tasks, such as computations, but when it comes to using language in a sensitive manner and talking about real-life situations where the distinction between truth and falsity and between genuineness and fakeness is absolutely crucial, to me it makes no sense whatsoever to let the artificial voice of a chatbot, chatting randomly away at dazzling speed, replace the far slower but authentic and reflective voice of a thinking, living human being.”

Trying to find coherence in his statements, I posted the above excerpt on Twitter. Interestingly, no one pointed out Hofstadter’s conflicting views. Instead, some suggested he may be too disconnected from the struggles of ordinary people. Fittingly, the most-liked response was this:

“The problem is that most of us don't get to live in a purely thoughtful, intellectual environment. Most of us have to live with jobs where we’re required to write corporate nonsense in response to corporate nonsense. Automating this process is an attempt to recapture some sanity.”

I kinda agreed with this more quotidian, relatable perspective—who wouldn’t use ChatGPT to lift the burden of empty and pointless work?—yet Hofstadter’s comment also felt right to me. After a day of thinking it over, though, the apparent contradiction disappeared:

Like Hofstadter, I live in a somewhat “purely thoughtful, intellectual environment,” abstracted from the emptiness of “corporate nonsense.” My professional career has been an incessant effort to not be absorbed into it. That’s why I never really saw the need to use ChatGPT. That’s why I couldn’t understand just how useful—life-saving even—it is for so many people.

Now I get it: ChatGPT allows them to escape what I've been avoiding my whole life. People are just trying to “recapture some sanity” with the tools at their disposal as I do when I write. Whereas for me, as a blogger-analyst-essayist, ChatGPT feels like an abomination, for them—for most of you—it couldn't be more welcome.

People don’t want to use ChatGPT; they need to

Anthropologist David Graeber coined the term “bullshit jobs” in 2013. He wrote:

“…technology has been marshaled … to figure out ways to make us all work more. In order to achieve this, jobs have had to be created that are, effectively, pointless … The moral and spiritual damage that comes from this situation is profound. It is a scar across our collective soul.”

He continues:

“The ruling class has figured out that a happy and productive population with free time on their hands is a mortal danger … And, on the other hand, the feeling that work is a moral value in itself, and that anyone not willing to submit themselves to some kind of intense work discipline for most of their waking hours deserves nothing, is extraordinarily convenient for them.”


It’s hard to say objectively which work has value and which doesn’t, but it’s easy for each of us to answer this question: Do you spend time on tasks you don’t like, that you believe don’t really need to be done?

Graeber again:

“There is a profound psychological violence here. How can one even begin to speak of dignity in labour when one secretly feels one’s job should not exist? How can it not create a sense of deep rage and resentment?”

Graeber thought people should be able to work four-hour days, but the system won’t allow that. That’s exactly why ChatGPT has found such widespread adoption among bullshit workers (and students).

The question was never “why do people choose to use ChatGPT?” as Hofstadter implies, but “why do people feel the need to do so?”

The answer is evident now: What else but a bullshit-generating tool to cancel out bullshit-requiring tasks so people can finally fill their lives with something else?

No one wants to be a bullshitter all day long, yet many people have to. ChatGPT can’t help but be one. The match couldn’t be more perfect. ChatGPT isn’t emptying their professional lives of meaning. No, it’s emptying them of the modern illness of meaninglessness.

My newfound respect for covert ChatGPT users

ChatGPT is, intentionally, the best AI tool ever created—thanks to years of AI research and development—and, unintentionally, the tool that best fits one of the deepest and most widespread needs of our generation. It is a twofold success and a win-win for people who can automate the bullshit dimension of their jobs.

ChatGPT can be rightfully criticized for many things: how OpenAI gathered all the data without permission or compensation; how the model was fine-tuned by exploiting factory-like workers from poorer countries; how it’s been deemed a milestone toward AGI even though OpenAI gave no details about its architecture or training data (we now have a better idea); and how it’s been implicitly marketed as somewhat foolproof when it’s more akin to a bullshit generator.

But I have a newfound appreciation and respect for people who, in an attempt to recapture their sanity and avoid falling victim to the corporate nonsense that plagues our times, are exploiting ChatGPT to the extent of their ability without letting anyone know.

I don’t think there’s a more honorable and valuable use of AI than setting oneself free.

Technology was “marshaled” to fill up our time with meaningless tasks, as Graeber pointed out. Ironically, it has now provided a tool to do those tasks for us.

A bullshit tool for bullshit jobs.

Now, where do we go from here?


Alberto Romero is an AI & tech writer, Analyst at CambrianAI and author of the AI Newsletter: The Algorithmic Bridge.
