I found this bit on ZDNet about a Harvard study on genAI use kind of interesting. Boiled down, it says generative AI in the workplace improves people’s work but erodes their motivation and engagement. There are numbers! Apparently, the psychological toll from a loss of agency and control over cognitively demanding tasks results in an 11% drop in intrinsic motivation and a 20% increase in boredom.
Research published Tuesday by Harvard Business Review found that generative AI in the workplace is a double-edged sword: It can improve employee output but also erode one's sense of meaning and engagement at work.
The research -- which comprises four separate studies -- followed 3,500 human subjects as they completed a range of work tasks, from brainstorming project ideas to drafting emails. Some of these tasks were completed with the help of generative AI, and others weren't.
The researchers found that the final results from the AI-assisted workers were generally of a higher quality: Emails drafted with the help of AI, for example, were judged to be more encouraging and friendly.
However, the subjects who initially used AI suffered psychologically when they were forced to switch to a different task that they had to complete on their own. In these workers, and across all four studies, intrinsic motivation dropped by an average of 11%, and boredom shot up by an average of 20% after losing the helping hand of AI. The workers who consistently worked without AI, on the other hand, didn't report any significant emotional shifts.
The upshot is that the benefits of using generative AI in the workplace often produce a kind of hangover, which can harm employee well-being.
I suspect that, as with most things AI-related, they’re doing it wrong.
After about a year of experimenting, I’ve come to a happy place with my use of AI tools. By far the most useful app I’ve found—the one I’m using right now—is MacWhisper Pro, a reasonably cheap piece of dictation software. It’s a hell of a lot better than my old Dragon Dictate setup, about two or three times faster and way more accurate.
I track my word count at the end of every day, and when I was using Dragon to dictate my first drafts, I was averaging about 1,500 words a day. With MacWhisper Pro, I’m up around 4,000-5,000. There’s nothing generative about it. The app’s simply using a Large Language Model to make its best guess at what I’m mumbling about as I stalk around my office with my underpants on my head. Its best guess is usually 99% accurate.
(Incidentally, the results of all that extra productivity will start rolling out in the next couple of weeks—and keep rolling out for the rest of the year. I’ve got something like 11 books in progress or planned this year, including both WW3.2 and 3.3, and it’s all come from that one simple software change. Hold on, wait, no, there was another piece of software I picked up this year, a time tracker called Toggl, which I’ll write about separately, here and at Patreon, because although it’s not LLM-based, it’s been just as revolutionary for my output.)
But speech recognition, no matter how good, is not generative AI. I’m not sure what you’d call it. It’s all coming out of the same LLM soup, I guess—but it’s not what most people mean when they talk about this shit.
That recent Harvard study looked at people using ChatGPT to write their emails for them. And while I can see that being useful in an office environment, I’ve reached a different sort of equilibrium with generative AI. I’ve come to believe quite strongly that it’s much less of a threat to those of us working down in the fiction mines than, say, the mass-scale industrialised destruction of humanity’s ability to focus on something for more than 30 seconds.
Also, it still produces garbage storytelling. I suspect it always will because a huge part of any successful narrative is understanding how people think and feel, and if you can’t think or feel, it’s really fucking hard to tell a good story.
What did interest me about that Harvard study, though, was the finding that while people’s work improved when they used AI, their satisfaction with the work went down. It’s as if the loss of agency took something from the process: the craft, the ownership, whatever. Maybe not as important to them as the paycheck at the end of the week, but still important.
I probably wouldn’t have linked to the study at all if I hadn’t already written about that Substack post by the software engineer who sent out 800 job applications and didn’t get a single bite. But it’s all part of the same story, of course.
If our civilisation survives the next 50 years—and that’s a big if—I think we’ll look back on this moment the same way we now view the dawn of the Industrial Revolution. A time of great and terrible change that completely remade the world.
Sometimes in good ways.
Sometimes in really, epically shitty ones.
I suspect all this analysis may have been on the productivity (and that is a metric that carries a shit tonne of questionable methodology) of what anthropologist David Graeber classified as Bullshit Jobs. In my public sector work, a considerable portion of my time is consumed by providing summaries, updates, and reports on measures that various middle and senior managers demand, most of which they don’t read, understand, or provide meaningful input on.
These are exactly the sort of tasks an LLM tool could churn out a response to. Doing this stuff myself already contributes to alienation in my workplace, so using an LLM would, I imagine, only exacerbate that.
I cannot in good conscience justify using such tools, because the energy demands are exorbitant for such a trivial purpose.
I also don’t believe this is only in the realm of the public sector; I have worked with enough private sector consultants who can confirm the existence of bullshit jobs within it as well.
My alcoholic and definitely autistic father once said that working with your hands would give you satisfaction like almost nothing else would. I tried to explain to his stubborn Glaswegian brain that of course it would, because it filled the creative need inherent in everyone. He disagreed with me, as he did on almost everything.
Either way, I think the stupid bastard was right. We need to create. Even if it's just emails. Slipping a subversive comment into an email is the delight of a day. What really lights up your whole week is the "I see what you did there" from the other end.
It helps build good working relationships, and those are a much larger step towards increasing productivity than any nicely written email generated by the seething horror that is ChatGPT.