Tasty, tasty slop.
I enjoy flicking through Substack Notes of a morning, looking for an essay worth my time. Occasionally I’ll find something like ‘Light Travels at the Highest Possible Velocity’, a Sherman Alexie story I read one Sunday morning in bed which delayed my getting up by an hour.
It’s one of the best things I’ve read this year, and I found it by accident and read it for free.
Thanks Sherm.
And then, I got suckered, I think, by this tasty piece of clickbait: 8 Japanese Zen Practices That Will Rewire Your Daily Life. I mean, the clickiness of the bait in that headline…
What was I thinking? I was probably thinking… Zen? I like Zen! And… I wouldn’t mind rewiring my life. So I read it and I kind of enjoyed it, and I even learned a bit from it. I really liked Practice Number 7, Isshōkenmei, ‘Give your whole self to what you do.’ It had a cool samurai story to illustrate the point, and what am I if not a cool but frustrated keyboard samurai, cruelly denied my true path of swishing out my sword and cutting motherfuckers who get on my bad side every day?
Anyway, I was going to restack the post, when I remembered the beef I got into with some chode on Notes, who’d dropped into my TL to tell me another post I’d enjoyed and shared (this one on brain health) was just AI slop.
I reacted pretty poorly to the driveby, partly because I’d really enjoyed the piece, and partly because I rated it as good advice: I’d been following a lot of it for the last few years and could confirm it really helps.
Did the author, a doctor, use an AI to help him write it? I don’t know. But I doubt it. It doesn’t feel generated to me. Did he use an AI to frame or edit the piece, however… Maybe. But I don’t really care. Doctors aren’t writers. I wouldn’t expect one to have the narrative or discursive skills of a Sherman Alexie. My opinion, I decided, was that I didn’t give a shit if I thought the piece was a good read and useful enough to share.
Which brings me back around to that 8 Zen things bit.
I enjoyed it when I read it, after just waking up, a bit groggy and uncaffeinated. So I saved it, meaning to share it later. But when I came back to it, possibly feeling a bit tender from the driveby argument, I looked at it again. And yeah, unlike the doctor’s essay, this did look kinda botty. It’s got all the AI tells. The super-short one- and two-line pars. The chopped sentence fragments. The repeated repetition.
And yet, I did enjoy it the first time I read it. And the content is kind of useful and of interest to me. But I’m not gonna share it on Notes now. I thought I’d just link to it here and ask what everyone thinks about these essays or posts that feel much more obviously AI generated, but still kind of engaging and useful.
My gut feeling is that I don’t give a shit if anyone, especially a non-writer, uses an AI to help frame and edit their own arguments. Honestly, it’s gonna make most of them more readable. But I’m less sanguine about people calling themselves writers who are really just prompting a bot to generate copy and then throwing it out there without much intervention. But then how much intervention makes it worth reading?
I honestly don’t know.



There's a bit of a dirty feeling left in my mouth if I'm listening to music on youboob or something and it feeds me music that at first sounds kind of familiar and quite competent, almost enjoyable. Until I look a bit closer. And it's created using AI. Sometimes you can't tell till you start looking on a search engine for the 'band'. Admittedly there are a lot of bands out there these days that sound "just like Led Zepp" or "just like 70s rock" or "just like Jim Morrison", but when AI is doing it I feel a bit off about it. Like I've been duped by a snake oil salesman.

I'm finding more and more of a distaste for the "feed", and I'm more inclined toward a bit of agency and self-discovery, like the KEXP music channel. They support and introduce so many good bands from around the world that it's almost like having that cool big brother who gave you your first album when you were young. Yes, I can see the irony in this just being another 'feed', but at least it's of humans, by humans, and not an algorithm saying "hey, you liked this, here's something that x amount of people, profiled just like you, liked as well".
(Btw, I searched up that Dr Stickler and there is at least some kind of online presence: https://www.a4m.com/daniel-stickler.html. But he uses the acronym AI a lot! I also have deep suspicions about his validity. He uses a lot of percentages but there are zero links to studies backing up his claims. Plus there's an ad at the bottom for his AI health tool and lots of back-patting in the comment section for people positively reinforcing his points. For a start, his advice around glycemic control is terrible for type 1 diabetics.)
These days we have to be careful we don't fall into the confirmation bias trap just because something agrees with everything we think is right. Seems like such a murky world we have landed ourselves in, and it just tires you out sometimes. To the point where in the smallest, darkest hours you say to yourself, "just put the chip in my head already. I'm over it all." :)
Old man shouts at cloud.
Hilariously prophetic double-meaning words from Time Traveler Matt Groening.
I feel like this is me more and more. LLMs and their hell-born ilk are here to stay no matter what we do. Well, maybe we could start armed mobs and firebomb data centres, but I feel like that would do more harm than good. All those burning plastics are gonna be bad for the environment.
Copilot helpfully tries to edit my Word docs (I pay for an Office sub for personal use) and more often than not, I take its advice. It started with correcting my spelling and now it's trying grammar, but making it understand that's just how Eddie the Street Rat talks when in the presence of Mr Beswix, the Crime Lord, is difficult.
These machines have real value, especially in fields with enormous amounts of data. I feel like ChatGPT and cheating on interviews overshadow the actual usefulness of machines capable of examining literal mountains of data and churning out incredibly useful conclusions, especially in fields like medical diagnosis.
Unfortunately, capitalism drives the deployment of dollars, and right now clicks and views drive advertising dollars, so that is where the effort is placed: clickbait headlines and machine-generated articles. We need more government funding pushing these machines to do actually useful stuff, like modelling micro-plastic removal and so on.
Harks back to that post you did about who controls these machines. Should governments fund their own models or purchase instances? Governments can't regulate what they don't understand, and I bet you'd have to look pretty far past the kooks to find someone who thinks LLMs and machines should be regulated and is willing to tell you how.