We have some rules about which AI thingamabobs we can and can't use, and I think we're currently running a Microsoft Copilot, um, pilot to see how beneficial it might be for letter writing and whatnot. I tend to stay away from it all.
My search provider has recently introduced an AI summary option where a patent family is summarised in three dot points. Having to click a button and wait for each family is time wasting, and it doesn't usually glean (or gleam, as someone once wrote to me) anything more useful than what I can already see. If I could export that summary, it would perhaps be more useful for the end user of any report I write.
I'm just hoping to last another nine years and retire before the juggernaut catches me and runs me down.
Well JB, I must respectfully disagree with a couple of points you made here. I don't think we will be enslaved by the robots, nor do I think all the web content will eventually be AI drivel (although I agree with you that plenty of it already is).
Like you (and other examples in the comments), I have found a use for the LLMs to flash up some of the doco I write. I work in the Corporate IT game (my other job besides the Booze hustle), and the LLMs are quite fine for drafting most of the dross we need to write, but... it is quite clear when someone sends you an LLM-generated piece of drivel and does not take the time to "clean it up" (make it read like a human wrote it), and folks are instantly dismissive of such work. The LLMs are completely devoid of original thought or imagination. If the techheads can solve that problem then yeah, we are probably in trouble. Until that time, they are just tools.
But the more relevant point that gives me hope is a line I heard from an analyst I follow who, when asked about the LLM phenomenon, said words to the effect of
" I foresee a bull market in original human thought and content creation"
and went on to explain that people intuitively seek this out. And it's true, we love stories, and we love "authentic" stories. Our "spidey sense" instinctively picks up when something is not written by a human; it just doesn't relate.
I see this in action every day now on Substack. I am a huge consumer of writing on Substack that is authentic and clearly written by a person. I know you are too.
Asimov explored this concept (70 YEARS AGO) in the Robot novels, where humans were repulsed by robots (as they imitate humans), and this led eventually to robots being rejected and removed from the human world. I strongly recommend reading I, Robot (for a taster) and then the four Robot novels (The Caves of Steel, The Naked Sun, The Robots of Dawn, Robots and Empire), as he explores these concepts in detail, which is quite prescient considering computers barely existed at the time.
"I foresee a bull market in original human thought and content creation"... Yeah, I'm putting my chips on this.
The Asimov Robot novels are some of my favourites.
The interesting thing about his conception of robotic brains, though, and the three laws in particular, is that they're entirely rational. Most of the plots end up being detailed post-mortems to explain how a robot could have done something apparently against the rules: intricate chains of logic and effect, hinging like good mysteries on sparingly revealed detail.
It could never have occurred to him that the closest approach to "AI" we've yet produced is completely irrational: surface detail with no understanding or logic underneath. Grinning confident bullshitters. The extent to which some of the newer ones appear to hold a line of reasoning only follows from patterns of logic they've come across in their training data. Throw in a red herring (per the recent "Apple paper") and even the best drop to worse than guessing...
Grinning confident bullshitter is the best description of AI that I've heard yet.
Yeah, I gotta agree with that one. Might borrow that line if you don’t mind, Andrew!
AI is a huge problem where I work (a university), with students submitting AI-written work etc. But like Mr Barnes, we use it for performance reviews (because those things are a drag and in most cases a tick-box exercise for HR). And I also use it quite often to help solve a particularly knotty Excel problem. We have also started running our employee complaints through AI to remove our voice. Lots of unhappy people in the uni sector, and management keep asking for our opinion (then ignoring it), but the current climate makes people think that speaking out makes them a target. So they write something down and run it through AI before posting, so it's less likely a manager can say "I know who wrote that".
But I recently asked it to write a letter to council in support of maintaining heritage in a local area. It came up with a letter that was much better than anything I could have produced. We are currently going through a local battle with a developer who bought a huge old block of land from a bunch of nuns that ran a retirement home. Council gave them the go-ahead to just ignore the 9m LEP height in a heritage area and go gangbusters. So now we are fighting a development with seven 50m long by 18m high buildings. We are fighting a losing battle though - council want to look good to state, state want to look good to federal, and federal want to look good to the voters. The only people who don't want a future ghetto are the locals. It's not even located near a train station, and being a country town it's not like people use the train to commute regularly anyway.
I suspect university exams, and maybe assignments are going to revert to in person events, written by hand.
All depends on what costs more: dealing with the plagiarism or dealing with the hand-written stuff. Unis are under the big squeeze financially.
I use AI to write reports on things that I don't have any real knowledge of or care for. I used it to compare planning tools and engagement platforms. The output was better than I could have done, and took three minutes to compile and 30 to edit. Gave me more time to spend on stuff I like doing.
Our workplace recently got an external agency to come in and review "how we are currently using AI and how we should be". What was disappointing to me was that these externals had no idea what we did, or that there might be legal requirements around how we do things that explain why we weren't using AI. Most of us use the Microsoft Copilot AI whenever we need to generate paragraphs for end-of-year performance reviews, or briefings for senior management that haven't read the dozen other briefings, notes, reports or updates we endlessly have to prepare, which they never read, but by God you had better have provided, re-sorted, indexed or forwarded by the due date. I and many others use it for that. Sort of like how the Large Language Models feed on text: we generate more of the same when asked, knowing that they are unlikely to read past the first paragraph anyway.
Like Mr Insomniac, I fairly often use my search provider's summary system, to my surprise. It seems pretty accurate so far and it links to actual top-five articles to justify itself.
For actual work or personal output I haven't found a use for it yet. Legal don't let us use it for code, for the obvious excellent reasons. I can't imagine giving a text-thing enough background context information to be able to have a competent shot at the sort of writing that I do.
I might try your idea of getting it to write a TL;DR summary of some of the longer messages that I write. That could be interesting.
I still laugh at the incorrect guesses of the predictive text input systems in both Apple and Google products. Prediction is not communication, as I've said before: category error.
You ever used the Arc search engine? It's my fave of the newer AI-based Google killers.
No, not tried Arc. Still a happy Kagi subscriber. I even have the early-adopter t-shirt now...
You put me on to Kagi in the first place: what's Arc got going for it? Since starting Kagi has added Claude and Wolfram Alpha, both of which (especially the latter) seem well worth it. Plus it still soothes the advertising-allergy itch.
JB, you put me onto otter.ai for transcription and I find it so helpful that I now have a paid copy for turning interviews into text. The output still needs some editing but this tool saves me hours of grind per week. On the flipside, I chaired a community meeting on the weekend and the outgoing president used AI to prep his speech - it was soulless and awful, and did not sound like him at all.
Ugh. I can easily imagine exactly what it sounded like.
Look, I’ve not loved most of the writing I’ve asked it to tweak. Maybe I’m overly arrogant about my writing, but I’ve found it hasn’t really been able to preserve the voice and tone.
But I am finding it super helpful for (some) research — “my brother’s pathology test results say ‘x’; what is the likelihood of ‘y’ or ‘z’?” — and troubleshooting — “I’m working in a stupid Word template that someone else created. How do I make it do ‘a’, ‘b’, or ‘c’?”
Much more efficient than a Google search.
And I have a colleague who pasted in 100s of pieces of qualitative research data and asked it to analyse the key themes. That was pretty cool.
Clearly with all those typos I didn’t use AI to write my comment …
The newish AI-created trailer for Bond… Jamie Bond, starring Margot Robbie, is disturbingly convincing and compelling.
https://youtu.be/gsi7suv6CGI
I'm mainly using them for an initial dive into research on a topic in the same way that I have used Wikipedia or Stackoverflow
Two things:
1) I was that "gifted" kid that was good at writing, and this has certainly served me well in my career. BUT, I didn't realise that my level of reading, writing and comprehension was any better than anyone else's, I just thought I was a nerd. Until I worked for several of Australia's larger private sector organisations and realised how poorly so many people write in business (and received feedback on my own writing that indicated that what I thought was quite standard and mainstream was anything but). To the point where, when reviewing business documents, I've had to fix all the spelling and formatting errors before I could sit down and read it and not be distracted. Where I recoiled in horror at what I considered to be basic spelling and grammatical errors in official internal company comms, coming from the comms and marketing teams, but NO ONE from the CEO down seemed to either notice them or think they were an issue. Which made me break out in a cold sweat at the thought of what we were sending out to vendors, shareholders, government departments and other public facing entities, and how this would make us look.
So the Apple AI thing as an assist to grammatically challenged people makes perfect sense to me. In light of this, the current flood of think pieces on AI as burning oceans of water just to write an email is coming from people who are perfectly capable of writing a coherent, pithy email without artificial help, and reveals an inadvertent elitism in their exhortations to just write the damn thing yourself. It also overlooks the people who take the opportunity to compare/contrast their shitty first attempt with the shiny AI rewrite and then apply what they learn in this process to their own writing later on, eventually eliminating the need to lean on the AI tool at all.
2) My workplace is running a privately hosted/developed chat proof of concept, not to replace anyone's job, but to assist with execution (like you said JB - the creative work is still yours, but the AI is making the execution of it faster). So we now have a search function for company policy. Rather than having to trawl through endless labyrinths of intranet pages in search of how to do things, or why we're not allowed to do other things, we can run a search on the policy bot and get directed to the right document straight away. This alone would save hours of work and potentially expensive misunderstandings across the business.
My team uses it to generate meeting minutes. We use our online meeting tool to transcribe our meetings, load the transcript into Chatty and get it to spit out meeting minutes for us, which ensures that some poor schmuck isn't stuck having to write notes and then go back through the recording of the meeting to clarify the bit they missed at the 20 min mark, and also frees them up to actually participate in the meeting, since we're all professionals being paid for our expertise, not a glorified typing pool. I just put a transcript of a trimester planning session into it and asked it to summarise the key pieces of work we agreed to as user stories, which has saved me several hours of faffing about. They're not perfect, but I can now edit them into usable work in a fraction of the time it would otherwise have taken me. It feels like cheating, like I'm being paid a not inconsiderable amount of money to bullshit my way through my job, but I have to remind myself that I've reached a stage in my career where I was hired for my knowledge and expertise, not necessarily the output of tedious manual tasks I complete (and also I paid my dues in the overworked, underpaid mines in my 20s and 30s dammit) 🤷♀️
Yeah. This makes me even more certain that a lot of the whining and bitching about AI as an existential threat to writers is just a lot of whiny writers. AI does not do what we do. It does other things.
Yeah, I thought my writing and so on was super average despite doing well in English at school, and boy howdy was it a surprise when I was regularly replying to company-wide blasts with 'found some spelling mistakes, also grammar'.
I am by no means a writer (I write for fun, not food) but man, I am miles ahead of my colleagues.
What is not fun is cleaning up documentation written by other people. Hell really is other people('s documentation).
By the old gods and new, I FEEL YOU on other people's documentation 😭
Having to coach highly skilled technical people on writing documentation is its own level of hell. Explaining the concept of assumed knowledge, and that you need to write procedures as if you're going to drag some poor schmuck in off the street, sit them in front of a PC and ask them to do disaster recovery of whatever system the documentation relates to...
In a previous role my team would document and hand over our systems to our first level support teams, which meant that we had to ensure that they could understand and follow the procedure for fixing whatever issue the doco related to. Our company intranet was very old, very clunky, very finicky, and the first level support crew HATED having to do anything with it. So they would reject my carefully constructed documentation because they allegedly couldn't follow it. I think we got up to v25 of that document before they gave up and I won the war of attrition in handing it over 😈
I suspect the owner is employing AI instead of writers for a magazine I edit. Certain words (like nestled) and phrases keep cropping up and there is an excess of adjectives. The red pen is getting a good workout.
Another AI giveaway: "in contrast to". The bots love that one for some reason.
Saw an article this morning that said that a Polish radio station had sacked its DJs and replaced them with AI, which then proceeded to interview a dead poet...
https://www.abc.net.au/news/2024-10-25/off-radio-krak%C3%B3w-polish-station-ai-hosts-explained/104516864