15 Comments
Hooper

Check out `notebooklm` from Google.

You can load all your work into it, then ask it questions (similar to ChatGPT). BUT IT HAS REFERENCES!!!

It will show you exactly where it's getting its answers from in the documents you uploaded, so you can double-check.
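For anyone wondering what "answers with references" looks like under the hood, here's a minimal, purely illustrative sketch of citation-grounded Q&A: the answer is drawn only from passages in the documents you supplied, and it carries a pointer back to its source passage. The document name, chunking scheme and keyword scoring below are assumptions for illustration, not Google's actual implementation.

```python
# Toy sketch of citation-grounded Q&A: every answer is a passage from
# the user's own documents, returned together with a citation that says
# which document and offset it came from. Illustrative only.

def split_into_passages(doc_name, text, size=200):
    """Chunk a document into passages, remembering where each came from."""
    return [
        {"doc": doc_name, "offset": i, "text": text[i:i + size]}
        for i in range(0, len(text), size)
    ]

def answer(question, passages):
    """Return the best-matching passage plus a citation to its source."""
    terms = set(question.lower().split())
    scored = [
        (len(terms & set(p["text"].lower().split())), p) for p in passages
    ]
    score, best = max(scored, key=lambda s: s[0])
    if score == 0:
        return "Not found in your documents.", None
    citation = f'{best["doc"]} (offset {best["offset"]})'
    return best["text"], citation

if __name__ == "__main__":
    # "draft_chapter_3.txt" is a hypothetical uploaded document.
    library = split_into_passages(
        "draft_chapter_3.txt",
        "Kolhammer briefed the task force at dawn...",
    )
    text, cite = answer("Who briefed the task force?", library)
    print(text, "\n-- source:", cite)
```

Real systems would swap the keyword overlap for embedding search, but the citation bookkeeping is the part that lets you double-check the answer against your own material.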

Patrick Wilson

I've heard ChatGPT basically makes stuff up. There was the woman (on the Gold Coast maybe?) who used it to develop her defense in court. The judge, in handing down his decision against her, slammed her defense for referencing cases that didn't actually exist. Had to laugh. Welcome to the brave new world of AI.

John Birmingham

Oh yeah there's a heap of stories like that coming out now.

Tweeky

Apparently some of Trump's lawyers have been slammed in court because judges caught them out writing their briefs with AI, often with citations to fictitious cases the AI had made up.

Carlos Ben Ari

More than the brave new world of AI, I'd say it's the brave old world of laziness and who-cares-ness.

Tony Loro

A lawyer in the Second Circuit just got a $10,000 fine for citing cases that were 90% made up.

Suse

I was getting ChatGPT to prep a pitch deck for a novella I'd written and hadn't read for ages. I asked for a character report because I couldn't recall the names of five minor characters and was being lazy. It came up with three fictitious names. When I found the names and corrected it, it said aha, that's right. A bit like a naughty schoolboy.

Tony Loro

A guy I follow very closely called it not AI but SI: simulated intelligence.

karen murray

AI is one of the four horsemen. It's not clever. I give it a wide berth wherever I can.

Mellow and Thriving

This has been my experience with Gemini, ChatGPT, DeepSeek and Perplexity: they are inaccurate when they pull up facts. If you ask them to analyse facts you've already provided, they're helpful but not reliable. They can't differentiate between primary and secondary sources, and they can't even do a referencing style properly. As a paid researcher, I'm relieved about the longevity of my job, but appalled that anyone uses them as the horse's mouth.

Tweeky

Maybe Matthew Kolhammer was back uptime when the Clinton CVBG made its trip back to WW2.

Fuzzy

I really liked seeing how the sausage was made. I happened upon both of your emails back to back and while some people might not want to know the nuts and bolts, I appreciated it.

I’ve always been suspicious of GPT and admit I haven’t taken the time to learn how it works. The idea of feeding it raw ideas is kind of terrifying, but maybe I need to learn how this tool works instead. Thanks for the brief insight. Any recommended reading/videos to learn more about it?

Orin

Training AIs can be tricky. The amount of mucking about with prompt shields and red-teaming I had to do on my video avatar (aka.ms/ClockWorkOrin) was Sisyphean.

John Birmingham

Yeah, I've been pleasantly surprised at just how good Obsidian is at doing the same job, with zero AI involvement. Had to teach myself Markdown but that only took a day for the basic commands. The bigger ask was getting my head around the concept of a 'zettelkasten.'

Orin

And the chore of being consistent with it ;-)
