A couple of things happened today as I was trying to focus on something else completely. First, I read a LinkedIn post by Mairéad Pratschke, PhD. In the post she asks some really good questions about AI and learning. But instead of thinking deeply about passive vs. deep learning, as she suggested, I opted to go check out the AI-generated podcast feature she described in her post. ‘Cos it was a shiny new AI thing that I had somehow missed entirely.
On my way to Google it, I made a short stop on Facebook (as one does). There I saw a new message from Angus Pratt, a key member of this particular Facebook chat group and a well-known advocate for research and early detection for both lung and breast cancer, as he has both. What Angus posted was about his recent trip to WCLC24. He shared a poster he had come across. It was the Pink and Pearl campaign poster from the American College of Radiology, who had very elegantly and respectfully linked mammograms to lung cancer screening.
This immediately caught my eye as I’ve been racking my brain over how we – lung cancer advocates – can piggyback on the success of the breast cancer groups. ‘Cos face it, the breast cancer advocacy groups have done an amazing job of getting more people in for mammograms and that has saved lives. Many, many lives.
So these two things merged in my head and I knew I had to dive into them both. Here’s what I found and then did.
Google’s NotebookLM is an AI-powered tool that leverages Google’s language models to supercharge your note-taking, learning, and research. Apparently, you can upload your PDFs or Google Docs, and the AI dives into them to provide “accurate, relevant insights”. It claims that it summarizes documents, highlights key topics, suggests questions, and even supports interactive Q&A. Pretty cool tool, right?!
But wait, there’s more! NotebookLM also claims that, like other AI apps, it lends a hand with creative tasks like drafting scripts or emails, complete with citations so you can verify the info. Being experimental, it’s continually evolving based on user feedback. Makes it slightly cooler, right!?
But wait. There’s more! I tried the Audio Overview feature Mairéad had written about, and my mind was blown. Here’s what I did.
I uploaded the .pdf version of the poster Angus had shared. Then I noticed the Audio Overview section had a play button. Expecting not a lot, I clicked play to see what it generated. I was gobsmacked.
It took the one-page poster and transformed it into an engaging audio discussion between two virtual hosts. They kind of remind me of the radio hosts you hear on morning radio. You know, the work couple that plays off of one another so well.
Well, this AI couple summarized the content that was on the poster, added a bit of internet research, and made me a believer all in less than 60 seconds! OK, it took me 5 minutes to listen to them, and another 5 to listen again, as I thought I had started hallucinating…
I got this result from one poster. Can you imagine what could be produced with more thoughtful uploads and some actual planning? Because seriously, I was just playing around, wondering what would happen if I threw this (poster) at that (AI).
A few caveats. It’s only available in English for now and they claim it might take a few minutes to process larger notebooks. And yes, the discussions reflect your uploaded sources—they’re not the be-all and end-all on the topic. But as an experimental feature, it’s a fresh, super cool way to engage with your research and notes by turning them into audio content. I. Am. Blown. Away.
Have a listen and tell me you’re not.
Cross-posted to LinkedIn