My friend David Eaves has the best tagline for his blog: “if writing is a muscle, this is my gym.” So I asked him if I could adapt it for my new biweekly (and occasionally weekly) hour-long video show on oreilly.com, Live with Tim O’Reilly. In it, I interview people who know way more than me, and ask them to teach me what they know. It’s a mental workout, not just for me but for our participants, who also get to ask questions as the hour progresses. Learning is a muscle. Live with Tim O’Reilly is my gym, and my guests are my personal trainers. This is how I have learned throughout my career—having exploratory conversations with people is a big part of my daily work—but in this show, I’m doing it in public, sharing my learning conversations with a live audience.
My first guest, on June 3, was Steve Wilson, the author of one of my favorite recent O’Reilly books, The Developer’s Playbook for Large Language Model Security. Steve’s day job is at cybersecurity firm Exabeam, where he’s the chief AI and product officer. He also founded and cochairs the Open Worldwide Application Security Project (OWASP) Foundation’s Gen AI Security Project.
During my prep call with Steve, I was immediately reminded of a passage in Alain de Botton’s marvelous book How Proust Can Change Your Life, which reconceives Proust as a self-help author. Proust is lying in his sickbed, as he was wont to do, receiving a visitor who is recounting the journey he made to come see Proust in Paris. Proust keeps making him back up in the story, saying, “More slowly,” until the friend is sharing every detail of his trip, down to the old man he saw feeding pigeons on the steps of the train station.
Why am I telling you this? Steve said something about AI security that I understood superficially but not deeply. So I laughed and told him the story about Proust, and whenever he went past something too quickly for me, I’d say, “More slowly,” and he knew just what I meant.
This captures something I want to make part of the essence of this show. A lot of podcasts and interview shows stay at a high conceptual level. In Live with Tim O’Reilly, my goal is to get really smart people to go a bit more slowly, explaining what they mean with vivid stories and immediately useful takeaways that help all of us go a bit deeper.
This seems especially important in the age of AI-enabled coding, which lets us do so much so fast that we may be building on a shaky foundation, one that comes back to bite us because of what we only thought we understood. As my friend Andrew Singer taught me 40 years ago, “The skill of debugging is to figure out what you really told your program to do rather than what you thought you told it to do.” That is even more true today in the world of AI evals.
“More slowly” is also something personal trainers remind people of all the time as they rush through their reps. Increasing time under tension is a proven way to build muscle. So I’m not entirely mixing my metaphors here.
In my interview with Steve, I started out by asking him to tell us about some of the top security issues developers face when coding with AI, especially when vibe coding. Steve tossed off that being careful with your API keys was at the top of the list. I said, “More slowly,” and here’s what he told me:
As you can see, having him unpack what he meant by “be careful” led to a Proustian tour through the details of the risks and mistakes that underlie that brief bit of advice, from the bots that scour GitHub for keys accidentally left exposed in code repositories (or even in their histories, after they’ve been expunged from the current version) to a humorous story of a young vibe coder complaining that people were draining his AWS account—after he’d displayed his keys in a live coding session on Twitch. As Steve exclaimed: “They are secrets. They are meant to be secret!”
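The practical takeaway from Steve’s warning boils down to a simple habit: keys live in the environment (or a secrets manager), never in source files. Here’s a minimal sketch of what that looks like; the variable name is just an illustration, not something from our conversation:

```python
# A minimal sketch (my example, not Steve's code): read secrets from the
# environment instead of hardcoding them in source files.
import os
import sys

# The variable name is illustrative; use whatever your provider expects.
api_key = os.environ.get("OPENAI_API_KEY")
if not api_key:
    sys.exit("OPENAI_API_KEY is not set; refusing to fall back to a hardcoded key.")

# Pass api_key to your client library here. Keep any .env file that holds
# real keys in .gitignore so it never reaches the repository or its history.
```

The point is the one Steve made: if the key never appears in your code, it can never end up in your repository, its history, or your Twitch stream.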
Steve also gave some eye-opening warnings about the security risks of hallucinated packages (you might think, “the package doesn’t exist, no big deal,” but it turns out that malicious programmers have figured out commonly hallucinated package names and published compromised packages to match!); some spicy observations on the relative security strengths and weaknesses of various major AI players; and why running AI models locally in your own data center isn’t any more secure unless you do it right. He also talked a bit about his role as chief AI and product officer at information security company Exabeam. You can watch the complete conversation here.
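To make the hallucinated-package risk concrete, here’s one way a cautious developer might sanity-check a dependency an LLM suggests before installing it. This is my own illustration of the idea, not something from Steve’s book or our conversation, and the package name below is a placeholder:

```python
# A minimal sketch: before installing a package an LLM suggested, confirm it
# actually exists on PyPI and eyeball basic signals (maintainers, release
# history) rather than running "pip install" blindly.
import json
import urllib.error
import urllib.request


def pypi_metadata(package_name: str):
    """Return PyPI metadata for a package, or None if it doesn't exist."""
    url = f"https://pypi.org/pypi/{package_name}/json"
    try:
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)
    except urllib.error.HTTPError:
        return None


# Placeholder name: substitute whatever dependency the model proposed.
info = pypi_metadata("some-package-an-llm-suggested")
if info is None:
    print("Not on PyPI at all: a hallucinated name, not a real dependency.")
else:
    print(f"Found {info['info']['name']}; now check who maintains it and when it was published.")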
My second guest, Chelsea Troy, whom I spoke with on June 18, is by nature totally aligned with the “more slowly” idea—in fact, it may be that her “not so fast” takes on several much-hyped computer science papers at the recent O’Reilly AI Codecon planted that notion. During our conversation, her comments about the three essential skills still required of a software engineer working with AI, why best practice is not necessarily a good reason to do something, and how much software developers need to understand about LLMs under the hood are all pure gold. You can watch our full talk here.
One thing I did a little differently in this second interview was to take advantage of the O’Reilly learning platform’s live training capabilities to bring in audience questions early in the conversation, weaving them into my own line of questioning rather than leaving them for the end. It worked out really well. Chelsea herself talked about her experience teaching with the O’Reilly platform, and how much she learns from attendee questions. I completely agree.
Additional guests coming up include Matthew Prince of Cloudflare (July 14), who will unpack for us Cloudflare’s surprisingly pervasive role in the infrastructure through which AI is delivered, as well as his fears about AI leading to the death of the web as we know it—and what content developers can do about it (register here); Marily Nika (July 28), the author of Building AI-Powered Products, who will teach us about product management for AI (register here); and Arvind Narayanan (August 12), coauthor of the book AI Snake Oil, who will talk with us about his paper “AI as Normal Technology” and what that means for the prospects of employment in an AI future.
We’ll be publishing a fuller schedule soon. We’re going a bit light over the summer, but we will likely slot in more sessions in response to breaking topics.