Claude Code comes to Roadmap, OpenClaw loses its head, and AI workslop

I’m Matt Burns, Director of Editorial at Insight Media Group. Each week, I round up the most important AI developments, explaining what they mean for people and organizations putting this technology to work. The thesis is simple: workers who learn to use AI will define the next era of their industries, and this newsletter is here to help you be one of them.
This is a big week for Roadmap.sh: our site launched the Claude Code roadmap to support the untold number of users jumping on the popular platform. It’s a comprehensive guide covering everything from vibe coding and agentic loops to MCP servers, plugins, hooks, and subagents. It’s the place to start with Claude Code and AI-assisted coding, even if you’re already vibe coding elsewhere, and it will help anyone jump from casual prompting to real agentic workflows. We’re proud to have it out there, and we think it speaks to how developer skill sets are being rewritten in real time; the platforms mapping that new terrain are the ones that will own it.
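If MCP servers are new to you, it helps to see how small one can be. The sketch below is a minimal, illustrative example using the official Python SDK (the mcp package) to expose a single tool over stdio; the server name and the tool’s logic are placeholders we made up, not anything from the roadmap itself.
```python
from mcp.server.fastmcp import FastMCP

# A toy MCP server exposing one tool. The name and the tool's logic are
# illustrative placeholders, not part of any real service.
mcp = FastMCP("release-notes")

@mcp.tool()
def summarize_release(tag: str) -> str:
    """Return a one-line summary for a release tag (stubbed for the example)."""
    return f"Release {tag}: see the changelog for details."

if __name__ == "__main__":
    # Runs over stdio by default, which is how Claude Code typically
    # talks to locally registered MCP servers.
    mcp.run()
```
Register a server like this with Claude Code (the CLI’s claude mcp add command handles that), and the model can discover and call the tool on its own.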
This shift showed up clearly in this Towards Data Science interview with Stephanie Kirmer, a machine learning engineer with almost a decade in the field. She describes how LLMs have reshaped her daily work: she uses code assistants to bounce ideas around, critique her approaches, and handle the grunt work of unit tests. She’s also candid about their limits, noting that the real value still comes from experience applied to unusual problems. Her take on the broader AI economy is worth sitting with, too. She thinks we’re in a bubble, not because the tech isn’t useful, but because investment expectations are wildly out of proportion. If the industry were willing to accept good returns on moderate investment rather than demand immense returns on gigantic investment, she says, the current AI world could be sustainable.
Productivity Gains or AI Workslop?
AI tools are objectively improving, but are organizations actually becoming more productive? Fortune reported on a study of thousands of C-suite executives who have yet to see AI produce a productivity boom. The executives surveyed invested heavily in AI but are struggling to translate that spending into measurable output gains. It’s a familiar pattern from past technology shifts: the productivity payoff lagged the hype first with PCs, then with the internet and mobile.
Another study released this week points to AI training as the missing link. According to a major CEPR study across 12,000 European firms, the organizations that train their people to use AI are the organizations that benefit from it. The study notes that AI adoption increases labor productivity by 4% on average, with no evidence of short-run employment reductions. Every additional 1% of investment in workforce training amplified AI’s productivity effect by nearly 6%.
In short, the orgs that just buy AI licenses and expect magic are the ones that end up disappointed. Organizations need to invest in both the AI itself and employee upskilling.
AI Gets Cheaper and More Capable
Anthropic this week released Claude Sonnet 4.6, which promises near-Opus-level performance at significantly lower pricing. It even beats Anthropic’s Opus model on several tasks, and according to some benchmarks, it matches or outperforms Google’s Gemini 3 Pro and OpenAI’s GPT-5.2 in several categories. It’s a major upgrade to a mid-tier option.
This should help push AI adoption among organizations still on the fence: near-peak performance is now available at a much lower cost.
On the other side, OpenAI launched Codex Spark, a model built for raw speed, capable of 1,000 tokens per second. It’s designed for rapid prototyping and real-time collaboration, complementing the heavier Codex model for long-running agentic tasks.
Google got in on the fun, too, releasing Gemini 3.1 Pro. While this is still in preview, early benchmarks show that it’s currently far better at solving complex problems than Google’s previous mainstream model. According to Google, the “core intelligence” of Gemini 3.1 Pro comes directly from the Deep Think model, which explains why 3.1 Pro performs so well on reasoning benchmarks.
The pattern is clear: performance rises, prices fall, and the barrier to entry for organizations adopting AI keeps dropping. If cost was the excuse for waiting, that excuse gets weaker with each new release.
OpenClaw: Still Fun, Still Messy, and Now Headless
OpenClaw continues to be one of the most fascinating open source projects in the AI space. If you haven’t been following it, OpenClaw is, in short, a platform that runs Claude Code autonomously as a user’s personal AI assistant. Buy a Mac Mini, install OpenClaw, and let it run your (or its) life. Eivind Kjosbakken published this OpenClaw walkthrough on TDS, and it’s a great practical guide to personalizing its behavior with skills and connecting it to Slack, GitHub, and Gmail. He reports massive efficiency gains within a week. Projects like this reinforce that the era of the AI assistant isn’t just a pipe dream; it’s already becoming a reality.
The project took a turn on Sunday, though. OpenClaw founder Peter Steinberger announced he’s joining OpenAI to work on next-gen personal agents. The good news is OpenClaw is moving to a foundation, and Steinberger says it will remain open source. The less good news involves security. The New Stack reported that Snyk found over 7% of skills on ClawHub, the OpenClaw marketplace, contain flaws that expose sensitive credentials. Meanwhile, Anthropic caused a brief panic before reaffirming its policies: users can still use their Claude accounts to run OpenClaw and similar tools.
The New Stack’s Frederic Lardinois interviewed the founder of NanoClaw, a lightweight alternative to OpenClaw. Gavriel Cohen built his alternative in a weekend after learning about security flaws in the more popular agentic framework. NanoClaw launched on GitHub in late January and now has just under 10,000 stars. The core principle is radical minimalism: a few hundred lines of actual code, a handful of dependencies, and each agent running inside its own container.
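That last design choice is the interesting one. As a rough illustration of the container-per-agent idea, and not NanoClaw’s actual code, isolating agents can be as simple as launching each task in its own throwaway Docker container; the image name, resource limits, and entrypoint below are assumptions made for the sketch.
```python
import subprocess
import uuid

def run_agent_task(task_prompt: str, workdir: str) -> str:
    """Run one agent task in its own throwaway container.

    Illustrates the container-per-agent pattern only; the image name
    ("agent-runtime") and its "run-task" entrypoint are hypothetical.
    """
    name = f"agent-{uuid.uuid4().hex[:8]}"
    result = subprocess.run(
        [
            "docker", "run", "--rm", "--name", name,
            "--network", "none",                # no network access by default
            "--memory", "512m", "--cpus", "1",  # cap resources per agent
            "-v", f"{workdir}:/workspace:rw",   # the agent sees only its own workspace
            "agent-runtime:latest",             # hypothetical image containing the agent
            "run-task", task_prompt,
        ],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(run_agent_task("summarize the open issues", "/tmp/agent-workspace"))
```
The payoff is blast-radius control: a compromised skill or a runaway prompt can only touch the files and resources its container was given, which is exactly the failure mode the ClawHub findings above make worrisome.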
India Is Betting Big on AI
India this week hosted a massive AI event. At the India AI Impact Summit, Replit CEO Amjad Masad put it bluntly: “Two kids in India can now compete with Salesforce.” Also at the event, Adani Group pledged $100 billion towards building AI-ready data centers in India by 2035, partnering with Google, Microsoft, and Flipkart to build what they say will be the world’s largest integrated data center platform.
And if you needed a kicker for how seriously the world’s biggest AI players are taking India, OpenAI’s Sam Altman and Anthropic’s Dario Amodei both showed up on stage at the summit alongside PM Modi. The two seemingly refused to hold hands for the traditional unity pose, raising fists instead while standing right next to each other. It’s a small moment that captures an enormous truth: the rivalry between OpenAI and Anthropic is real, and the stakes are global.
AI’s Biggest Funding Yet
OpenAI and Anthropic are both raising capital at astronomical levels. According to Bloomberg, OpenAI’s latest round is on track to exceed $100 billion, with Amazon expected to invest up to $50 billion, SoftBank up to $30 billion, and Nvidia around $20 billion. The company’s valuation could exceed $850 billion. For context, Anthropic closed a $30 billion round last week at a $380 billion valuation, with annualized revenue hitting $14 billion. Both companies are reportedly preparing for a potential IPO.
There’s more. Fei-Fei Li’s World Labs raised $1 billion for its “spatial intelligence” approach to AI. The pitch: build models that understand 3D worlds rather than just flat text and images. Investors include AMD, Autodesk, and Nvidia. In the past, a billion-dollar raise would have dominated the news cycle; this one barely made it above the fold, which says a lot about how radically the scale of AI investment has shifted.
Ads or No Ads?
One more thread worth watching is how AI companies plan to actually make money. Perplexity announced that it’s ditching ads and going all-in on subscriptions. The reasoning is comforting: Users need to trust that every answer is the best possible answer, not one influenced by an advertiser. Valued at $20 billion with $200 million in ARR, Perplexity is betting that trust can be a moat.
Anthropic clearly feels the same way. The company ran a series of Super Bowl ads that are darkly funny, parodying what happens when your AI assistant starts serving ads mid-conversation. The tagline is sharp: “Ads are coming to AI. But not to Claude.” These spots were a direct shot at OpenAI, which recently began serving contextual ads at the bottom of ChatGPT conversations for free and Go-tier users.
As AI companies experiment with monetization, the takeaway for users is the same: the underlying AI technology is being subsidized, competed over, and made more accessible (and cheaper) nearly every week.
