In the early days, “software is eating the world” meant that the real power went to people who could bend tools to their will, either by writing their own programs or by scripting the ones they used. Now, with large language models in the mix, the question isn’t just what your app does out of the box, but what you allow AI tools to do with it.
In this post I look at two clear patterns for making software AI-friendly:
- Expose a command surface (such as the Model Context Protocol, MCP) so an LLM can call your app’s functions directly; a sketch follows this list.
- Expose a programmable surface (an SDK, DSL, or low-code layer) and let the LLM write code that uses it; a second sketch follows.
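
To make the first pattern concrete, here is a minimal sketch of a command surface, assuming the official MCP Python SDK and its FastMCP helper. The invoicing server and the `create_invoice` tool are invented for illustration; they are not from the post.

```python
# Minimal MCP server sketch: exposes one app function as a callable tool.
# Assumes the official MCP Python SDK (`pip install mcp`); the server name
# and the create_invoice tool are hypothetical placeholders.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("invoicing-app")

@mcp.tool()
def create_invoice(customer_id: str, amount_cents: int, memo: str = "") -> str:
    """Create a draft invoice for a customer and return a confirmation."""
    # In a real app this would call into your existing business logic.
    return f"created draft invoice inv_123 for {customer_id} ({amount_cents} cents)"

if __name__ == "__main__":
    # Runs over stdio so an MCP-capable client (e.g. an LLM agent)
    # can discover and call create_invoice directly.
    mcp.run()
```

The LLM never sees your internals here; it only gets the tool’s name, parameters, and docstring, and calls it one invocation at a time.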
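For contrast, the second pattern hands the LLM a programmable surface and lets it compose calls with ordinary control flow. The tiny SDK below is a hypothetical stub, defined inline so the example runs; a real product would ship it as a proper package.

```python
# Programmable-surface sketch: the app ships an SDK and the LLM writes code
# against it. InvoiceAPI and its data are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Invoice:
    id: str
    amount_cents: int
    status: str

@dataclass
class InvoiceAPI:
    _store: list[Invoice] = field(default_factory=lambda: [
        Invoice("inv_1", 80_000, "overdue"),
        Invoice("inv_2", 12_000, "overdue"),
    ])

    def list(self, status: str) -> list[Invoice]:
        return [i for i in self._store if i.status == status]

    def send_reminder(self, invoice_id: str, template: str) -> None:
        print(f"reminder sent for {invoice_id} using {template!r} template")

# The kind of script an LLM might generate against that surface:
invoices = InvoiceAPI()
for invoice in invoices.list(status="overdue"):
    template = "firm" if invoice.amount_cents > 50_000 else "friendly"
    invoices.send_reminder(invoice.id, template=template)
```

Here the value is composition: the LLM can branch, loop, and chain SDK calls in ways you never enumerated as individual commands.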
Each has strengths and trade-offs, and both have a place in most serious products. The key is deciding where to draw the line, and making sure the door you open matches the way your users (and their AIs) actually work.
submitted by /u/WifeEyedFascination