New Mobile SDK Brings Low-Code Development to On-Device AI

One of the big AI development trends of last year was the shift to on-device inference, in many cases using so-called small language models (SLMs). Google is leading the charge on this, with its “Web AI” initiative and technology such as LiteRT.js, its Web AI runtime.
But this trend toward processing inference on the user’s device isn’t just browser-based. Last month, a company called DataSapien launched a software development kit (SDK) for building on-device AI applications on mobile. The SDK targets both iOS and Android, as well as hybrid frameworks like React Native and Flutter.
Understanding the DataSapien SDK for On-Device AI
To understand where DataSapien’s platform fits in the AI development ecosystem — and how it relates to Web AI — I spoke to DataSapien’s founding CEO, StJohn “Singe” Deakins.
The SDK is a 20MB download, which includes a “Personal Data Store and Intelligence environment.” To use the SDK to build AI apps, developers are given a low-code, drag-and-drop user interface featuring access to “thousands of ML [machine learning] and small language models.” Deakins demoed the UI to me in our meeting, and it did seem like a slick and simple environment in which to build an app.
“You can embed any small language model into the SDK.”
— StJohn Deakins, DataSapien CEO
Since the SDK processes data on a user’s device, I asked Deakins how much of this is through the integration with external AI models and how much is driven by the company’s own algorithms. He explained that the key is orchestrating all of it.
“You can embed any small language model into the SDK,” he said. “So it’s 20MB when [the SDK] is installed, but then obviously adding a model, which might be 300MB. Then we also have an environment for hosting machine learning algorithms, so you might have your own existing recommender [algorithm], and you want to pull that in … And then we’ve also got kind of deterministic rules — like ‘if this, then that’ — and the skill really is in orchestrating the different types of intelligence.”
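The orchestration Deakins describes — deterministic rules, a classic ML recommender, and an embedded SLM working together — can be sketched roughly as follows. This is a hypothetical illustration, not DataSapien’s actual API; every function name here is invented, and the recommender and SLM are stubbed out since the real models run in the device-specific runtime.

```python
# Hypothetical sketch: orchestrating three kinds of on-device "intelligence"
# (deterministic rules, an existing ML recommender, and a small language model).
# All names are illustrative; none come from DataSapien's SDK.

def rule_check(profile: dict) -> list[str]:
    """Deterministic 'if this, then that' rules over the local data store."""
    constraints = []
    if "nut" in profile.get("allergies", []):
        constraints.append("exclude recipes containing nuts")
    return constraints

def recommend(profile: dict) -> list[str]:
    """Stand-in for a developer's existing recommender algorithm."""
    return ["lentil soup", "grilled salmon"]  # a real model would score candidates

def build_slm_prompt(profile: dict, constraints: list[str], candidates: list[str]) -> str:
    """Compose the prompt an embedded SLM would receive; the inference call
    itself is omitted, since the runtime is device-specific."""
    return (
        f"Create a weekly meal plan favoring {', '.join(candidates)}. "
        f"Hard constraints: {'; '.join(constraints) or 'none'}."
    )

profile = {"allergies": ["nut"], "goal": "high protein"}
constraints = rule_check(profile)          # rules run first, deterministically
prompt = build_slm_prompt(profile, constraints, recommend(profile))
print(prompt)
```

The point of the sketch is the ordering: cheap deterministic rules constrain the problem, a conventional model narrows the candidates, and only then is the (comparatively expensive) SLM invoked.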
DataSapien architecture; source: DataSapien.
DataSapien vs. Inrupt and Solid
The key to processing data on the device is the personal data storage, which DataSapien calls the “MeData” vault. This has similarities to Sir Tim Berners-Lee’s company Inrupt, which uses the Solid online storage standard to create what it calls “data wallets.”
In his LinkedIn biography, Deakins explicitly compared his solution to Inrupt’s, claiming that DataSapien is “years ahead of legacy centralized-server approaches such as Solid/Inrupt.” I asked him to expand on that comparison.
After first clarifying that Inrupt is “a wonderful company because they’re trying to do the same sort of thing,” he explained that the issue — as he sees it — is that you “have a centralized store of personal data, and the mobile phone is a client.” So your personal data doesn’t stay on your device, but is sent to various applications for processing.
I suspect Inrupt’s take is that the user still owns and controls their personal data, but that allowing selected external servers to process it enables far more powerful applications. So in a way, it’s an apples-to-oranges comparison. But from a purely personal storage point of view, DataSapien’s approach does seem inherently safer — since a user’s personal data never leaves their device.
Use Cases and the Potential for Agentic AI
While DataSapien has only just launched its SDK, early users of the platform have been in the retail and health sectors. Deakins said that travel and financial services are also use cases they’re exploring.
He showed me a demo of a health app that creates a meal plan based on a user’s preferences — for instance, if a user has a nut allergy, she can enter that information. The meal plan is then created by the user interacting with a small language model on the device. (You can watch this same demo on Deakins’ Loom account.)
In our current AI era, this raises the question: Is this type of app, where an AI guides a user to accomplish a goal or task, a kind of agent? In a company blog post last May, Arda Doğantemur, technical lead at DataSapien, suggested that its platform is capable of agentic functionality: “The rules and ML models can initiate agentic AI loops.” As an example, Doğantemur noted that a rule in a dietary app might “detect a sudden drop in activity, or a deviation from a dietary goal, and proactively trigger a journey or suggestion.”
On autonomous agents: “I think we’re a long, long way away from that, partly because all the backend systems are so complex.”
— StJohn Deakins
However, Deakins is cautious about using the A word for DataSapien’s platform. He characterizes the current state of agentic functionality in the industry as “decision support” rather than fully autonomous agents. But he thinks that as agentic technology matures, it will enable companies — retailers, for example — to build stronger relationships with their customers through apps that use their personal data (with permission!).
“I think we’re a long, long way away from [autonomous agents], partly because all the backend systems are so complex,” he said. “But in terms of this gradual move from the attention economy to the relationship economy — where the AI can actually help brands to build relationships, by helping them to make better decisions and then helping by doing simple tasks — I see that as being kind of a gradient, right? So it’s building trust over time.”
How DataSapien Compares to Google’s Web AI
I noted that Google is doing a lot of on-device AI work under the umbrella of “Web AI,” which primarily focuses on the browser. I asked Deakins whether he has looked into Google’s approach.
He hadn’t, but he then listed some of the same benefits to on-device processing that Google’s Web AI lead Jason Mayes had also mentioned in a previous interview — such as performance and privacy. Perhaps the biggest benefit, though, according to Deakins, is the cost factor.
“… If it’s on-device, you’re using local compute.”
— StJohn Deakins
“Because if I’m sending everything off to Anthropic or OpenAI,” he said, “I’m paying token fees; and if it’s on-device, [I’m] using local compute.”
After our interview, Deakins followed up with some technical notes from a conversation he had with his Engineering Director, Hamit Hasanhocaoğlu (with the disclosure that he ran the notes through Claude and then edited them).
“DataSapien’s Mobile SDK currently uses native on-device inference engines (llama.cpp, Cactus, LiteRT) — all C++ with low-level optimizations for iOS/Android. We’ve built wrappers to enable the SDK to be embedded into React Native, Flutter and KMP [Kotlin Multiplatform] apps.”
He added that DataSapien will “build and extend to a Web App SDK,” using WebAssembly to take advantage of the same mobile inference engines.
The Future of On-Device AI
What DataSapien offers is an enterprise platform for building privacy-focused, AI-centric mobile apps — and perhaps in the near future, agentic apps. It nicely complements what Google is doing with Web AI, and I see plenty of scope for both approaches as AI gets baked into more and more applications.
I especially think SLMs will be a winner for mobile apps (whether native or web), because these models are getting ever more powerful.
The post New Mobile SDK Brings Low-Code Development to On-Device AI appeared first on The New Stack.
