This essay explores visually how keeping our data and AI on‑device could empower not only users, but also companies.
The cloud cripples your data. Instagram has your photos, iMessage your messages and Google your documents. By splitting up our data, we prevent any AI from truly knowing us as individuals. And by giving away all control, we relegate ourselves to mindless drivers of engagement. What have we gained from sharing our lives with Facebook? If data is the new oil, where are our cars?
Algorithms will soon influence every part of our world. Google and Facebook already use AI to decide what you see, and by extension what you think, feel and do. Do we really want to base our behaviour on systems we can’t understand and can’t control?
I believe that users having control over their data will let companies achieve the full potential of AI. By designing technology to align with user values, we can reignite our confidence in technology that amplifies and advances humanity.
We need to rethink the relationship between our data and the services that use it.
First of all, any solution has to benefit both users and companies. Many talented developers are already working to solve the technical challenges of decentralized systems. But I think that, at its core, this is also a design problem. So I would like to show you how on-device data handling enables new concepts that lead to better AI and a better user experience.
We start by keeping our most important data with us, on our devices. So if a company wants to access it, it has to ask for your permission – even if you use its app. Apps then become interfaces to view and interact with our data on-device.
That means you stay in control. You don’t have to decide whether you trust a company with your data. And just as importantly, it means that apps can use each other’s data without raising privacy concerns.
Let’s call it our
Circle of Knowledge
Apps inside the circle have on-device access to each other’s data. Apps outside the circle keep their data separate. It’s a simple mental model, on which we can build an intuitive interface.
When you drag an app out of the circle, it can’t use data from other apps and vice versa. For these apps, you can then set individual permissions that cut off specific apps or sensitive data.
The circle is built on a set of standardized open data types. Apps inside the circle make a simple deal: they get on-device access to all data of these types from other apps in the circle. In return, they have to store their own data of these types on-device (and on-device only) and open it up for other apps to use. The set of data types could grow over time, but these could be a start:
Data then no longer lives in a sandbox. Apps can use all data created by the user, regardless of its origin. And you can switch between services without losing data.
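As an illustration, the circle’s access rule described above could be sketched roughly like this (all app names, type names and the permission fields are hypothetical – a sketch of the mental model, not an implementation):

```python
# Toy sketch of the circle's access rule: apps inside the circle share
# open data types; apps dragged outside (or cut off by individual
# permissions) can't read or be read. All names are hypothetical.

OPEN_DATA_TYPES = {"photos", "messages", "notes", "locations"}  # assumed examples

class App:
    def __init__(self, name, in_circle=True, blocked_types=None):
        self.name = name
        self.in_circle = in_circle                   # dragged inside or outside the circle
        self.blocked_types = blocked_types or set()  # individual permissions

def can_read(reader, owner, data_type):
    """An app may read another app's data only if both are in the circle,
    the type is an open data type, and no individual permission cuts it off."""
    if data_type not in OPEN_DATA_TYPES:
        return False
    if not (reader.in_circle and owner.in_circle):
        return False
    if data_type in reader.blocked_types or data_type in owner.blocked_types:
        return False
    return True

spotify = App("Spotify")
safari = App("Safari")
bank = App("Bank", in_circle=False)   # user dragged it out of the circle

print(can_read(spotify, safari, "notes"))  # True: both inside the circle
print(can_read(spotify, bank, "notes"))    # False: Bank keeps its data separate
```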
It’s like all apps are in the same room.
YouTube knows that you just watched Coco on Netflix.
Safari knows what’s left on your Things to-do list for the day.
Snapchat knows about the conversations you have with your best friend on WhatsApp.
Amazon knows the jacket you raved about on Instagram.
Evernote knows about the hundreds of notes you made in Google Keep.
Spotify knows the song lyrics you looked up with Safari.
Photos knows your friends are reminiscing about last weekend’s hiking trip on Facebook.
By keeping data on-device, developers can build more personal and modular apps. So when you install a new app, it doesn’t have to start with a blank slate. The app already knows you: what you have done before, how you use other apps and how you want to live your life.
That’s where on-device AI comes in. All data from apps inside the circle can be used for system-wide machine learning. Data sets that are today spread across dozens of services, each with their own little AI, can be combined on-device to create a comprehensive view of your behaviour. This becomes possible through technologies like the iPhone’s neural engine and Google’s federated learning, which have made on-device machine learning very capable.
💬 With federated learning, the results of on-device AI are encrypted and uploaded to a server together with data from other users. There, they are applied to a larger AI model that is then shared with all other users. This results in less latency, less energy consumption, and better privacy. Federated learning is already used in Google products like Gboard. It combines the advantages of a universally shared model with the ease of processing data on-device. Learn more about federated learning on Google’s AI blog.
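The mechanism described in the box can be sketched as a minimal federated-averaging loop. This is a toy 1-D model with plain Python lists; real systems like Gboard’s add encryption, secure aggregation and far larger models:

```python
# Minimal sketch of federated averaging: each device trains locally,
# only model updates (never the raw data) reach the server, and the
# server averages them into a shared global model.

def local_update(weights, local_data, lr=0.1):
    """One on-device training pass; raw data never leaves this function."""
    new = list(weights)
    for x, y in local_data:
        pred = new[0] * x
        grad = 2 * (pred - y) * x   # gradient of squared error
        new[0] -= lr * grad
    return new

def federated_round(global_weights, devices):
    """The server averages the updates contributed by all devices."""
    updates = [local_update(global_weights, data) for data in devices]
    return [sum(w[i] for w in updates) / len(updates)
            for i in range(len(global_weights))]

# Three devices, each with private data drawn from y = 2x.
devices = [[(1.0, 2.0)], [(2.0, 4.0)], [(3.0, 6.0)]]
weights = [0.0]
for _ in range(50):
    weights = federated_round(weights, devices)
print(round(weights[0], 2))  # converges toward 2.0
```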
Fueled by on-device data, on-device AI can erode the viability of traditional, proprietary data collection. Apps that want to be part of the circle would be required to support on-device data themselves and to build on open data types. Over time, this could convince even entrenched companies to give up on collecting all of your data.
When you first open an app that supports on-device access, you get a confirmation that it will be part of the circle. You can toggle it off to keep the app outside the circle. It’s a simple reassurance that everything is under control.
This mental model is more playful and natural to think in than a list of permissions, and it’s reinforced with every new app you install. The user experience leads to users trusting and preferring apps that use on-device data, further pushing companies to move away from legacy data collection.
However, storing data on-device requires us to reconsider how we share it with others and sync it across devices.
You can be in control of your data and still benefit from the convenience of the cloud. Since data in the circle is processed on-device, it doesn’t have to be readable on cloud servers. Instead, we use end-to-end encryption to make sure it’s only accessible by the people you share it with.
💬 End-to-end encryption ensures that data is encrypted on the sender’s device and can only be decrypted by its intended recipients, preventing interception by third parties. To decrypt data, users need the matching key. This is also called “zero knowledge”, since not even the service provider can access your original data. Services like WhatsApp, iMessage and PGP use end-to-end encryption.
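As a toy illustration of the flow – emphatically not secure cryptography; a real app would use a vetted protocol such as the Signal protocol – the point is that the cloud provider only ever stores ciphertext:

```python
# Toy illustration of end-to-end encryption: the cloud provider stores
# only ciphertext; only holders of the shared key can decrypt.
# NOT secure crypto -- the keystream construction here is illustrative only.
import hashlib
import secrets

def keystream(key, length):
    """Derive a pseudo-random keystream from the key (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key, plaintext):
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # a XOR stream cipher is its own inverse

key = secrets.token_bytes(32)          # shared only with intended recipients
message = b"hiking photos from last weekend"
ciphertext = encrypt(key, message)     # this is all the cloud provider sees

assert ciphertext != message
assert decrypt(key, ciphertext) == message
```

Whether the ciphertext sits with Google, Dropbox or a small local provider makes no difference to its confidentiality – which is exactly what makes storage a commodity in this model.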
Your data is then independent of any specific cloud provider. It doesn’t matter whether it’s stored by Google, Dropbox, a distributed network or a small local provider. They would all be part of the same standard, acting as commodity storage for your encrypted data. You can keep everything in one place, or switch cloud providers for data types you want to keep separate.
This system enables user privacy and control, while being every bit as convenient and powerful as existing cloud services.
💬 Services like Facebook could still provide free cloud storage for data types supported by their app. They would behave like any other cloud provider, so your data is still encrypted and not readable by Facebook. For small companies, this system removes the complexity of maintaining a secure cloud infrastructure.
Sometimes, we do want to share our data with external organizations. Lyft might want location data to coordinate its cars, Facebook could keep around features that rely on data collection, and doctors could request health data.
These cases require your data to be stored and processed on a server you have no control over. Handing over your data is hard to reverse, and can have lasting consequences. So when it happens, you should know exactly what’s at stake.
In many ways, sharing your data is similar to a payment transaction. You wouldn’t buy something without knowing what you get and how much you pay. Likewise, you should understand how data will be processed (and what that means for you) before giving it away.
What if we treat our data like our money?
Users are familiar with the significance of this interface from Apple Pay and know to treat it differently from normal notifications. It expands to an overview in the settings, where you can see all companies you have shared data with, look into the details, and revoke permits.
The split between on-device access (through the circle) and external sharing (through permits) is critical for helping users understand and control how their data is handled. Powerful on-device access as the default lessens the need for external sharing, allowing us to shine a spotlight on the choices that truly matter.
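The permit side of that split could be sketched as a simple, revocable record that the system keeps on the user’s behalf (all field names and the ledger API are hypothetical illustrations):

```python
# Toy sketch of data permits: an explicit, revocable record of what was
# shared with an external organization and why. All fields are hypothetical.
from dataclasses import dataclass

@dataclass
class DataPermit:
    organization: str
    data_types: set
    purpose: str
    active: bool = True

    def revoke(self):
        """Revoking stops future access; data already handed over may persist,
        which is why the permit prompt has to make the stakes clear up front."""
        self.active = False

class PermitLedger:
    def __init__(self):
        self.permits = []

    def grant(self, organization, data_types, purpose):
        permit = DataPermit(organization, set(data_types), purpose)
        self.permits.append(permit)
        return permit

    def allows(self, organization, data_type):
        return any(p.active and p.organization == organization
                   and data_type in p.data_types
                   for p in self.permits)

ledger = PermitLedger()
permit = ledger.grant("Lyft", {"location"}, "coordinate rides")
print(ledger.allows("Lyft", "location"))  # True
permit.revoke()
print(ledger.allows("Lyft", "location"))  # False
```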
💡 Thinking further: What if users could actually modify these permits? The user could decide exactly what data is shared, whether it’s anonymized, and what it can be used for. Is this possible technically? Which decisions should be made by the user, and which are better left to the companies? How would an interface work that doesn’t overwhelm the user?
This is about more than privacy. Tech companies have lost the trust of their users.
Today, users face a deliberate absence of clarity about what happens to their data in “the cloud”. It allowed Facebook and others to build empires without worrying their users. But it has now backfired dramatically, as confusion has grown into helplessness and wariness of all new technology – regardless of the intentions behind it.
Companies need to treat users like adults and design systems that help users understand how their data is handled. Only then can they rebuild the confidence in technology that they need for a future where data is inherent to everything we do.
Today, every company has a few pieces of your data. That system might work for making you click on ads, but it can’t power an AI that truly understands you. Imagine what would be possible if your phone knew all about you.
Apps inside the circle use and contribute to a system-wide profile. It uses on-device AI to analyze the open data types and metadata reported by apps in the circle. The profile keeps track of all your interests, which apps can use for their own AI-based features (without any data leaving your device!). It lets users understand and engage with AI through a single interface.
The spatial interface offers a playful way to explore what the AI has learned. Instead of strict categories and rankings, the profile places related interests near each other and highlights common themes. It’s a map of yourself – it grows as you grow, mirroring how you think about your interests.
Tap on an interest to view the data behind it. A calculated interest level shows how each interest is weighted. You can raise it through boosts, or remove the interest altogether.
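The interest levels and boosts could work roughly like this sketch (the scoring formula and the weight a boost carries are made-up illustrations, not a proposal for the real weighting):

```python
# Toy sketch of the profile's interest levels: each interest is weighted
# by the data points behind it, and the user can boost or remove it.
# The scoring formula is a made-up illustration.

class Interest:
    def __init__(self, name):
        self.name = name
        self.data_points = 0   # signals reported by apps in the circle
        self.boost = 0         # manual boosts set by the user

    def level(self):
        """Interest level grows with evidence; a boost raises it directly."""
        return self.data_points + 10 * self.boost

class Profile:
    def __init__(self):
        self.interests = {}

    def record(self, name, count=1):
        interest = self.interests.setdefault(name, Interest(name))
        interest.data_points += count

    def remove(self, name):
        self.interests.pop(name, None)   # the user removes the interest entirely

profile = Profile()
profile.record("hiking", 5)                  # five signals from apps
profile.interests["hiking"].boost += 1       # one manual boost
print(profile.interests["hiking"].level())   # 15
profile.remove("hiking")
print("hiking" in profile.interests)         # False
```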
It’s a good start. But in the long term, the profile could evolve to be about much more than interests.
We need technology that knows us as more than engagement numbers. Instead of distracting us from our goals, it should empower us to live the life we aim for. This requires an AI with deep insights into our life – insights that can’t be gathered by any single app or service.
Technology should understand our behavior beyond the surface of impulsive taps and swipes.
As AI technology advances, the profile could eventually include your values, opinions, habits, routines, desires and relationships. Then, the AI wouldn’t just know which content best captures your fleeting attention. It would have a real understanding of what you do, why you do it, and how it can help.
💡 Thinking further: A profile that aims to really understand the user is inherently complex. Assuming the technology gets there, how much of that complexity should be surfaced to the user? Can we build abstract models of what the AI learns in ways that everyone can understand? If we then want the user to be in control, do we need interfaces that visualize our opinions, habits and desires? How do we show connections between these different kinds of data?
The result would be an OS that always knows your status, and the context of what you are trying to do. Apps can then use these insights to tailor their experience to your current situation.
Slack knows when you are “in the zone” and doesn’t interrupt.
Google Maps knows that you spent the morning planning your trip to Amsterdam.
Mail knows you are trying to be more patient with your coworkers.
Facebook knows that you are scrambling to finish the presentation for tomorrow.
Photos knows that you have just gotten over the person you spent this day one year ago with.
Things knows that you prefer to run errands after lunch.
App Store knows which apps worked out best for people with a similar profile.
Siri knows that the phone is placed on the table at a café right now, where you are meeting with your colleague and longtime friend David to go through the notes from yesterday’s design review.
This kind of profile becomes viable through on-device data and on-device AI. It’s a long-term foundation for technology that thrives when it supports your values, that rejects clickbait, made-up news, and sensationalism.
Power over your profile is crucial then. As the role of AI grows, so does the importance of control and transparency. If we want an AI to understand us, we must first make sure that we understand the AI.
You shouldn’t have to stumble through settings to find out how apps use your profile and your data. Transparency is most useful in the immediate context of your actions.
On iOS, Siri is the face of AI. It represents both system-level features and third-party apps, and it’s always just a button press away. Siri should be accountable. It should tell you why AI decisions are made, how your data is handled, and help you change these behaviors.
Siri is also available inline as you interact with your data or with content shown through the AI.
These UI metaphors for controlling your data and the AI scale across different contexts, building a cohesive mental model for the circle, data permits and your profile.
💡 Thinking further: On-device data is great for building a comprehensive profile. But some things are difficult to figure out through data alone. Often, it’s best to just ask the user – if only to confirm suspicions of the AI. But how do we make sure these prompts aren’t intrusive or annoying? They shouldn’t feel like a chore, but they do need to provide insights that might be uncomfortable for the user. I have toyed with the idea of having Siri ask the user simple questions, but it’s really difficult to phrase them right. You can find my mockups here.
Let’s recap the ideas we went through.
The cloud cripples your data. AI needs all your data in one place. On-device instead of cloud. You stay in control.
Circle of Knowledge – All apps in the same room. Apps in the circle can use each other’s data. Open data types. No more sandbox. On-device AI through federated learning. Apps need to contribute to participate.
Data Permits – Treat data like money. Encrypted storage through cloud providers. Know what’s at stake when sharing data. On-device as the default allows for detailed permits. Transparency leads to trust and confidence.
Profile – A map of yourself. Interests. System-wide for apps in the circle. Spatial interface. Evolves to learn our values, habits, desires. Apps know the context. Foundation for understanding beyond impulsive engagement.
Transparency – Siri as the face of AI. Transparency is most useful in context. Siri should be accountable. Through voice and inline.
Cambridge Analytica. GDPR. Time Well Spent. Last year, it became clear how important it is for us to be in control of the technology we use. The common thread is a failing of technology to treat us as individuals, to respect our rights and support our values.
This used to be an issue of privacy, but we have moved well beyond that. Today, we have a unique opportunity to connect the dots. Users feel helpless against increasingly imposing technology, while designers and developers are sick of selling out their users. On the other hand, we have decades of research on decentralized technology, powerful computers in our pockets and AI technology that is severely limited by the cloud.
Data is mighty. It can empower us, but it can also be used against us. The path we are currently on leads to a single company owning all of our data. This would crush competition, our freedom and democracy. We need to be deliberate and thoughtful in building the foundation of technology for the next decades.
This will require broad collaboration between companies. Ultimately, the goal would be to create open, cross-platform standards for the ideas outlined here. But the principles behind my designs can take many forms. Together, we can figure out systems that let us control our data, use AI to empower users, and build the future of technology without sacrificing our humanity.
I’m an interface designer from Berlin. In 2016, I interned at Basecamp and Google after publishing Desktop Neo. I am now working on Muse, a tool for thought on iPad.