Making Public Services Explorable

It is exceptional that one should be able to acquire the understanding of a process without having previously acquired a deep familiarity with running it, with using it, before one has assimilated it in an instinctive and empirical way. — John von Neumann, “The Mathematician”

A person is not a machine, and should not be forced to think like one.
— Bret Victor, “Learnable Programming”

This is the first in a series of essays about open logic for public services. (You can find the second here.) Together, they argue for public systems that are built to be understood and navigated. This is an argument for open rules, logic, and algorithms, and for interfaces designed to help people learn about the complex systems they rely on every day, so they can improve their lives. This essay focuses on one small entry point into public systems — determining eligibility for subsidized services.

Public systems are designed to enable people to assert rights, get help, and address grievances. In short, public systems are built to help people better themselves: we use them to get what we need, and participate in them to improve or change how they function.

One way to think about how well public systems are working is to look at how well people are using them to improve their lives. To put it even more simply, we're asking how well an activity (engaging with public systems) produces something we value (people improving their lives). Part of this is improving process — what the activity requires of a person. But just as important to the effectiveness of an activity is improving someone's ability — how good a person is at the activity.

Often, public systems design focuses on improving process. Every interaction that a person has with a process is an opportunity for designers to learn about how that process works, and how to improve it. Enormous effort has gone into improving processes this way. But, these interactions are also opportunities to improve a person’s understanding of a process, and their ability to navigate it. Too often, these latter opportunities are ignored. Investing in them can not only change the way public services work, but could upend how people think about and advocate for the policies that shape their world.

Services as slot machines

Take, for instance, subsidized public services, like legal aid. In digitizing these services, lots of valuable effort has gone into online forms for intake — the starting point for a process. These forms tend to have a spray of fields to be filled out and questions to be answered, usually over a series of screens. In other words, they look like every other form on the internet, from signing up for Facebook to taking a quiz about what Hogwarts house you’re in.

Sometimes, an online form is used to generate another, more complex form (in another, fussier format, like PDF) to submit to a court or a government office. From there, one awaits a result of uncertain quality or timeliness.

Other times, a person might directly receive a determination — that they are, or are not, eligible for a service.

Other than the lag time between submission and response, these two patterns of behavior are similar. A person has to complete a form before receiving any sort of result. The internal logic between the form and the service — what sort of service one could get, how the answers one gives will affect service availability or appropriateness, what the next steps are, or whether one is even eligible for a service to begin with — is hidden.

But, a person can’t access services without filling out forms, and answering questions. If a person is unsure about how to answer a question that stands between them and help, they might guess. If their answers don’t yield a positive response, a person might try again, guessing at different responses in hopes that they’ll get a better answer. If they have multiple issues that need help, maybe they’ll guess about which one might fit the best. And so on.

This looks a lot like a slot machine.

It may be that this is a technologically expedient way to manage something like eligibility, saving valuable staff time, and filtering people out who may not be eligible. But forms like this don’t help people understand the government, social, and legal systems they’re engaging with. Forms and walkthroughs that hide their internal logic make every user a new user, every time. Additional problems manifest when an underlying policy or rule changes, and the form silently changes — or worse, doesn’t change, and produces wrong results based on outdated information. To a user, the slot machine stays the same. This is fine for quizzes about Hogwarts houses (I’m a Ravenclaw myself), but less fine when helping a person determine whether they’re eligible for public services.

Eligibility as interpretation

A concrete example. In the US, income and household size are major components of a person’s eligibility for social services (such as subsidized legal aid). A form might capture that information in two fields, like the one we saw earlier:

Over the last year and a half, I've probably had conversations with around two dozen people who work in or around intake and triage for social and legal service providers, or call centers. By far, the most consistent concern I hear about a form like this one is that eligible people may mistakenly self-disqualify if they fill it out alone. For instance, a potential client might not include a household member who isn't an immediate family member (or who is in the country illegally), but who lives with them nonetheless. Calculating monthly income isn't simple either — low-income clients are likely to have inconsistent income, and their eligibility might change depending on the timeframe of income being measured. Someone may also unfavorably calculate their income by including forms of government assistance that sometimes count, and sometimes don't.

To the above form’s credit, it tries to mitigate this with additional text, although hidden behind a help bubble:

Generally, your household includes you, your spouse, and your children under age 18. If you live with other adults, include them if you are supporting them or if they are supporting you or your children. Income includes earnings and cash benefits of household members. It does not include SNAP/Food Stamps.

Still, the critique persists: human interviewers can probe these edge cases — and navigate eligibility requirements that vary between organizations — in ways that help people maximize their eligibility for assistance.

(It should be noted that even once someone fills out one of these forms, or talks to a person, they'll still need to provide verification of their income and household size. At this early stage, we should err on the side of qualification, to prevent eligible people from self-disqualifying.)
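To make the timeframe problem concrete, here's a small sketch. Every number in it is invented for illustration — the cutoff and paychecks aren't drawn from any real eligibility guideline — but it shows how the same worker can land on either side of an income threshold depending on how "monthly income" is measured:

```javascript
// Hypothetical numbers throughout: the cutoff and paychecks are invented
// for illustration, not drawn from any real eligibility guideline.
const MONTHLY_CUTOFF = 1680; // invented cutoff for some household size

// Six biweekly paychecks over three months, for a worker with variable hours.
const recentPaychecks = [900, 650, 1100, 800, 950, 700];

// Timeframe A: only the most recent month (the last two paychecks).
function lastMonthIncome(paychecks) {
  return paychecks.slice(-2).reduce((sum, p) => sum + p, 0);
}

// Timeframe B: average monthly income over the whole period.
function averagedMonthlyIncome(paychecks, months) {
  return paychecks.reduce((sum, p) => sum + p, 0) / months;
}

const lastMonth = lastMonthIncome(recentPaychecks);         // 950 + 700 = 1650
const averaged = averagedMonthlyIncome(recentPaychecks, 3); // 5100 / 3 = 1700

console.log(lastMonth <= MONTHLY_CUTOFF); // true: eligible on last month's income
console.log(averaged <= MONTHLY_CUTOFF);  // false: ineligible on the average
```

Neither figure is more "correct" than the other; which timeframe an intake process happens to use can decide whether this person qualifies at all.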

Designing for activities

A digital form like this might have come out of a human-centered design approach, which could have found that users believed filling out paper forms to be difficult and inconvenient, or that intake staff spent lots of time on the phone. But this is a shallow insight. The core activity we are trying to improve is not filling out a form, but acquiring a service, or more broadly, getting the best help available. In other words, finding help isn’t a neat process, but a messy one, and we should design accordingly, accounting for all possible errors rather than simplifying for simplicity’s sake. As Don Norman writes:

One way to do this is to look at all the error messages, determine why they might arise, and redesign so that they either never appear, or if they might, that they are transformed into assistance. Not “help” which tells the person what should have been done, but “assistance” which offers the proper action and makes it so easy to proceed that the person might deliberately type incomplete information to get the guidance.

Remember: the “perfect” behavior seldom arises. Almost every situation is a special case of one sort or another. So design for the special cases, design to eliminate error messages.

I believe that we should increase our focus upon the tasks and activities to be accomplished and reduce the focus on these cute but design-empty scenarios and personas. If I truly understand the task, if I truly understand the mixture of tasks that together comprise an activity, and if I truly understand the interruptions, ill-defined nature of most people’s approach to their activities, then I can provide far better support than if I focus upon the training, age, or personality of the individual people who might use it.

Here, for instance, a single form field for monthly income carries with it the assumption that a person has a consistent monthly income. Not only is this assumption wrong, it leaves any deviation from that assumption — translating from hourly rates, or work with tips (or cash under the table), or contract work, or shift work — as a task for the person filling out the form. In other words, these rules have room for interpretation, and it's left to the potential client to 1) figure that out and 2) make the best case for themselves under those rules. That a user still may not be able to use a form like this without assistance is not their fault — it's the fault of poor design. Norman, with Pieter Jan Stappers, continues:

There is a tendency to design complex sociotechnical systems around technological requirements, with the technology doing whatever it is capable of, leaving people to do the rest. The real problem is not that people err; it is that they err because the system design asks them to do tasks they are ill suited for.

Criticisms of these forms are correct, but don't go far enough. It may be that talking to a human interviewer maximizes someone's eligibility for services like legal aid. (It's part of why we've focused on projects that assist interviewers, rather than tools used directly by clients.) But the slot machine persists — phone interviews still don't help a client learn about how eligibility is determined, or how someone decides whether to take their case, or how to navigate the complex system they're being introduced to. Rules exist to capture the nuances of someone's situation, and express them in terms of eligibility for services, but those rules are still being hidden. In both cases, a client is providing information to a black box, and then receiving information back about their eligibility, without much sense of how they've gone from A to B, or why.

For a person faced with using a system to find help, this is the difference between understanding a system and being shunted through it. If, for instance, low-income Americans face three legal issues a year, shouldn’t we aspire to build a legal system that helps people get better at resolving those issues each time?

Eligibility as exploration

From this, we might suggest that good interfaces for public services should: 1) show their logic, 2) turn imperfections into assistance, and 3) help users understand the system behind the interface.

These are somewhat controversial arguments, and I'll dive into them in more detail in future pieces. For now, though, an exercise: what might a good interface look like for, say, eligibility? Let's run with the fields from earlier in the post. We'll recast them into a form of our own:

We might start by putting the result of a person’s input next to the input itself. That result could update when someone enters new information. Now we’re at this:


This still hides the ball a little. We could also add an explanation about what the income cutoffs are for each household size. This explanation could also change as the household size changes.


We could also make this more explorable, so that someone could more easily manipulate different income and household sizes, and see what the results are.
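The logic behind such an explorable field can be tiny. Here's a minimal sketch, assuming a made-up base cutoff and per-member increase (a real form would load the current guidelines for its jurisdiction and program):

```javascript
// Sketch of the logic behind an explorable eligibility field. The cutoffs
// are hypothetical placeholders, not real poverty guidelines.
const BASE_MONTHLY_CUTOFF = 1300; // invented cutoff for a household of 1
const PER_MEMBER_INCREASE = 460;  // invented increase per additional member

function monthlyCutoff(householdSize) {
  return BASE_MONTHLY_CUTOFF + PER_MEMBER_INCREASE * (householdSize - 1);
}

// Instead of a silent yes/no, surface the rule itself, so a person can
// see how the result changes as they change their answers.
function explain(householdSize, monthlyIncome) {
  const cutoff = monthlyCutoff(householdSize);
  const verdict = monthlyIncome <= cutoff ? "likely eligible" : "likely not eligible";
  return `For a household of ${householdSize}, the monthly income cutoff is ` +
         `$${cutoff}. At $${monthlyIncome} per month, you are ${verdict}.`;
}

// Re-rendering this string on every input change gives the live,
// explorable behavior described above.
console.log(explain(2, 2000)); // likely not eligible
console.log(explain(3, 2000)); // counting one more household member flips the result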


We could find a better way to phrase household size, and add explanations inline instead of hiding them behind question marks. We could also use friendly prose, instead of terse labels.


Finally, we could add a small adjustment for different pay schedules.
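That adjustment is simple arithmetic the form can do for the applicant, using standard calendar conversions (how a given agency actually annualizes irregular income varies, so treat the multipliers as illustrative):

```javascript
// Normalizing different pay schedules to a monthly figure, so the form
// does the arithmetic rather than the applicant.
const PERIODS_PER_MONTH = {
  weekly: 52 / 12,   // about 4.33 paychecks per month
  biweekly: 26 / 12, // about 2.17
  twiceMonthly: 2,
  monthly: 1,
};

function toMonthly(amount, schedule) {
  const multiplier = PERIODS_PER_MONTH[schedule];
  if (multiplier === undefined) {
    throw new Error(`Unknown pay schedule: ${schedule}`);
  }
  return amount * multiplier;
}

// A $500 weekly paycheck and a $1000 biweekly paycheck are the same
// monthly income, about $2166.67, even though the raw numbers differ.
console.log(toMonthly(500, "weekly"));
console.log(toMonthly(1000, "biweekly"));
```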


We could keep going — there’s plenty of room to build on this. We could add more detailed income adjustments, allowing people to input multiple paychecks, over a varied period of time, and for every household member. We could include information on what someone would need to bring as proof of income if they qualify, or offer alternate resources if they don’t qualify. If this form was designed to help someone find help at multiple organizations, we could visually mark organizations by estimated eligibility, instead of hiding organizations that someone might not be eligible for. And all of this could happen before the form is actually submitted.

Building for understanding

The point is that the design is, to paraphrase Norman again, focused on the experience of the core activity (finding help), rather than the cleanliness of individual screens.

Optimizing for the activity of finding help presents a different, broader design question: it invites us to not only improve the process of finding help (i.e. the requirements of the activity), but also to improve someone’s ability to find help (i.e. actually conducting the activity).

The goal of this form, then, is not only to filter people out of services; it's to help people understand how to get services, and how the system works. In other words, good interfaces do more than build processes — they improve ability, and help people build mental models of the underlying system, even if it's complex. One field that does this well is game design. Will Wright, designer of SimCity, explains in an interview how helping people pick up complex systems is grounded in exploratory learning:

As a player, a lot of what you're trying to do is reverse engineer the simulation. You're trying to solve problems within the system, you're trying to solve traffic in SimCity, or get somebody in The Sims to get married or whatever. The more accurately you can model that simulation in your head, the better your strategies are going to be going forward. So what we're trying to do as designers is build up these mental models in the player. The computer is just an incremental step, an intermediate model to the model in the player's head. The player has to be able to bootstrap themselves into understanding that model. You've got this elaborate system with thousands of variables, and you can't just dump it on the user or else they're totally lost.

Here, the form’s logic is front and center, and presented in a way that the user can explore and begin to understand it. A user can manipulate the fields prior to submission, and see the result of that manipulation immediately. Potential imperfect entries are turned immediately into assistance, and explanations are tightly integrated into the form itself, rather than hidden behind help icons. To paraphrase Bret Victor, who created the library that powers these examples, we strive to make the meaning of each field transparent, and explain them in the context of a larger service.

Rewriting the rules

Forms are interfaces to complex systems that people turn to in times of need — systems of government, of legal aid, of social services — of policy. Making these interfaces manipulable, understandable, and explorable can not only improve how people access help, but it can improve people’s understanding of the policies and systems that impact their lives. As it stands, these potentially rich interactions are replaced by a passive receipt of whatever opaque output the slot machine yields.

This approach — transparent, embedded logic, and assistive, exploratory interfaces — isn't just a way to approach forms. All public services, digital or otherwise, should be like this. Services are manifestations of policy, and their interfaces should be explorable and educational. They should not be slot machines, dependent on whatever answer one was fortunate enough to select, or what door one was fortunate enough to enter the system through. This impacts participation, as well: when someone is only shown outputs, absent process, their ability to participate in policy decisions is reduced to whatever reward they receive (or don't receive), as opposed to judging the process that yielded that reward in the first place. Hiding the logic behind digital forms and services makes government less transparent, and makes it more difficult for citizens to affect policy. We can — and must — do better, and aspire not to hide complexity, but to help people bootstrap models for understanding it. The question, then, is not whether someone should use a computer or talk to a person, but how we can build systems that complement the strengths of both human and digital interfaces, in a way that maximizes everyone's ability to find help.

Digitizing services offers the opportunity to introduce new paradigms for interaction between users and government. We squander that opportunity when we produce mere digital versions of analog interfaces, most of which weren't all that good to begin with. Interfaces to public systems should not merely be data collection fields that hide a larger policy machine, but a glimpse into the machine itself, with the wires and gears exposed, so that a citizen can more fully understand, navigate, and influence the systems they live in.

In a future essay, I’ll talk more about the importance of exposing a system’s logic and rules, what new behaviors we might expect from doing so, and how hiding those rules interferes with participation and increases reliance on middlemen.

The source for these examples can be found here.

For further reading, read this excellent piece on learnable programming, and the accompanying Tangle.js, to which this essay owes a great debt.

Lawyer, technologist. Affiliate at Berkman Klein Center & Duke Center on Law and Technology. Adjunct prof. at Georgetown Law.
