Society-Centered Design Is A New Approach to Tech Development
"By recognizing that human-centered design is problematic in the way that it focuses on individuals, on consumers, we can recognize that we need a new framework to ask different questions."
Deep cracks have been appearing in the Ayn Rand-inspired radical individualism on which the Silicon Valley ethos was founded. Amongst the many paradigm shifts Covid-19 is triggering is an end to the era of solipsism. In its place, a return to the collective is underway, born from the recognition of how intimately intertwined our planet’s ecosystems are. In this new age, old models of thinking, designing, and coding no longer fit.
Paving the way for new frameworks is London-based IF, a technology studio founded in 2016 in pursuit of building a more ethical information society. Specializing in the practical application of data and AI ethics, IF’s researchers, technologists and designers produce toolkits, patterns catalogs and other resources that provide an alternative to the convoluted Ts and Cs and opaque data use policies that have long been entrenched as the status quo.
One of their latest projects takes the form of a manifesto: Society-Centered Design, a vision for data ethics that goes beyond the individualism of ‘human-centered design’ to consider the wider implications of data at a societal level. Based on values such as compassion, representation and the civic commons, Society-Centered Design outlines an alternative vision to the status quo; one in which data can be used to empower rather than exploit.
In conversation with Anna Dorothea Ker, IF designer Georgina Bourke shares the limitations of human-centered design thinking, why the time has come to broaden our collective horizons, and how the Society-Centered Design manifesto aims to enable new dimensions in perspective.
ADK: What was the catalyst for creating the Society-Centered Design manifesto?
GB: It was a culmination of looking at the way that we were running projects, and reflecting on the set of values that we brought to new briefs. We see ourselves as an activist business and our values are at the core of IF’s existence. Our vision is for data to be used in ways that empower people, rather than exploit them. We believe data-driven services should care for people and make lives better, rather than surveil and monetize them in the kind of dystopian futures that we were seeing around us.
So we were questioning what we were doing and how our approach was different, and explaining that to people – both the wider community as well as clients and practitioners. We came to the conclusion internally that we needed a new language to describe our process. The concepts behind society-centered design are not something IF invented. We took influences from the community – people are already thinking about this – but we saw a need for a new language and a new lens for approaching design and considering the needs of the collective.
Ultimately, it’s about moving from the individual perspective that you see in traditional frameworks for designing products and services. By traditional frameworks, I mean human-centered design, and similar approaches.
ADK: Can you unpack the concept of human-centered design – and the gap its shortcomings leave that you’re aiming to fill?
GB: Human-centered design is a way of shifting individual users – or customers – to the center of your design process. It’s about understanding how they live, what their individual needs are, and designing products and services that directly respond to those needs, in the context of the organization you’re designing for, and its agenda. This has led to products and services being optimized mainly for efficiency and profit. Yes, they are tied to individual needs, but only if they succeed in the marketplace. This has led to some progress – things have become easier to use, but only for the specific groups of individuals designers look at.
By just focusing on individuals, we exclude vulnerable groups with accessibility needs; we exclude thinking about families and communities and wider networks that we are all part of. By optimizing products and services for efficiency and financial gain, we’re not thinking about the inequalities potentially caused by not considering different communities and groups, or about opportunities for public value. We are not thinking about unintended consequences that might impact other systems, or solving bigger societal problems, like the climate crisis, or public health. To anybody thinking about or creating products and services, we’d ask: What are your measures of success and your measures of value? How can you hold yourselves to a higher standard, where the things we’re building aim to benefit the public, and start to address some of our systemic problems?
We need to shift from thinking just about the individual to the collective, to society. By recognizing that human-centered design is problematic in the way that it focuses on individuals, on consumers, we can recognize that we need a new framework to ask different questions, and look at the wider impact that we have. This gives us a new tool set to attempt to change some of these massive challenges that we’re facing.
The reason IF has adopted a systems approach is that data is ultimately about multiple people. For example, when you use services like 23andMe, the DNA genetic testing service, and you share data about your genetic material, you share data about yourself, about your family, and also about relatives you maybe never knew about. So you’re making decisions on behalf of other people. You have rights over that data, but so do those other people. If you’re thinking about designing for people’s rights over data, how do you do that when data is about multiple people? You can’t do that if you just think about the experience, the user journeys, from the perspective of one person. There are inherent qualities to data as a material that make it impossible to think about it from just one person’s point of view.
The other part of data that requires a collective perspective is that its value and its power only manifest when it is in aggregate – when you’re looking at it at scale. When you’re thinking about why people should care about data, or what the risks are for certain communities, it’s really hard for an individual to have that kind of viewpoint. This is because at the moment it’s unclear how companies are aggregating data, how they’re storing it, and what they’re using it for. There’s no way one person could ever see how those systems work.
ADK: This lack of transparency – how did it come to be the norm? What are the factors that are preventing people from gaining knowledge and agency over how data is being used?
GB: The way data is used is often hugely complex, and the interactions people are used to for understanding it are terms and conditions and privacy policies – really lengthy legal documents. These patterns – common ways of understanding or solving a particular problem, in this case giving consent for data use – don’t explain what happens to data in a clear and understandable way. They’re often shown to people at a time when they are just downloading an app and don’t really want to engage with the topic. But they also obscure a lot about what will happen to data in the future.
So when you’re asking permission upfront to use data for any number of uses, it’s really hard for somebody to understand the potential risks, and to make an informed decision. It’s hugely unfair to expect people to take on that responsibility. How many organizations are radically transparent about the way that they’re using data? That’s what you need to have a full picture of what’s happening to your data. To make a fully informed decision, you need perfect information, and perfect information doesn’t exist. That’s why it’s so difficult for individuals to make informed choices.
Our lives are only going to keep becoming increasingly connected. What is a sustainable way of ensuring that people have the control and agency they want over data? How do we ensure the systems are designed so they are safe and secure, and that the burden doesn’t fall on people to ensure that that happens? Part of that is taking a society-centered design perspective and thinking about the systems we need to create to hold organizations to account and minimize unintended consequences. I’m thinking about AI and the way it amplifies bias, or the Cambridge Analytica scandal, where mass data was collected about people, who were then sent targeted advertising aimed at swaying their political decisions. The scheme had an impact on elections at the time [the 2016 election of Trump, and the Brexit vote]. We’re starting to see the kind of impact organizations can have when they’re collecting data at that scale. Those kinds of systemic impacts require systemic perspectives and solutions, not just ones focused on individuals.
ADK: The manifesto is structured according to five values and 10 principles. Can you shed light on the tools and research process the team used to come up with them, and share how they fit together?
GB: Many of the values were part of our approach already. In every project we run, we ask questions like “how is this showing care for people? How is this redistributing power? How does this enable somebody to exercise their rights?” So as a team, we chose which ones were most important to communicate, and the ones that contributed to the vision and the mission, and a lot of Google docs [laughs]. Everybody was jumping in with comments and feedback, showing it to friends and colleagues. We got to a first version that Sarah launched at IAM Weekend 2020. We had an amazing response from the community with lots of great feedback, and iterated it again to incorporate some of the views. But we’re only really at the start of what society-centered design means, and how it is going to emerge and become useful and practical for people. That’s the next step – understanding how it translates to different people’s practice, how you apply it, and what the impact is. We’re all really excited for that.
ADK: Can you share how the manifesto could be translated into practical use?
GB: There are a number of opportunities for how it could be applied, or how it could develop and grow. One way you can use it is questioning if the principles are showing up at the moment in your practice – what’s missing, and asking how you could surface that more in your work. Then there’s the design patterns catalogue that we run, which looks at the common interactions and patterns you have with data sharing and transparency. It might be that we share and develop practical tools for society-centered design through different patterns, and how they show up in interfaces. We’re also looking at collaborations, and to partner with various groups to see how we can make society-centered design practical for different people across different uses.
ADK: To situate society-centered design in our current context, as the manifesto’s principle, “Confront Uncertainty”, itself states, Covid-19 has revealed just how “intertwined, complex and ever-shifting” the issues we face on a planetary level are. How does this apply to the current global crisis?
GB: In so many ways. It feels like we’re really finding our value and our resilience again as communities, and I think society-centered design has a big role to play in that. One of the things we’re thinking about at the moment is public health and the contact tracing apps that governments and organizations are rolling out, and the way that data is collected about people. The individual models for consent we have at the moment and government mandates for using data are two ends of a spectrum. We’re at a crisis point where governments are using data in new ways – for really important purposes – but we don’t have frameworks for using this kind of data at this scale and speed. What data is going to be collected? What will it be used for? How long will it be kept once the crisis is over? What’s going to happen to these data sets?
Society-centered design prompts us to look at collective consent, and how you gain a community’s view of whether data collection is acceptable in any different scenarios. Then from an equality perspective, what data is being collected about people who are excluded from those data sets? How are we protecting different communities here in the way we collect and use data? Those are two things that spring to mind immediately – how do you gain collective consent for using data for public health in that kind of spectrum between individual and blanket consent, and then how are we ensuring that vulnerable groups and different communities are equally protected, given the things we build into the data that we’re collecting about those people? We don’t have frameworks for that yet, and I think there’s a real opportunity there.