Episode 135 | April 13, 2022
In “Just Tech: Centering Community-Driven Innovation at the Margins,” Senior Principal Researcher Mary L. Gray explores how technology and community intertwine and the role technology can play in supporting community-driven innovation and community-based organizations. Dr. Gray and her team are working to bring computer science, engineering, social science, and communities together to boost societal resilience in ongoing work with Project Resolve. She’ll talk with organizers, academics, technology leaders, and activists to understand how to develop tools and frameworks of support alongside members of these communities.
In this episode of the series, Dr. Gray and Dr. Sasha Costanza-Chock, scholar, designer, and activist, explore design justice, a framework for analyzing design’s power to perpetuate—or take down—structural inequality and a community of practice dedicated to creating a more equitable and sustainable world through inclusive, thoughtful, and respectful design processes. They also discuss how critical thinkers and makers from social movements have influenced technology design and science and technology studies (STS), how challenging the assumptions that drive who tech is built for will create better experiences for most of the planet, and how a deck of tarot-inspired cards is encouraging radically wonderful sociotechnical futures.
- Design Justice: Community-Led Practices to Build the Worlds We Need
- Algorithmic Justice League
- Coded Bias documentary
- “Bug Bounties for Algorithmic Harms? Lessons from cybersecurity vulnerability disclosure for algorithmic harms discovery, disclosure, and redress”
- The Oracle for Transfeminist Technologies card deck
[MUSIC PLAYS UNDER DIALOGUE]
MARY GRAY: Welcome to the Microsoft Research Podcast series “Just Tech: Centering Community-Driven Innovation at the Margins.” I’m Mary Gray, a Senior Principal Researcher at our New England lab in Cambridge, Massachusetts. I use my training as an anthropologist and communication media scholar to study people’s everyday uses of technology. In March 2020, I took all that I’d learned about app-driven services that deliver everything from groceries to telehealth to study how a coalition of community-based organizations in North Carolina might develop better tech to deliver the basic needs and health support to those hit hardest by the pandemic. Our research together, called Project Resolve, aims to create a new approach to community-driven innovation—one that brings computer science, engineering, the social sciences, and community expertise together to accelerate the roles that communities and technologies could play in boosting societal resilience. For this podcast, I’ll be talking with researchers, activists, and nonprofit leaders about the promises and challenges of what it means to build technology with rather than for society.
My guest for this episode is Dr. Sasha Costanza-Chock, a researcher, activist, and designer who works to support community-led processes that build shared power, dismantle the matrix of domination, and advance ecological survival. They are the director of research and design at Algorithmic Justice League, a faculty associate with the Berkman Klein Center for Internet & Society at Harvard University, and a member of the steering committee of the Design Justice Network. Sasha’s most recent book, Design Justice: Community-Led Practices to Build the Worlds We Need, was a 2021 Engineering and Technology PROSE Award finalist and has been cited widely across disciplines. Welcome, Sasha.
SASHA COSTANZA-CHOCK: Thanks, Mary. I’m excited to be here.
GRAY: Can you tell us a little bit about how you define design justice?
COSTANZA-CHOCK: Design justice is a term—you know, I didn’t create this term; it comes out of a community of practice called the Design Justice Network. But I have kind of chronicled the emergence of this community of practice and some of the ways of thinking about design and power and technology that have sort of come out of that community. And I’ve also done some work sort of tracing the history of different ways that people have thought about design and social justice, really. So, in the book, I did offer a tentative definition, kind of a two-part definition. So, on the one hand, design justice is a framework for analysis about how design distributes benefits and burdens between various groups of people. And in particular, design justice is a way to focus explicitly on the ways that design can reproduce or challenge the matrix of domination, which is Patricia Hill Collins’ term for white supremacy, heteropatriarchy, capitalism, ableism, settler colonialism, and other forms of structural inequality. And also, design justice is a growing community of practice of people who are focused on ensuring more equitable distribution of design’s benefits and burdens, more meaningful participation in design decisions and processes, and also recognition of already existing, community-based, Indigenous, and diasporic design traditions and knowledge and practices.
GRAY: Yeah. What are those disciplines we’re missing when we think about building and building for and with justice at the center of our attention?
COSTANZA-CHOCK: It’s interesting. I think for me, um, so design and technology design in particular, I think, for me, practice came first. So, you know, learning the basics of how to code, building websites, working with the Indymedia network. Indymedia was a kind of global network of hackers and activists and social movement networks who leveraged the power of what was then the nascent internet, um, to try and create a globalized news network for social movements. I became a project manager for various open-source projects for a while. I had a lot of side gigs along my educational pathway. So that was sort of more sort of practice. So, that’s where I learned, you know, how do you run a software project? How do you motivate and organize people? I came later to reading about and learning more about sort of that long history of design theory and history. And then, sort of technology design stuff, I was always looking at it along the way, but started diving deeper more recently. So, my—my first job after my doctorate was, you know, I—I received a position at MIT. Um, and so I came to MIT to the comparative media studies department, set up my collaborative design studio, and I would say, yeah, at MIT, I became more exposed to the HCI literature, spent more time reading STS work, and, in particular, was drawn to feminist science and technology studies. You know, MIT’s a very alienating place in a lot of ways and there’s a small but excellent, you know, community of scholars there who take, you know, various types of critical approaches to thinking about technology design and development and—and sort of the histories of—of technology and sociotechnical systems. And so, kind of through that period, from 2011 up until now, I spent more time engaging with—with that work, and yeah, got really inspired by feminist STS. 
I also—parallel to my academic formation and training—was always reading theory and various types of writing from within social movement circles, stuff that sometimes is published in academic presses or in peer-review journals and sometimes totally isn’t, but, to me, is often equally or even more valuable if you’re interested in theorizing social movement activity than the stuff that comes sort of primarily from the academy or from social movement studies as a subfield of sociology.
COSTANZA-CHOCK: Um, so I was like, you know, always reading all kinds of stuff that I thought was really exciting that came out of movements. So, reading everything that AK Press publishes, reading stuff from Autonomia, and sort of the—the Italian sort of autonomous Marxist tradition. But also in terms of pedagogy, I’m a big fan of Freire. And I didn’t encounter Freire through the academy; it was through, you know, community organizing work. So, community organizers that I was connected to were all reading Freire and reading other sort of critical and radical thinkers and scholars.
GRAY: So, wait. Hold the phone.
COSTANZA-CHOCK: OK. [LAUGHS]
GRAY: You didn’t actually—I mean, there wasn’t a class where Pedagogy of the Oppressed was taught in your training? I’m just, now, am like “Really?” That’s—
COSTANZA-CHOCK: I don’t think so. Yeah.
COSTANZA-CHOCK: Yeah, because I didn’t have formal training in education. It was certainly referenced, but the place where I did, you know, study group on it was in movement spaces, not in the academy. Same with bell hooks. I mean, bell hooks, there would be, like, the occasional essay in, like—I did undergraduate cultural studies stuff. Marjorie Garber, you know, I think, had like an essay or two of bell hooks on her syllabus. Um, so I remember encountering bell hooks early on, but reading more of her work came later and through movement spaces. And so, then, what I didn’t see was a lot of people—although, increasingly now, I think this is happening—you know, putting that work into dialogue with design studies and with science and technology studies. And so, that’s what I—that’s what I get really excited by, is the evolution of that.
GRAY: And—and maybe to that point, I feel like you have, dare I say, “mainstreamed” Patricia Hill Collins in computer science and engineering circles that I travel. Like, to hear colleagues say “the matrix of domination,” they’re reading it through you, which is wonderful. They’re reading—they’re reading what that means. And design justice really puts front and center this critical approach. Can you tell us about how you came to that framework and put it in the center of your work for design justice?
COSTANZA-CHOCK: Patricia Hill Collins develops the term in the ’90s. Um, the “matrix of domination” is her phrase. Um, she elaborates on it in, you know, her text, uh, Black Feminist Thought. And of course, she’s the past president of the American Sociological Association. Towering figure, um, in some fields, but, you know, maybe not as much in computer science and HCI, and other, you know, related fields. But I think unjustly so. And so, part of what I’m really trying to do at the core of the Design Justice book was put insights from her and other Black feminist thinkers and other critical scholars in dialogue with some core, for me, in particular, HCI concepts, um, although I think it does, you know, go broader than that. The matrix of domination was really useful to me when I was learning to think about power and resistance, how does power and privilege operate. You know, this is a concept that says you can’t only think about one axis of inequality at a time. You can’t just talk about race or just talk about gender—you can’t just talk about class—because they operate together. Of course, another key term that connects with matrix of domination is “intersectionality” from Kimberlé Crenshaw. She talks about it in the context of legal theory, where she’s looking at how the legal system is not set up to actually protect people who bear the brunt of oppression. And she talks about these, you know, classic cases where Black women can’t claim discrimination under the law at a company which defends itself by saying, “Well, we’ve hired Black people.” And what they mean is they’ve hired some Black men. And they say, “And we’ve also hired women.” But they mean white women. And so, it’s not legally actionable. The Black women have no standing or claim to discrimination because Black women aren’t protected under anti-discrimination law in the United States of America. And so that is sort of like a grounding that leads to this, you know, the conversation. 
The matrix of domination is an allied concept. And to me, it’s just incredibly useful because I thought that it could translate well, in some ways, into technical fields because there’s a geometry and there’s a mental picture. There’s an image that it’s relatively easy to generate for engineers, I think, of saying, “OK, well, OK, your x-axis is class. [LAUGHS] Your y-axis is gender. Your z-axis is race. This is a field. And somewhere within that, you’re located. And also, everyone is located somewhere in there, and where you’re located has an influence on how difficult the climb is.” And so when we’re designing technologies—and whether it’s, you know, interface design, or it’s an automated decision system—you know, you have to think about if this matrix is set up to unequally distribute, through its topography, burdens and benefits to different types of people depending on how they are located in this matrix, at this intersection. Is that correct? You know, do you want to keep doing that, or do you want to change it up so that it’s more equitable? And I think that that’s been a very useful and powerful concept. And I think, for me, part of it maybe did come through pedagogy. You know, I was teaching MIT undergraduates—most of them are majoring in computer science these days—and so I had to find ways to get them to think about power using conceptual language that they could connect with, and I found that this resonated.
GRAY: Yeah. And since the book has come out—and I, you know, it’s been received by many different scholarly communities and activist communities—has your own definition of design justice changed at—at all? Or even the ways you think about that matrix?
COSTANZA-CHOCK: That’s a great question. I think that one of the things that happened for me in the process of writing the book is I went a lot deeper into reading and listening and thinking more about disability and how crucial, you know, disability and ableism are, how important they are as sort of axes of power and resistance, also as sources of knowledge. So, like, disability justice and disabled communities of various kinds being key places for innovation, both of devices and tools and also of processes of care. And just, there’s so much phenomenal sort of work that’s coming, you know, through the disability justice lens that I really was influenced by in the writing of the book.
GRAY: So another term that seems central in the book is “codesign.” And I think for many folks listening, they might already have an idea of what that is. But can you say a bit more about what you mean by codesign, and just how that term relates to design justice for you?
COSTANZA-CHOCK: I mean, to be entirely honest with you, I think that when I arrived at MIT, I was sort of casting around for a term that I could use to frame a studio course that I wanted to set up that would both signal what the approach was going to be while also being palatable to the administration and not scaring people away. Um, and so I settled on “codesign” as a term that felt really friendly and inclusive and was a broad enough umbrella to enable the types of partnerships with community-based organizations and social movement groups, um, that I wanted to provide scaffolding for in that class. It’s not that I think “codesign” is bad. You know, there’s a whole rich history of writing and thinking and practice, you know, in codesign. I think I just worry that like so many things—I don’t know if it’s that the term is loose enough that it allows for certain types of design practices that I don’t really believe in or support or that I’m critical of or if it’s just that it started meaning more of one thing, um, and then, over time, it became adopted—as many things do become adopted—um, by the broader logics of multinational capitalist design firms and their clients. But I don’t necessarily use the term that much in my own practice anymore.
GRAY: I want to understand what you felt was useful about that term when you first started applying it to your own work and why you’ve moved away from it. What are good examples of, for you, a practice of codesign that stays committed to design justice, and what are some examples of what worries you about the ambiguity of what’s expected of somebody doing codesign?
COSTANZA-CHOCK: So, I mean, there—there’s lots of terms in, like, a related conceptual space, right? So, there’s codesign, participatory design, human-centered design, design justice. I think if we really get into it, each has its own history and sort of there are conferences associated with each. There are institutions connected to each. And there are internal debates within those communities about, you know, what counts and what doesn’t. I think, for me, you know, codesign remains broad enough to include both what I would consider to be sort of design justice practice, where, you know, a community is actually leading the process and people with different types of design and engineering skills might be supporting or responding to that community leadership. But it’s also broad enough to include what I call in the book, you know, more extractive design processes, where what happens is, you know, typically a design shop or consultant working for a multinational brand parachutes into a place, a community, a group of people, runs some design workshops, maybe—maybe does some observation, maybe does some focus groups, generates a whole bunch of ideas about the types of products or product changes that people would like to see, and then gathers that information and extracts it from that community, brings it back to headquarters, and then maybe there are some product changes or some new features or a rollout of something new that gets marketed back to people. And so in that modality, you know, some people might call an extractive process where you’re just doing one or a few workshops with people “codesign” because you have community collaborators, you have community input of some kind; you’re not only sitting in the lab making something. But the community participation is what I would call thin. It’s potentially extractive. The benefit may be minimal to the people who have been involved in that process. 
And most of the benefits accrue back either to the design shop that’s getting paid really well to do this or ultimately back to headquarters—to the brand that decided to sort of initiate the process. And I’m interested in critiquing extractive processes, but I’m most interested in trying to learn from people who are trying to do something different, people who are already in practice saying, “I don’t want to just be doing knowledge extraction. I want to think about how my practice can contribute to a more just and equitable and sustainable world.” And in some ways, people are, you know, figuring it out as we go along, right? Um, but I’m trying to be attentive to people trying to create other types of processes that mirror, in the process, the kinds of worlds that we want to create.
GRAY: So, it seems like one of the challenges that you bring up in the book is precisely that design at—at some point is thinking about particular people and particular—often referred to as “users’”—journeys. And I wanted to—to step back and ask you, you know, you note in the book that there’s a—a default in design that tends to think about the “unmarked user.” And I’m quoting you here. That’s a “(cis)male, white, heterosexual, ‘able-bodied,’ literate, college educated, not a young child, not elderly.” Definitely, they have broadband access. They’ve got a smartphone. Um, maybe they have a personal jet, I don’t know. That part was not a quote of you. [LAUGHTER] But, you know, you’re really clear that there’s this—this default, this presumed user, ubiquitous user. Um, what are the limits, for you, of designing for an unmarked user? But then, how do you contend with the fact that thinking so specifically about people can also be, to your earlier point about intersectionality, quite flattening?
COSTANZA-CHOCK: Well, I think the unmarked user is a really well-known and well-documented problem. Unfortunately, it often, it—it applies—you don’t have to be a member of all those categories as an unmarked user to design for the unmarked user when you’re in sort of a professional design context. And that’s for a lot of different reasons that we don’t have that much time to get into, but basically hegemony. [LAUGHTER] So, um—and the problem with that—like, there’s lots of problems with that—one is that it means that we’re organizing so much time and energy and effort in all of our processes to kind of, like, design and build everything from, you know, industrial design and new sort of, you know, objects to interface design to service design, and, you know, if we build everything for the already most privileged group of people in the world, then the matrix of domination just kind of continues to perpetuate itself. Then we don’t move the world towards a more equitable place. And we create bad experiences, frankly, for the majority of people on the planet. Because the majority of people on planet Earth don’t belong to that sort of default, unmarked user that’s hegemonic. Most people on planet Earth aren’t white; they’re actually not cis men. Um, at some point most people on planet Earth will be disabled or will have an impairment. They may not identify as Disabled, capital D. Most people on planet Earth aren’t college educated. Um, and so on and so forth. So, we’re really excluding the majority of people if we don’t actively and regularly challenge the assumption of who we should be building things for.
GRAY: So, what do you say to the argument that, “Well, tech companies, those folks who are building, they just need to hire more diverse engineers, diverse designers—they need a different set of people at the table—and then they’ll absolutely be able to anticipate what a—a broader range of humanity needs, what more people on Earth might need.”
COSTANZA-CHOCK: I think this is a “yes, and” answer. So, absolutely, tech companies [LAUGHS] need to hire more diverse engineers, designers, CEOs; investors need to be more diverse, et cetera, et cetera, et cetera. You know, the tech industry still has pretty terrible statistics, and the further you go up the corporate hierarchy, the worse it gets. So that absolutely needs to change, and unfortunately, right now, it’s just, you know, every few years, everyone puts out their diversity numbers. There’s a slow crawl sometimes towards improvement; sometimes it backslides. But we’re not seeing the shifts that we—we need to see, so it’s like hiring, retention, promotion, everything. I am a huge fan of all those things. They do need to happen. And a—a much more diverse and inclusive tech industry will create more diverse and inclusive products. I wouldn’t say that’s not true. I just don’t think that employment diversity is enough to get us towards an equitable, just, and ecologically sustainable planet. And the reason why is because the entire tech industry right now is organized around the capitalist system. And unfortunately, the capitalist system is a resource-extractive system, which is acting as if we have infinite resources on a finite planet. And so, we’re just continually producing more stuff and more things and building more server farms and creating more energy-intensive products and software tools and machine learning models and so on and so on and so on. So at some point, we’re going to have to figure out a way to organize our economic system in a way that’s not going to destroy the planet and result in the end of homo sapiens sapiens along with most of the other species on the planet. And so unfortunately, employment diversity within multicultural, neoliberal capitalism will not address that problem.
GRAY: I could not agree more. And I don’t want this conversation to end. I really hope you’ll come back and join me for another conversation, Sasha. It’s been unbelievable to be able to spend even a little bit of time with you. So, thank you for—for sharing your thoughts with us today.
COSTANZA-CHOCK: Well, thank you so much for having me. I always enjoy talking with you, Mary. And I hope that, yeah, we’ll continue this either in a podcast or just over a cup of tea.
[MUSIC PLAYS UNDER DIALOGUE]
GRAY: Looking forward to it. And as always, thanks to our listeners for tuning in. If you’d like to learn more—wait, wait, wait, wait! There’s just so much to talk about. [MUSIC IS WARPED AND ENDS] Not long after our initial conversation, Sasha said they were willing to continue the discussion. Sasha, thanks for rejoining us.
COSTANZA-CHOCK: Of course. It’s always a pleasure to talk with you, Mary.
GRAY: In our first conversation, we had a chance to explore design justice as a framework and a practice and your book of the same name, which has inspired many. I’d love to know how your experience in design justice informs your current role with the Algorithmic Justice League.
COSTANZA-CHOCK: So I am currently the director of research and design at the Algorithmic Justice League. The Algorithmic Justice League, or AJL for short, is an organization that was founded by Dr. Joy Buolamwini, and our mission is to raise awareness about the impacts of AI, equip advocates with empirical research, build the voice and choice of the most impacted communities, and galvanize researchers, policymakers, and industry practitioners to mitigate AI harms and biases, and so we like to talk about how we’re building a movement to shift the AI ecosystem towards more equitable and accountable AI. And my role in AJL is to lead up our research efforts and also, at the moment, product design. Uh, we’re a small team. We’re sort of in start-up mode. Uh, we’re hiring various, you know, director-level roles and building out the teams that are responsible for different functions, and so it’s a very exciting time to be part of the organization. I’m very proud of the work that we’re doing.
GRAY: So you have both product design and research happening under the same roof in what sounds like a super-hero setting. That’s what we should take away—and that you’re hiring. I think listeners need to hear that. How do you keep research and product design happening together in a nonprofit, where usually you have to pick one or the other? How are you making those come together?
COSTANZA-CHOCK: Well, to be honest, most nonprofits don’t really have a product design arm. I mean, there are some that do, but it’s not necessarily a standard, you know, practice. I think what we are trying to do, though, as an organization—you know, we’re very uniquely positioned because we play a storytelling role, and so we’re influencing the public conversation about bias and harms in algorithmic decision systems, and probably the most visible place that that, you know, has happened is in the film Coded Bias. It premiered at Sundance, then it aired on PBS, and it’s now available on Netflix, and that film follows Dr. Buolamwini’s journey from, you know, a grad student at the MIT Media Lab who has an experience of facial recognition technology basically failing on her dark skin, and it follows her journey as she learns more about how the technology works, how it was trained, why it’s failing, and ultimately is then sort of, you know, testifying in U.S. Congress about the way that these tools are systematically biased against women and people with darker skin tones, skin types, and also against trans and gender nonconforming people, and that these tools should not be deployed in production environments, especially where it’s going to cause significant impacts to people’s lives. Over the past couple years, we’ve seen a lot of real-world examples of the harms that facial recognition technologies, or FRTs, can create. 
These types of bias and harm are happening constantly not only in facial recognition technologies but in automated decision systems of many different kinds, and there are so many scholars and advocacy organizations and, um, community groups that are now kind of emerging to make that more visible and to organize to try and block the deployment of systems when they’re really harmful or at the very least try and ensure that there’s more community oversight of these tools and also to set some standards in place, best practices, external auditing and impact assessment so that especially as public agencies start to purchase these systems and roll them out, you know, we have oversight and accountability.
GRAY: So, April 15 is around the corner, Tax Day, and there was a recent bit of news around what seems like a harmless use of technology and use of identification for taxes that you very much, um, along with other activists and organizations, uh, brought public attention to the concerns over sharing IDs as a part of our—of our tax process. Can you just tell the audience a little bit about what happened, and what did you stop?
COSTANZA-CHOCK: Absolutely. So, um, ID.me is a, uh, private company that sells identity verification services, and they have a number of different ways that they do identity verification, including, uh, facial recognition technology where they compare basically a live video or selfie to a picture ID that’s previously been uploaded and stored in the system. They managed to secure contracts with many government agencies, including a number of federal agencies and about 30 state agencies, as well. And a few weeks ago, it came out that the IRS had given a contract to ID.me and that people were going to have to scan our faces to access our tax records. Now, the problem with this—there are a lot of problems with this, but one of the problems is that we know that facial recognition technology is systematically biased against some groups of people who are protected by the Civil Rights Act, so, uh, against Black people and people with darker skin tones in general, uh, against women, and the systems perform least well on darker skinned type women. And so what this means is that if you’re, say, a Black woman or if you’re a trans person, it would be more likely that the verification process would fail for you in a way that is very systematic and has—you know, we have pretty good documentation about the failure rates, both in false positives and false negatives. The best science shows that these tools are systematically biased against some people, and so for it to be deployed in contracts by a public agency for something that’s going to affect everybody in the United States of America and is going to affect Black people and Black women specifically most, uh, is really, really problematic and opens the ground to civil rights lawsuits, to Federal Trade Commission action, among a number of other, you know, possible problems. 
So when we at the Algorithmic Justice League learned that ID.me had this partnership with the IRS and that this was all going to roll out in advance of this year’s tax season, uh, we thought this is really a problem and maybe this is something that we could move the needle on, and so we got together with a whole bunch of other organizations like Fight for the Future and the Electronic Privacy Information Center, and basically, all of these organizations started working with all cylinders firing, including public campaigns, op-eds, social media, and back channeling to various people who work inside different agencies in the federal government like the White House Office of Science and Technology Policy, the Federal Trade Commission, other contacts that we have in different agencies kind of saying, “Did you know that this system—this multi-million-dollar contract for verification that the IRS is about to unleash on all taxpayers—is known to have outcomes that disproportionately disadvantage Black people and women and trans and gender nonconforming people?” And in a nutshell, it worked to a degree. So the IRS announced that they would not be using the facial recognition verification option that ID.me offers, and a number of other federal agencies announced that they would be looking more closely at the contracts and exploring whether they wanted to actually roll this out, and what’s happening now is that at the state level through public records requests and other actions, um, you know, different organizations are now looking state by state and finding and turning up all these examples of how this same tool was used to basically deny access to unemployment benefits for people, to deny access to services for veterans. 
There are now, I think, around 700 documented examples that came from public records requests of people saying that they tried to verify their access, um, especially to unemployment benefits using the ID.me service, and they could not verify, and when they were told to take the backup option, which is to talk with a live agent, the company, you know, was rolling out this system with contracts so quickly that they hadn’t built up their human workforce, so when people’s automated verification was failing, there were these extremely long wait times like weeks or, in some cases, months for people to try and get verified.
GRAY: Well, and I mean, this is—I feel like the past always comes back to haunt us, right, because we have so many cases where, in hindsight, it seems really obvious that a system was going to fail because of the training data that created the model. We are seeing so many cases where training datasets that have been the tried-and-true standards are now being taken off the shelf because we can tell there are too many errors and too few theories to justify using the same models the same way we have in the past, and I’m wondering what you make of this continued desire to keep reaching for the training data and pouring more data in or seeing some way to offset the bias. What’s the value of looking for the bias versus setting up guardrails for where we apply a decision-making system in the first place?
COSTANZA-CHOCK: Sure. I mean, I think—let me start by saying that I do think it’s useful and valuable for people to do research to try and better understand the ways in which automated decision systems are biased, the different points in the life cycle where bias creeps in. And I do think it’s useful and valuable for people to look at bias and try and reduce it. And also, that’s not the be all and end all, and at the Algorithmic Justice League, we are really trying to get people to shift the conversation from bias to harm because bias is one but not the only way that algorithmic systems can be harmful to people. So a good example of that would be, we could talk about recidivism risk prediction, which there’s been a lot of attention to that, you know, ever since the—the ProPublica articles and the analysis of—that’s come out about, uh, COMPAS, which is, you know, the scoring system that’s used when people are being detained pre-trial and a court is making a decision about whether the person should be allowed out on bail or whether they should be detained until their trial. And these risk scoring tools, it turns out that they’re systematically biased against Black people, and they tend to overpredict the rate at which Black people will recidivate or will—will re-offend during the, you know, the period that they’re out and underpredict the rate at which white people, you know, would do so. So there’s one strand of researchers and advocates who would say, “Well, we need to make this better. We need to fix that system, and it should be less biased, and we want a system that more perfectly—more perfectly does prediction and also more equitably distributes both false positives and false negatives.” You can’t actually maximize both of those things. You kind of have to make difficult decisions about do you want it to, um, have more false positives or more false negatives. You have to sort of make decisions about that. 
But then there’s a whole nother strand of people like, you know, the Carceral Technology Resistance Network, who would just say, “Hold on a minute. Why are we talking about reducing bias in a pre-trial detention risk-scoring tool? We should be talking about why are we locking people up at all, and especially why are we locking people up before they’ve been sentenced for anything?” So rather than saying let’s build a better tool that can help us, you know, manage pre-trial detention, we should just be saying we should absolutely minimize pre-trial detention to only the most extreme cases that—where there’s clear evidence and a clear, you know, present danger that the person will immediately be harming themselves or—or—or someone else, and that should be something that, you know, a judge can decide without the need of a risk score.
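[EDITOR’S NOTE: The tension Costanza-Chock names—that a risk-scoring tool can’t drive down false positives and false negatives at the same time, so someone has to choose which error to accept—can be sketched in a few lines of code. The numbers below are invented for illustration; they don’t come from COMPAS or any real system.]

```python
# A minimal sketch, with made-up numbers, of the false-positive /
# false-negative tradeoff in a risk-scoring tool: moving the score
# threshold that reduces one error rate increases the other.

def error_rates(scores, labels, threshold):
    """Return (false positive rate, false negative rate) for a cutoff."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    negatives = sum(1 for y in labels if y == 0)
    positives = sum(1 for y in labels if y == 1)
    return fp / negatives, fn / positives

# Hypothetical risk scores (0-10) and outcomes (1 = re-offended).
scores = [2, 3, 4, 5, 5, 6, 7, 8, 8, 9]
labels = [0, 0, 0, 1, 0, 0, 1, 1, 0, 1]

lenient = error_rates(scores, labels, threshold=8)  # few people detained
strict = error_rates(scores, labels, threshold=4)   # many people detained

# A stricter cutoff trades false negatives for false positives:
# more people are wrongly flagged, fewer risky cases are missed.
assert strict[0] > lenient[0]
assert strict[1] < lenient[1]
```

There is no threshold that minimizes both error rates at once here, which is the “difficult decision” the conversation points to: the choice between them is a value judgment, not a technical one.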
GRAY: When you’re describing the consequences of a false positive or a false negative, I’m struck by, um, how cold the calculation can sound, and then when I think about the implications, you’re saying we have to decide: do we let more people we suspect could cause harm leave a courtroom, or do we jail people without any way of knowing how many of them would or wouldn’t commit some kind of act between now and when they’re sentenced? And so, I’m just really struck by the weightiness of that, uh, if I were trying to develop a technology that was going to try and reduce that harm and deliberate which is more harmful. I’m just saying that out loud because I—I feel like those are those moments where the two strands of work you’re calling out sometimes do seem in fundamental tension, right? That we would not want to build systems that perpetuate an approach that tries to take a better guess at whether to detain someone before they’ve been convicted of anything.
COSTANZA-CHOCK: Yeah, so I think, like, in certain cases, like in criminal, you know, in the criminal legal system, you know, we want to sort of step out from the question that’s posed to us, where people are saying, “Well, what approach should we use to make this tool less biased or even less harmful,” if they’re using that frame. And we want to step back and say, “Well, what are the other things that we need to invest in to ensure that we can minimize the number of people who are being locked up in cages?” Because that’s clearly a horrible thing to do to people, and it’s not making us safer or happier or better, and it’s systematically and disproportionately deployed against people of color. In other domains, it’s very different, and this is why I think, you know, it can be very tricky. We don’t want to collapse the conversation about AI and algorithmic decision systems, and there are some things that we can say, you know, at a very high level about these tools, but at the end of the day, a lot of the times, I think that it comes down to the specific domain and context and tool that we’re talking about. So then we could say, well, let’s look at another field like dermatology, right? And you would say, well, there’s a whole bunch of researchers working hard to try and develop better diagnostic tools for skin conditions, early detection of cancer. And so it turns out that the existing datasets of skin conditions heavily undersample the wide diversity of human skin types that are out there in the world and overrepresent white skin, and so these tools perform way better, um, you know, for people who are, uh, raced as white, uh, under the current, you know, logic of the construction of—of racial identities. 
So there’s a case where we could say, “Well, yeah, here inclusion makes sense.” Not everybody would say this, but a lot of us would say this is a case where it is a good idea to say, “Well, what we need to do is go out and create much better, far more inclusive datasets of various skin conditions across many different skin types, you know, should be people from all across the world and different climates and locations and skin types and conditions, and we should better train these diagnostic tools, which potentially could really both democratize access to, you know, dermatology diagnostics and could also help with earlier detection of, you know, skin conditions that people could take action on.” Now, we could step out of that logic for a moment and say, “Well, no, what we should really do is make sure that there’s enough resources so that there are dermatologists in every community that people can easily see for free because they’re always going to do, you know, a better job than, you know, these apps could ever do,” and I wouldn’t disagree with that statement, and also, to me, this is a case where that’s a “both/and” proposition, you know. If we have apps that people can use to do self-diagnostic and if they reach a certain threshold of accuracy and they’re equitable across different skin types, then that could really save a lot of people’s lives, um, and then in the longer run, yes, we need to dramatically overhaul our—our medical system and so on and so forth. But I don’t think that those goals are incompatible, whereas in another domain like the criminal legal system, I think that investing heavily in the development of so-called predictive crime technologies of various kinds, I don’t think that that’s compatible with decarceration and the long-term project of abolition.
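[EDITOR’S NOTE: The equity check implied here—that a diagnostic model trained on a dataset skewed toward lighter skin can look accurate overall while failing on darker skin—comes down to evaluating accuracy per skin-type group rather than in aggregate. A minimal sketch with invented predictions and group labels:]

```python
# A minimal sketch, with invented numbers, of stratified evaluation:
# computing a diagnostic model's accuracy per skin-type group instead
# of reporting one overall figure.

from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Accuracy computed separately for each skin-type group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        correct[group] += pred == label
    return {g: correct[g] / total[g] for g in total}

# Hypothetical predictions over a dataset skewed toward lighter skin.
preds = [1, 1, 0, 0, 1, 0, 1, 1]
labels = [1, 1, 0, 0, 1, 1, 0, 1]
groups = ["lighter"] * 5 + ["darker"] * 3

per_group = accuracy_by_group(preds, labels, groups)

# The single overall number hides the gap the breakdown exposes.
overall = sum(p == l for p, l in zip(preds, labels)) / len(labels)
```

With these toy numbers the model scores well overall while doing far worse on the undersampled group—exactly the pattern the dermatology example warns about.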
GRAY: I love that you’ve reframed it as a matter of compatibility, ’cause I—what I really appreciate about your work is that you keep the tension. I mean, you really insist on us being willing to grapple with and stay vigilant about what could go wrong without saying don’t do it at all, and I’ve found that really inspiring. Um …
GRAY: Yeah, please.
COSTANZA-CHOCK: Can I—can I say one more thing about that, though? I mean, I do—yes, and also there’s a whole nother question here, right? So, you know, is—is this tool harmful? And then there’s also—there’s a democracy question, which is, were people consulted? Do people want this thing? Even if it does a good job, you know, um, and even if it is equitable. And because there’s a certain type of harm, which is, uh, a procedural harm, which is if an automated decision system is deployed against people’s consent or against people’s idea about what they think should be happening in a just interaction with the decision maker, then that’s a type of harm that’s also being done. And so, we really need to think about not only how can we make AI systems less harmful and less biased, among the various types of harm that can happen, but also more accountable, and how can we ensure that there is democratic and community oversight over whether systems are deployed at all, whether these contracts are entered into by public agencies, and whether people can opt out if they want to from the automated decision system or whether it’s something that’s being forced on us.
GRAY: Could you talk a little bit about the work you’re doing around bounties as a way of thinking about harms in algorithmic systems?
COSTANZA-CHOCK: So at the Algorithmic Justice League, one of the projects I’ve been working on over the last year culminated in a recently released report, which is called “Bug Bounties for Algorithmic Harms? Lessons from cybersecurity vulnerability disclosure for algorithmic harms discovery, disclosure, and redress,” and it’s a co-authored paper by AJL researchers Josh Kenway, Camille François, myself, Deb Raji, and Dr. Joy Buolamwini. And so, basically, we got some resources from the Sloan and Rockefeller foundations to explore this question of could we apply bug bounty programs to areas beyond cybersecurity, including algorithmic harm discovery and disclosure? In the early days of cybersecurity, hackers were often in this position of finding bugs in software, and they would then tell the companies about it, and then the companies would sue them or deny that it was happening or try and shut them down in—in various ways. And over time, that kind of evolved into what we have now, which is a system where, you know, it was once considered a radical new thing to pay hackers to find and tell you about bugs in your—in your systems, and now it’s a quite common thing, and most major tech companies, uh, do this. And so very recently, a few companies have started adopting that model to look beyond security bugs. So, for example, you know, we found an early example where Rockstar Games offered a bounty for anyone who could demonstrate how their cheat detection algorithms might be flawed because they didn’t want to mistakenly flag people as cheating in game if they weren’t. And then there was an example where Twitter basically observed that Twitter users were conducting a sort of open participatory audit on Twitter’s image saliency and cropping algorithm, which was sort of—when you uploaded an image to Twitter, it would crop the image in a way that it thought would generate the most engagement, and so people noticed that there were some problems with that. 
It seemed to be cropping out Black people to favor white people, um, and a number of other things. So Twitter users kind of demonstrated this, and then Twitter engineers replicated those findings and published a paper about it, and then a few months later, they ran a bounty program, um, in partnership with the platform HackerOne, and they sort of launched it at—at DEF CON and said, “We will offer prizes to people who can demonstrate the ways that our image crop system, um, might be biased.” So this was a bias bounty. So we explored the whole history of bug bounty programs. We explored these more recent attempts to apply bug bounties to algorithmic bias and harms, and we interviewed key people in the field, and we developed a design framework for better vulnerability disclosure mechanisms. We developed a case study of Twitter’s bias bounty pilot. We developed a set of 25 design lessons for people to create improved bug bounty programs in the future. And you can read all about that stuff at ajl.org/bugs.
GRAY: I—I feel like you’ve revived a certain, um, ’90s sentiment of “this is our internet; let’s pick up the trash.” It just has a certain, um, kind of collaborative feel to it that I—that I really appreciate. So, with the time we have left, I would love to hear about oracles and transfeminism. What’s exciting you about oracles and transfeminist technologies these days?
COSTANZA-CHOCK: So it can be really overwhelming to constantly be working to expose the harms of these systems that are being deployed everywhere, in every domain of life, all the time, to uncover the harms, to get people to talk about what’s happened, to try and push back against contracts that have already been signed, and to try and get, you know, lawmakers that are concerned with a thousand other things to pass bills that will rein in the worst of these tools. So I think for me, personally, it’s really important to also find spaces for play and for visioning and for speculative design and for radical imagination. And so, one of the projects that I’m really enjoying lately is called the Oracle for Transfeminist Technologies, and it’s a partnership between Coding Rights, which is a Brazil-based hacker feminist organization, and the Design Justice Network, and the Oracle is a hands-on card deck that we designed to help us use as a tool to collectively envision and share ideas for transfeminist technologies from the far future. And this idea kind of bubbled up from conversations between Joana Varon, who’s the directoress of Coding Rights, and myself and a number of other people who are in kind of transnational hacker feminist networks, and we were kind of thinking about how, throughout history, human beings have always used a number of different divination techniques, like tarot decks, to understand the present and to reshape our destiny, and so we created a card deck called the Oracle for Transfeminist Technologies that has values cards, objects cards, bodies and territories cards, and situations cards, and the values are various transfeminist values, like autonomy and solidarity and nonbinary thought and decoloniality and a number of other transfeminist values. The objects are everyday objects like backpacks or bread or belts or lipstick, and the bodies and territories cards, well, that’s a spoiler, so I can’t tell you what’s in them.
COSTANZA-CHOCK: Um, and the situations cards are kind of scenarios that you might have to confront. And so what happens is basically people take this card deck—and there’s both a physical version of the card deck, and there’s also a virtual version of this that we developed using a—a Miro board, a virtual whiteboard, but we created the cards inside the whiteboard—and people get dealt a hand, um, and either individually or in small groups, you get one or several values, an object, a bodies and territories card, and a situation, and then what you have to do is create a technology rooted in your values and—that somehow engages with the object that you’re dealt that will help people deal with the situation, um, from the future. And so people come up with all kinds of really wonderful things that, um—and—and they illustrate these. So they create kind of hand-drawn blueprints or mockups for what these technologies are like and then short descriptions of them and how they work. And so people have created things like community compassion probiotics that connect communities through a mycelial network and the bacteria develop a horizontal governance in large groups, where each bacteria is linked to a person to maintain accountability to the whole, and it measures emotional and affective temperature and supports equitable distribution of care by flattening hierarchies. Or people created, um, a—
GRAY: [LAUGHS] Right now, every listener is, like, Googling, looking feverishly online for these—for the, the Oracle. Where—where do we find this deck? Where—please, tell us.
COSTANZA-CHOCK: So you can—you can just Google “the Oracle for Transfeminist Technologies” or you can go to transfeministech.codingrights.org. So people create these fantastic technologies, and what’s really fun, right, is that a lot of them, of course, you know, we could create something like that now. And so our dream with the Oracle in its next stage would be to move from the completely speculative design, you know, on paper piece to a prototyping lab, where we would start prototyping some of the transfeminist technologies from the future and see how soon we can bring them into the present.
GRAY: I remember being so delighted by a very, very, very early version of this, and the tactileness of it was just amazing, like, to be able to play with the cards and dream together. So that’s—I’m so excited to hear that you’re doing that work. That’s—that is inspiring. I’m just smiling. I don’t know if you can hear it through the radio, but, uh—wow, I just said “radio.” [LAUGHTER]
[MUSIC PLAYS UNDER DIALOGUE]
COSTANZA-CHOCK: It is a radio. A radio by another name.
GRAY: I guess it is a radio. That’s true. A radio by another name. Oh, Sasha, I could—I could really spend all day talking with you. Thank you for wandering back into the studio.
COSTANZA-CHOCK: Thank you. It’s really a pleasure. And next time, it’ll be in person with tea.
GRAY: Thanks to our listeners for tuning in. If you’d like to learn more about community-driven innovation, check out the other episodes in our “Just Tech” series. Also, be sure to subscribe for new episodes of the Microsoft Research Podcast wherever you listen to your favorite shows.