
Bias and Ethics in Spatial Computing

It seems like ethics are often an afterthought in our permissionless race towards more and better technology. And the more ubiquitous a given technology becomes, the more our senses are dulled to any Mr. Hyde implications lurking in the shadows. A recent seminar entitled simply Spatial Computing: Ethics took a stab at some forward thinking, at least for the spatial computing arena. The virtual seminar featured some heavyweights in the field of technology ethics:

  • Kevin Kelly, Founding Executive Editor, Wired magazine
  • Brad Templeton, Futurist-in-Chief, Electronic Frontier Foundation
  • Brandeis Marshall, Faculty Associate, Berkman Klein Center for Internet & Society at Harvard University
  • Arathi Sethumadhavan, Principal User Research Manager – Ethics & Society, Microsoft
  • Alires Almon, Partner Director, Iliff Artificial Intelligence Institute
  • Brian Green, Director of Technology Ethics, Markkula Center for Applied Ethics, Santa Clara University

The seminar began by tackling the most obvious question at hand: “What do we mean by ethics?” Perhaps the best definition was offered by Kelly, who argued that “ethics is an attempt to articulate a consensus as to what behaviors will bring the best of us forward”. Brian Green proposed that an ethicist is “most interested in big picture: what is technology doing to us; to the world, and to us as human beings?”

Fortunately, the ethicists speaking at this roundtable clearly focused on and valued ethics in practice. Sethumadhavan (Microsoft) warned that “every tech has some level of risk that it will create”. To her, the practical application of ethics is “being proactive, not reactive and logically thinking about what can go wrong”. In a business context, she clarified, “ethics is holding product teams responsible; it is responsive innovation. And it can help bring a competitive advantage”. Kelly then spoke optimistically about our innate mutability: “we can change; we use ethics to come to a consensus about the best behavior and how to get there [with emerging technologies]”. Green summed up the conversation by concluding that ethics is about “doing the right thing; it’s about action, not just knowledge”.

After the brief and obligatory bout of definitional jockeying, the roundtable panelists dove right into some of the most concerning ethical issues in spatial computing:

  • whether mixed reality and driving will truly be a safe combination
  • how XR headgear can hijack our attention
  • how new devices tend to break accepted norms and then dumb us down when they become ubiquitous
  • whether it is psychologically safe to take on the role of immersive personas
  • whether spatial computing is truly safe for children
  • whether we are “looking at the person whose life is most ruined by [coming] technology”.

The panelists next addressed the large ethics elephant in the room: privacy. The Microsoft User Research Manager, Arathi Sethumadhavan, cautioned about the collection of biometric information and what companies might do with that information; she also stated that merely acquiring user consent to use data was too simplistic a solution. Kelly (Wired magazine) lamented that, in the annals of history, privacy is a relatively recent concept, and chiefly a western notion. He suggested that the notion that “we own our own data is misguided, [again] a western construct”. He continued: “All the hands that touch data have a stake in it and a responsibility to it. I’m not sure we can stop the collection of it over time, because it’s going to be useful. A better tack is to move away from the idea of ownership and talk about rights and responsibilities.” Taking a contrarian stance, Marshall countered that “privacy is laughable—privacy IS going to be breached, so we need to put in place structures to mitigate [concerns]”. “Privacy literacy is needed”, she concluded. Templeton agreed, noting that “most people do not care until after they see a [privacy] invasion; people say they care about privacy, but what they do about it says they don’t care”.

The topics discussed up to this point represented the customary provender of ethicists. But then a new ethics theme emerged, a fresh and more timely concern: the notion of systemic bias in technology innovation. Listeners were reminded about cardiac technology that was tested only on male patients and then failed in the field when used with women. Marshall, the Berkman Klein faculty associate, pointed out that the people creating technology are fairly homogeneous—mostly white and male. That can lead to unwarranted assumptions in algorithms, testing, and deployment.

The critical ethical concern, pinpointed by both Marshall and Almon of the Iliff Artificial Intelligence Institute, involves a technology’s impact on ‘vulnerable’ populations. Marshall wondered, “how do we operationalize a technology for vulnerable populations, i.e., persons of color?” Almon questioned, “Will those who create [emerging technologies] take into consideration vulnerable populations?” “Do you know what harm can be done and are you doing something to address it?” “As a developer, are you using diverse testers?” Sethumadhavan (Microsoft) then reminded the listeners that “another vulnerable group is children”, adding also that “women have more virtual reality sickness, so does that create disproportionate evaluation” of their skill set when XR is used to assess workplace performance?

For me, the message suddenly hit home. I recalled some AI user-interface testing in word and facial recognition technologies that I recently completed, enduring painful, hours-long marathon sessions. I distinctly remember that, over the two days of testing, nearly all of the test subjects were white. What a critical issue in this age of increased attention to systemic inequities. —Len Scrogan