May 26, 2024 (Updated on May 27, 2024)

An article by Kate Irwin in PCMag this weekend describes how undergrads at Princeton University were using social media and dance to “promote peaceful use cases and gauge public opinion” on Boston Dynamics’ robot dog, aka “Spot,” for an engineering course titled “Robots in Human Ecology: A Hands-on Course for Anthropologists, Engineers, and Policymakers.” The class was co-led by Alexander Glaser, a Professor of Mechanical and Aerospace Engineering, and Ryo Morimoto, a Professor of Anthropology.

Irwin notes that, for their class, the students “taught” Spot to dance, because of course dancing is going to be the foremost application for robots in society. Not warfare. Not surveillance. Not replacing humans in the labor market.

Although Boston Dynamics has vowed not to weaponize its robots for the foreseeable future, early research and development of its robot technology was funded by the Defense Advanced Research Projects Agency (DARPA). Boston Dynamics is now trying to distance its public image from its “complicated history with the military” in order to broaden the market for its products, but its historical role in the development of robotics for warfare should not be erased.

It’s in this light that some aspects of the students’ coursework struck me as questionable, including the strategies they employed to make the robot technology appear less threatening. In addition to using dance videos on TikTok as a form of artwashing (the use of art to improve public opinion of, or deflect public scrutiny from, an otherwise controversial issue, akin to greenwashing), we also see the students highlighting flaws in Spot’s technology in an effort to make the robot dog seem at once highly advanced and bumbling; artificial yet “cute”; competent, but not enough to be threatening. So, for example, in response to a TikTok user’s comment pointing out Spot’s capacity for being weaponized (“is this what I’ll see before it shoots”), the students posted a video showing the robot fallen over, looking pitiful, with the caption, “Wouldn’t be too worried about this robot’s capabilities… it literally fell over from trying to step to the side.” So, while Spot is so advanced that it can be programmed to do ballet, it is also vulnerable and imperfect, and would never usurp its human masters… because, we’re so perfect? In reply to the video, another user commented, “aww that poor baby :(”, demonstrating the effectiveness of the anthropomorphization.

I believe we’ve recently passed an inflection point in how the media covers AI and robotics. Now that these technological genies are out of the bottle and already everywhere, the conversation is shifting from hype to reassurance, and perhaps even denial. Both the companies and the people reacting to their products are increasingly downplaying the capabilities of these technologies. An existential threat to humankind? Ha!

A student is quoted by the PCMag reporter as saying that “The media has the masses thinking that robots are super-duper advanced and autonomous. But that’s not quite true.” But the argument that these technologies are flawed doesn’t erase the fact that they are continuously being improved, designed to surpass human capabilities, and likely to do so in the near future. More importantly, the flaws in AI and robotics don’t erase the fact that they are already being deployed, and at this moment people’s words, ideas, safety, privacy, creativity, labor, and employment are being stolen or sidelined. Perfection is never the goal with these technologies; obsolescence is. The idea that we should not feel concerned about these technologies now, but should take a wait-and-see approach until they are even more advanced and embedded in society, is logic as flawed and dangerous as the technologies themselves. So it’s very concerning to me that a Princeton student’s takeaway from a class purportedly about exploring the implications of robotics and AI is, essentially: don’t worry about it.

The robot dog’s moments of failure, as captured by the Princeton students, are meant to appeal to our humanocentrism, i.e. to reassure us by indulging our self-defeating need to feel superior and dominant as a species; in that way, those captured moments of failure are as performative and nonaccidental as the videos’ choreographed dance routines. The videos were curated precisely to make us forget that fact. This could be justifiable for an arts or marketing class intended to evoke a public reaction, but for an anthropology assignment purportedly examining the ethics of a private corporation’s technology, and purportedly about engaging the public in open discussion of these technologies, I find this mix of marketing exercise and education highly unsettling, even exploitative, for an undergraduate course.

According to the Princeton Engineering News site, which recently published its own story on the course, students in the class “discuss potential roles, meanings, and ethics of robots in society, as well as gaining hands-on experience manipulating the robot for generating ethically-sound and community-engaged applications on campus.” But while there certainly are many positive applications for robotics in society, even the co-instructor’s stated objective, “to develop a ‘civilian use case’ for Spot,” seems arbitrarily narrow and designed to limit discussion about real-world cases and their ramifications. Just last year, NBC News reported that the NYPD was deploying Spot to help with “crime-fighting.”

It was interesting to observe the mental gymnastics the Princeton students used to frame their marketing as educational “research.” Gigi Schadrack, one of the students in the class, claimed that she posted videos on TikTok showing herself dancing ballet with the robot “to explore how bipedal movements could be translated to a quadruped.” Another student, Wasif Sami, claimed that the videos provided an opportunity to do an “anthropological analysis of the work and its public response.” According to Sami, “As a group, we grappled with how our playful, performative content exists in dialogue with higher-stakes impacts of technology.”

I found it fascinating, from a purely ethnographic perspective (of course), to observe the code-switching the students do between the academic, jargony language they use to elevate their work (“this is an anthropological analysis of performance around higher-stakes impacts of tech”) and the patronizing, cutesy language they use to communicate with the public and downplay concerns about the technology (“these robots are not super-duper advanced”). That switch highlights the inauthenticity of the exercise. (Note to others: accessibility is not the same as infantilization.)

Another engineering student at Princeton was more transparent about the purpose of their work, explaining that the course allowed her “to see how we can influence the community and their perceptions about robotics.” But while the educational components of the course seem to benefit the students at Princeton, the marketing components, which included inviting children to campus to “pet the robot,” seem most likely to benefit Boston Dynamics. As these students graduate and move on to jobs at Lockheed Martin, Microsoft, and Google, it’s important to ask: what is the lasting impact of their “community outreach”? In which communities are robots most likely to be deployed for policing? I’m reminded of the ways scholars like Jay Dolmage have described the communities surrounding universities as sites of exploitation, where the community becomes instrumentalized as an object of study while all the benefits of that research stay within the university.

