Consider this scene from the 2014 film Ex Machina: a young programmer, Caleb, is in a room with a scantily clad female robot, Kyoko. Nathan, a brilliant roboticist, stumbles in drunk and asks Caleb to dance with the Kyoko-bot. To get things started, Nathan presses a wall-mounted panel; the room lighting suddenly shifts to an ominous red, and Oliver Cheatham’s disco classic “Get Down Saturday Night” begins to play. Kyoko – who seems to have done this before – starts dancing wordlessly, and Nathan joins his robot creation in an intricately choreographed routine of pelvic thrusts. The scene implies that Nathan imbued his creation with disco functionality, but how did he choreograph the dance onto Kyoko, and why?
Ex Machina may not answer these questions, but the scene gestures toward an emerging field of robotics research: choreography. Broadly speaking, choreography is the practice of making decisions about how bodies move through space and time. In the dance world, to choreograph is to articulate movement patterns for a given context, typically optimizing for expressiveness rather than utility.
While questions of bodily movement are central to both dance and robotics, the disciplines have rarely overlapped. On one hand, the Western dance tradition has maintained a broadly anti-intellectual culture that presents real obstacles to interdisciplinary research. George Balanchine, the renowned founder of the New York City Ballet, famously told his dancers, “Don’t think, dear; do.” In a culture of this kind, the stereotype of dancers as servile bodies – better seen than heard – calcified long ago. Meanwhile, the field of computer science, and robotics by extension, has comparable, if distinct, body problems. As sociologists Simone Browne, Ruha Benjamin and others have shown, there is a long history of new technologies casting human bodies as mere objects of surveillance and speculation. The result has been the perpetuation of racist, pseudo-scientific practices such as phrenology, mood-reading software and AI that claims to infer whether you are gay from the way your face looks. The body is a problem for computer scientists too, and the field’s overwhelming response has been technical “solutions” that seek to read bodies without meaningful input from their owners. That is, an insistence that bodies be seen but not heard.
Despite this historical divide, it is not a stretch to consider roboticists a specialized kind of choreographer, and to imagine that integrating choreography and robotics could benefit both fields. Robot movement is not usually studied for meaning and intention the way dancers’ movement is, yet roboticists and choreographers grapple with the same basic concerns: articulation, extension, force, shape and effort. “Roboticists and choreographers aim to do the same thing: understand and communicate subtle choices in movement within a given context,” writes Amy LaViers, a certified movement analyst and founder of the Robotics, Automation and Dance (RAD) Lab, in a recent National Science Foundation-funded paper. When roboticists work choreographically to determine robot behavior, they make explicit decisions about how human and nonhuman bodies move in intimate relation to each other. This differs from the utilitarian parameters that govern most robotics research, where optimization reigns supreme (does the robot accomplish its task?) and what a device’s motion means, or how it makes someone feel, is of no obvious consequence.
Madeline Gannon, founder of the research studio ATONATON, leads the field in her exploration of robotic expressiveness. Her installation Manus, commissioned by the World Economic Forum, exemplifies the possibilities of choreo-robotics both in its deft choreographic thinking and in its achievements with novel machine technology. The piece consists of 10 industrial robot arms displayed behind a transparent panel, each powerful and dramatically lit; they recall the production design of technodystopic films like Ghost in the Shell. Robotic arms of this kind are built to perform repetitive labor and are commonly used for utilitarian tasks such as painting car chassis. Yet when Manus is activated, the arms exhibit none of the expected, repetitive rhythms of the assembly line; instead they seem alive, each moving individually as it animatedly interacts with its surroundings. Depth sensors installed at the base of the robots’ platform track the movement of human observers through the space, measuring distances and reacting to them iteratively. This tracking data is shared across the robot system, functioning as a collective sense of sight for all the robots. When a passer-by moves close enough to a robot arm, it “looks” at them by tilting its “head” toward the stimulus, then leans in closer to engage. Such simple, subtle gestures have been used by puppeteers for millennia to fill objects with animus. Here they have the cumulative effect of making Manus seem curious and remarkably lifelike. These little choreographies give the impression of personality and intelligence; they are the functional difference between a random assortment of industrial robots and the coordinated movements of intelligent pack behavior.
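The sense-and-respond loop described above – depth sensors measure distances to observers, and an arm tilts its “head” toward anyone who comes close enough – can be sketched in a few lines of code. This is a minimal illustration only: the function names, data shapes and engagement radius are my assumptions, not ATONATON’s actual implementation.

```python
import math

# Assumed engagement radius in metres; the real installation's threshold is unknown.
ENGAGE_RADIUS = 1.5

def nearest_observer(arm_pos, observers):
    """Return (distance, position) of the observer closest to this arm,
    or (inf, None) if no one is being tracked."""
    best = (math.inf, None)
    for obs in observers:
        d = math.dist(arm_pos, obs)  # Euclidean distance (Python 3.8+)
        if d < best[0]:
            best = (d, obs)
    return best

def head_angle(arm_pos, observers):
    """Angle (radians) the arm should tilt its 'head' toward, or None
    if no observer is within the engagement radius (arm stays idle)."""
    d, obs = nearest_observer(arm_pos, observers)
    if obs is None or d > ENGAGE_RADIUS:
        return None
    dx, dy = obs[0] - arm_pos[0], obs[1] - arm_pos[1]
    return math.atan2(dy, dx)
```

Because every arm reads the same shared `observers` list – the “collective sense of sight” – each one independently orients toward whoever is nearest, which is what produces the coordinated, pack-like effect from purely local rules.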