enculturation

A Journal of Rhetoric, Writing, and Culture

The Digital Chironomia

Steven Smith, North Carolina State University

(Published November 18, 2019)

Introduction

Gesticulation is an everyday part of our lives. First considered by Aristotle and then by other philosophers such as Cicero, Quintilian, and Gilbert Austin, the examination of gesticulation, along with its vocal elements, and its crucial role in orality was once a focal point in the study of rhetorical delivery. Since the 19th century, however, delivery has often been underrepresented by much of the rhetoric community, with more emphasis put on the study of the voice than on the study of gesture. In order to fill this gap, scholars such as Collin Gifford Brooke and Sean Morey have sought to explore the conventions of delivery in light of new media technologies like web 2.0 and touch-screen interfaces. But while their examinations of the role that embodiment can play during its interaction with these technologies and interfaces have helped restore the integrity of rhetorical delivery, very little attention has been paid to the relationship between natural user interface (NUI) technologies and gesticulation, essentially raising the question: “how does the body malleate in the wake of NUI technologies?”

To answer this question, I designed and coded several gestures from Gilbert Austin’s Chironomia, or A Treatise on Rhetorical Delivery into the multimedia program TouchDesigner via the Microsoft Kinect v2—a motion-sensing input device. When a participant approaches the installation, the Kinect begins to track their skeletal joints (e.g., head, shoulders, elbows, and knees) and place their body on an imaginary plane. When they move their body, x, y, and z coordinates are tracked across said plane, and if the stance is held correctly for approximately 1.5 seconds, the program recognizes the gesture and plays an audio file. The nature of the program and the Kinect sets the stage for participants to reenact 19th-century gestures found within the Chironomia by manipulating their bodies to mimic the desired gesture, while allowing self-reflection on the body’s ability to reconfigure itself while engaging with NUI technologies.
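The hold-to-trigger behavior described above can be sketched in plain Python. This is an illustration of the logic only, not the project's actual code (which lives in TouchDesigner's node parameters); the class and method names, and the exact timing mechanism, are my assumptions:

```python
# Sketch of the hold-to-trigger logic described above: each frame, the
# Kinect reports whether the current skeleton matches the target pose, and
# the audio "reward" fires only after the pose has been held ~1.5 seconds.
# All names here are illustrative, not taken from the actual project files.

HOLD_SECONDS = 1.5  # approximate hold time described in the essay

class GestureTrigger:
    def __init__(self, hold_seconds=HOLD_SECONDS):
        self.hold_seconds = hold_seconds
        self.held_since = None   # timestamp when the pose was first matched
        self.fired = False       # prevents re-triggering while pose is held

    def update(self, pose_matched, now):
        """Feed one frame; returns True on the frame the audio should play."""
        if not pose_matched:
            self.held_since = None
            self.fired = False
            return False
        if self.held_since is None:
            self.held_since = now
        if not self.fired and now - self.held_since >= self.hold_seconds:
            self.fired = True
            return True
        return False
```

Fed one frame at a time, the trigger fires once when the pose has been continuously matched for the hold duration, and the timer resets whenever the pose is broken.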

This kit includes an essay on embodiment, habit, and gesticulation, as well as an instruction set for re-creating one gesture found in the Chirologia: “Supplico.” The kit is designed so that even the most novice users of TouchDesigner and the Microsoft Kinect can create a working gestural installation that is capable of recognizing any desired pose. 

Embodiment, Habit, Rhetorical Delivery, and The Digital Chironomia

The Digital Chironomia is a response to ongoing conversations occurring in the contemporary uses of rhetorical delivery (Brooke, Porter, Morey), procedural rhetoric (Bogost), habit (Holmes), and physical computing (Rieder), and is an attempt at revitalizing, or adding to, scholarship dedicated to what is often the most underrepresented canon of rhetoric. Through the recreation of Gilbert Austin’s Chironomia, or A Treatise on Rhetorical Delivery and John Bulwer’s Chirologia, Or the Natural Language of the Hand using the multimedia program TouchDesigner, this article and installation seek to explore the ramifications of the “habitual” gestures found in classical treatises dedicated to delivery, specifically focused on gesticulation rather than oratory. In what I would categorize as an archival, theoretical, and methodological approach, the Digital Chironomia project is an attempt to digitize 19th-century gestures into a program that can track a participant’s embodied movements via the Microsoft Kinect v2—a motion-sensing input device. This project draws our focus to embodiment, specifically gesticulation, and the ways in which our bodies are malleable, as well as the potentialities of coding and archiving gestures through natural user interface (NUI) technologies. Directions to code one gesture from Bulwer's treatise, “Supplico,” can be found in the instruction set of this article. If worked through carefully, the instructions should provide the means to code other gestures as one sees fit.

Figure 1. Supplico.

The rhetorical canon of delivery provided inspiration for this project, including those pieces written during classical antiquity and those written much later in the 18th and 19th centuries that explored the body’s role during an oration. While much time could be dedicated to exploring the history of delivery, what is important to this essay dedicated to the Digital Chironomia installation is that, traditionally, more attention has been paid to vocal aspects than those of gesticulation. From Isocrates, who once claimed that “discourse intended to be read has some disadvantages in that it is ‘robbed of the prestige of the speaker, the tones of his voice, the variations which are made in the delivery,’” to Aristotle, who believed that “delivery is concerned with the regulation of the voice which must be properly managed in accord with each emotion as to volume, pitch, and rhythm,” tonal aspects have been given priority over gestural, and this more or less remained the case until Quintilian took up teaching the art of gestures at the turn of the century (Nadeau).[i]

Today, the study of gesticulation is often taken up by, but not limited to, those in the humanities (though some animal biologists certainly study bodily communication in various species). In the subsequent section I talk briefly about contemporary studies conducted on gesticulation, but for my own research purposes I found that, given the rise of scholarship dedicated to “digital delivery,” or how the traditional canon of delivery works in today’s digitally-driven society, scholarship concerning gesture was still somewhat sparse. Sean Morey certainly devotes time to how the body extends itself into cyberspace via digital technologies, but spends less time discussing how traditional aspects of gesture may be explored via digital technologies. David Rieder, on the other hand, has explored how some technologies, such as the Kinect or the Arduino, “can influence a participant’s experience of their spatio-temporal relation to their bodies” which allow us to create installations that can “focus on corporeal performance, posture, and gestures” from the canon of delivery (24). The recent emergence of available technologies that can track a user via sensors or touch, combined with increasing scholarship dedicated to rhetorical delivery, led me to consider how gesture could be explored through digitally-mediated technologies—in particular, natural user interfaces (NUI) and the Microsoft Kinect. My approach is intended to focus more on the gestural aspects of delivery as opposed to the tonal in the wake of voice-recognition software such as Amazon’s Alexa or Microsoft’s Cortana, in hopes of enticing scholars to explore the often-underrepresented gestural dimensions of rhetorical delivery.

Intended to be representational of 19th century gestural practices, the Digital Chironomia project requires a participant to pose and hold an artificial gesture modeled after those within Austin’s Chironomia and Bulwer’s Chirologia. That pose is then recognized via the Microsoft Kinect’s skeletal tracking system and fed into the TouchDesigner program. When the pose is held, a “reward” mechanism displays in the form of an audio file explaining the conventions of whatever gesture is recognized. For example, when holding “Supplico” correctly, an audio file plays which explains that the gesture is meant to “implore” someone to do something. Thus, the Digital Chironomia is unique in its ability to blend classical elements of rhetorical delivery and contemporary issues in media studies and embodied malleability (as well as alternative ways of archiving, which may warrant a reader’s consideration for their own projects). 

Figure 2. Required Tools for the Digital Chironomia.

Plastic Bodies, (Un)Natural Expressions

As a habitual, oftentimes natural phenomenon, gesticulation is first instanced in the aftermath of birth, often represented by a frowning, crying baby, and then accentuated through cultural practices such as the “thumbs up” to represent approval. Thus, gestures find themselves deeply rooted in human (and animal) social practices. Encompassing nearly every aspect of communication, gestures exist as “frozen moments in cave paintings to avatars in cyberspace” and may range from “gender appropriateness (a man in trousers sitting with knees spread) or inappropriateness (a woman sitting knees spread in a skirt); they may be religious (a sign of blessing) or profane (giving ‘the finger’); and they may be distinctively associated with social or cultural background (different forms of prayer)” (Cregan 2). Gestures may vary depending on the faith being practiced (Douglas xvii), the social class of the gesturer (Bryson 139), or even the gender of those enacting the gesture, where “in many cultures, girls are encouraged to be more generally contained in their movements than boys from a very early age” (Cregan 2). In some cases, gestures may be substituted when a pair does not have a common language, as gesture oftentimes acts as a form of universal, symbolic communication common to mankind (Cregan 3; Kendon). Thus, depending on the situation, bodily gesticulation may be malleated to suit one’s particular needs in almost any given circumstance.

The idea of the body being malleable harkens back to William James’ Habit, a psychological examination of habit in living creatures. While his work does little to contribute to the study of gesture (aside from a few mentions of gesture, James does not delve much into the study), he does provide a more worthwhile examination of the body and its ability to be plastic or molded. For James, plasticity is a way of understanding how bodies are malleable and “plastic”—easily capable of changing or developing new habits depending on the context: “Plasticity, then, in the wide sense of the word, means the possession of a structure weak enough to yield to an influence, but strong enough not to yield all at once. Each relatively stable phase of equilibrium in such a structure is marked by what we may call a new set of habits” (6); the body can change and still maintain homeostasis, and this is because of the “plasticity of the organic materials of which bodies are composed” (6).

In the rare instances that James does mention gesture as habitual, he simply refers to it as one of our “personal habits” and describes those habits as “vocalization and pronunciation, gesture, motion, and address,” or simply the ways we say things and how we act when speaking (which is to say the same aspects of delivery if we are referring to rhetoric) (53). His psychological examination of habit written in 1914 undoubtedly provides an interesting insight into the interior/exterior mechanisms of bodily habit, and its influence on rhetoric has somewhat been taken up by contemporary scholars such as B.J. Fogg and Steve Holmes. While Fogg’s work is not an explicit study of rhetorical habit, he does “argue for an extension of rhetoric’s historic roots in effective argumentation to include forms of nonconscious behavioral reinforcement monitored by computational algorithms and real-time feedback” (Holmes, “Ethos, Hexis”). B.J. Fogg’s work has certainly caught significant criticism for his lack of engagement with classical sources, but his call to braid elements of computation and behavioral reinforcement together has opened the possibilities for qualitatively-minded rhetoricians to explore the affective relationship between human/machine. Although Fogg is referring more to technologies that track nonconscious, homeostatic changes in real-time, such as a Fitbit watch, the Digital Chironomia project is one that takes into consideration the possibilities of NUI technologies and how they may be used during in situ rhetorical studies to track bodily malleability akin to James’ study on the body.

Steve Holmes’ work, on the other hand, seeks to redefine digital rhetoric as the “idea that behavior change is rhetoric” and he seeks to justify this claim through a Derridean deconstructive approach via embodiment, habit, and materiality (Holmes, “Ethos, Hexis”). In his work, Holmes explores how hexis, or habit, may allow contemporary digital rhetoricians a means to view persuasive technologies more intricately and explore how those technologies may alter the perception of individuals engaging with said technologies; arguing that developing embodied habits using persuasive technologies can alter the ways that we act in, and perceive, the world. But while his definition of digital rhetoric is also one that may be traced back to the idea of rhetorical delivery—that is, delivery as a means to alter the behavior of not only oneself, but also of an audience (Aristotle’s and Cicero’s work contributes much to this idea, though Demosthenes and others were undoubtedly writing on the impact that delivery had on peers)—of relevance here is the work of Gilbert Austin and John Bulwer and their writings dedicated to gesticulation, as it served as the framework for the Digital Chironomia.  

In Chironomia and Chirologia, both Bulwer and Austin provide a manual treatise of particular gestures used during their lifetime. The gestures explored act not only as a manual of the hands and body, but also of how they work during public address. For example, the gesture Gestus II: Oro [I Pray] is a gesture wherein the conjuror raises the hand or spreads it out towards heaven, which Bulwer writes is a “habit of devotion” and typically practiced “by those who are in adversity and in bitter anguish of the mind, and by those who give public thanks and praise to the most high” (23). This gesture speaks not only to instances of a reflexive habit, acting in accordance with James’ work on the subject of malleable bodies that change depending on the context, but also as a means to showcase how gesture can be a vehicle for behavioral change which can be explicated through Holmes’ notion of digital rhetoric; in this case, the act of raising one’s hands in the pattern of Gestus II instills the actor with feelings of anguish, pride, obedience, etc., contingent on the reason for enacting the gesture in the first place, which results in a change in behavior.

Figure 3. The Chironomia.

Several hundred gestures are explored within the confines of the Chirologia and the Chironomia, and the latter work comes equipped with illustrations to provide proper accounts of popular gestures used during the 19th century (Chirologia, on the other hand, only provides descriptions). Each gesture denotes a particular message to not only the viewers, but also to the actor via their behavioral reasoning for performing that gesture. By most accounts, Bulwer and Austin would agree that gestures are habitual. Augmented with James’ theories on plastic bodies, Fogg’s inclusion of behavioral reinforcement via computational algorithms, and Holmes’ notion that rhetoric is behavior change, the Digital Chironomia acts as an embodied experience that relies on a user’s mutability to reenact the coded gestures programmed in the TouchDesigner software and enacted via the Microsoft Kinect’s motion sensor. As such, the project itself is one interested in the ways in which a user’s performance of these gestures allows for an exploration of how our bodies adapt physiologically to uncommon gestural practices that are governed by contemporary technologies. Questions of adaptability will undoubtedly be raised, as well as those concerning the affective experiences that one may have when participating in the installation. A closer look at how one experiences behavioral changes during participation in the Digital Chironomia will be explored in more qualitative installments in the future, but for now I would like to direct this article towards the construction of the Digital Chironomia via the principles of procedural rhetoric.

Enacting Procedurality

Unlike the enactment of gestures themselves, the implementation of them into TouchDesigner is not a natural phenomenon. And while this may pose problems for those interested in natural gesticulation, it allows critical makers and qualitative researchers an opportunity to focus on the body when it encounters NUI technologies. In what is a lengthy trial and error process, coding a gesture typically involves setting up the Microsoft Kinect in a spacious room and moving back and forth from software to hardware in order to properly input the range of the bodily coordinates one is tracking. For example, when coding “Supplico,” a user will have to place their hands in the correct position in relation to one another, as well as in relation to their location from the hips—if the hands are too far apart from one another, or too far up or down, then the gesture will not be recognized by the software. Thus, coding and enacting each gesture for the Digital Chironomia was far more procedural than natural. 

Figure 4. Approximate Hand Positions for "Supplico."

As defined by Ian Bogost in his work Persuasive Games, procedures are typically understood as “established, entrenched ways of doing things” (3). Some examples of procedures might entail the routine that a court of law follows during a trial, raising one’s hand in a classroom to speak, or the constraints and affordances of a software program. Procedures, then, are seemingly embedded behaviors through which a person, or program, acts. Bogost’s interest, however, is in what he calls procedural representations, or the relationship of “processes with other processes” that act as a “form of symbolic expression that uses process rather than language” (9); in other words, these procedures are “made up of rules rather than letters” (9). As such, procedural representations require “inscription in a medium that actually enacts processes” (9); for example, humans and computers are both good actors for enacting processes given their capabilities for executing procedures. For the purposes of the Digital Chironomia, procedural representation relies on the project’s ability to explore the relationship between the human actor’s performance of the conventions programmed within TouchDesigner, and TouchDesigner’s competence at executing those conventions that the human actor must follow.

In a sense, because of procedural expressions the Digital Chironomia has become a project of procedural rhetoric. As the “art of persuasion through rule-based representations and interactions rather than the spoken word, writing, images, or moving pictures,” procedural rhetoric seeks to explore how things work persuasively (Bogost ix). Throughout his book, Bogost relies on videogames to explain the idea of procedural rhetoric, as they require user interaction, but he is quick to note that any medium is capable of procedural rhetoric, so long as those procedures are acting persuasively—for example, procedural rhetoric also finds itself embedded in politics, advertising, and education, and he explores these possibilities through his discussion of various videogames. Because they act in a kind of reciprocal relationship with reality, Bogost argues that the parameters an actor must follow while playing a videogame can have an impact on how they perceive and act in the world.

Since the Digital Chironomia is a project that is designed to explore the malleability of the body, Bogost’s exploration of how procedures could act rhetorically was crucial during the design process of this installation. Initially, the coding of a gesture begins with the connection between the Kinect itself and the TouchDesigner program, which is generally as easy as creating the Kinect “node” (TouchDesigner language) within the software and standing in one location until it can track the body. From there, one must parse out which body parts they want TouchDesigner to recognize; for purposes of “Supplico,” it was necessary to parse out the hip and hands, as each are used to track the correct position of the gesture. From there, the Kinect provides the coordinates of each body part across an imaginary plane, and the user must then parse out their desired data from each bodily limb. For example, for “Supplico” to be recognized by the software I needed my hands to be in relation to one another greater than 0.5 degrees on the imaginary plane and less than 0.2 degrees in relation to my hip. More detail about that is provided in the instruction set of this essay, but from a procedural standpoint the Digital Chironomia relies on the manipulation of the visual plane within the Kinect that tracks the body; as a result, the participant must follow the procedures embedded in the software for it to recognize whatever gesture is being enacted. Following Bogost’s idea of procedural representations, the participant must follow the system that governs his/her actions, and these systems are constructed through computational expressions that typically require interactivity.
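The parameter-setting process described above amounts to defining a band of acceptable values for each tracked bodily relation; the pose is recognized only when every measured value falls within its band. A minimal plain-Python sketch of that idea follows, with band values loosely modeled on the figures quoted above but ultimately placeholders (TouchDesigner expresses these checks through node parameters rather than code like this):

```python
# Sketch of threshold-based pose matching as described above. TouchDesigner
# performs these checks through node parameters; here the same idea is a
# dictionary of (min, max) bands, one per tracked bodily relation. The
# specific band values are placeholders, not the installation's settings.

def matches_pose(relations, bands):
    """relations: measured values, e.g. {'hands_apart': 0.6};
    bands: acceptable (min, max) range per relation."""
    return all(lo <= relations[name] <= hi
               for name, (lo, hi) in bands.items())

# Hypothetical bands loosely modeled on the figures quoted in the essay:
# hand-to-hand separation above 0.5, hand-to-hip offset below 0.2.
SUPPLICO_BANDS = {
    "hands_apart":  (0.5, 1.5),   # hands spread on the tracked plane
    "hands_to_hip": (0.0, 0.2),   # hands close to the hip
}
```

A pose with any relation outside its band simply fails to match, which mirrors the trial-and-error tuning described above: widening or narrowing a band is the coding work.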

Figure 5. Hand/Hip Body Relations.

While not a videogame, the Digital Chironomia requires a participant to act in accordance with the conventions that have been coded in the TouchDesigner software. From a procedural rhetoric standpoint, the installation itself coerces the participant to follow the “rule-based representations and interactions” that are key to this specific form of persuasion (Bogost ix). As a participant approaches the Kinect, and engages in the installation, they must contort their body in such a way that mimics the desired gesture to be recognized. While “Supplico” requires the users to hold their hands in a specific position in relation to one another as well as to their hips, for example, Gestus II might require their hands to be located above their shoulders. In either scenario, the participant alters his or her body to fulfill the demands set forth by the program, and when a gesture is enacted by a participant a reward mechanism that explains the use of each gesture is triggered via an audio file.

Observing the enactment of the installation yields results for not only my own interests, but also for a participant’s. As an observer, I can witness firsthand how the body adapts to the TouchDesigner processes and NUI technologies and the implications of exploring gestures that have been coded into software. Observation of the body during participation yields opportunities for exploring the relationship between embodiment and technological agency both qualitatively through direct observation and theoretically through theories of media, aesthetics, sensation, and posthumanism. For a participant of the Digital Chironomia there also exists the possibility of self-reflection regarding the body, as well as the opportunity to engage with the 19th century canon of delivery in light of NUI technologies; this acts in accordance with at least one realm where procedural rhetoric shines—education—as a means for participants to perhaps explore how gestures are oftentimes physical representations of behavior, or simply understand how gestures were used in the 19th century. There are undoubtedly other possibilities for exploration as well, and hopefully readers of this essay will find those encoded in their own reproduction of Supplico.

Conclusion

As a study that is concerned with the intersection of digital media and embodiment, the Digital Chironomia capitalizes on the idea that our bodies are our primary medium. Merleau-Ponty’s claim in his work The Phenomenology of Perception that “the body is our general medium for having a world” has served as an important reminder for the early development of this project (146). Merleau-Ponty and I are in agreement that we have a means to make sense of the world—particularly through our senses and experiences—due to what some might refer to as the process of embodiment, or the vehicle which allows us to sense and experience. Gestus II, Supplico, and any gesture, for that matter, serve as representations of embodied processes that I have sought to argue as habitual. Using James’ ideas that the body is plastic, and Steve Holmes’ ideas that “behavior change is rhetoric,” it has been my goal in the Digital Chironomia project to explore the ways in which our bodies physiologically adapt in the wake of digital technologies. 

While the work itself is inherently inspired by the traditional canon of delivery, John Bulwer’s Chirologia, and Gilbert Austin’s Chironomia, from a production standpoint, the installation draws from the conventions of Ian Bogost’s idea of procedures and procedural rhetoric. When coding Supplico or any other gesture, certain rules must be followed that a participant must execute for the Kinect to recognize the pose—for example, hands in a specific location in relation to another part of the body. As a participant follows these conventions, there becomes an opportunity to learn more about their own embodiment and its relationship with technology. Thus, while the program itself (hopefully) persuades a participant to engage with the installation, there, too, exists an opportunity for self-reflection. 

Outside of the theoretical possibilities, there are also other potentials regarding the Kinect and TouchDesigner: several projects that incorporate the body in the TouchDesigner interface currently exist online, and tutorials to create such installations are easily searchable. As a multimedia tool, TouchDesigner is a fascinating software that has been used globally for production purposes, but its accessibility has increased its viability within academia for those unfamiliar with coding or without a production background. Although this essay and instruction set are tailored to gesticulation, TouchDesigner has capabilities that expand far past the conventions of rhetorical delivery and hopefully future producers will find this kit valuable to their own projects.

___

Instruction Set

The instruction set provided herein is for coding one gesture, Supplico, into the TouchDesigner program. Using these basic principles, users will be able to add more gestures of their choosing if desired. The following elements will be necessary: 

  • Software and Interface Requirements
    • TouchDesigner—to control the project’s interface, track human motion, and output audio
  • Hardware Requirements:
    • PC with multiple output ports for projections (USB 3.0 required)
    • Microsoft Kinect V2 for PC
    • Optional: Tripod (for Microsoft Kinect, unless mounted elsewhere)

This instruction set assumes that users will have a working knowledge of the Microsoft Kinect SDK—the software used to run a Kinect on the PC. While this process is simple, tutorials can be easily accessed online, and in most cases, it is as simple as downloading the SDK. 

For accessibility purposes, this document is split into three sections, with a brief description of each section at the beginning. It should be noted before production begins that some working knowledge of TouchDesigner may be necessary, as this project can be somewhat complex—regardless, I have done my best to make sure even the most novice of users can grasp this assignment. The TouchDesigner community also welcomes new users with open arms and may be accessed through social media sites like Facebook or the forums found on the www.derivative.ca website. There also exist several videos on YouTube and Wikipedia pages dedicated to understanding the concepts behind the software, and several videos explaining how to use the Kinect and the software together. One valuable book for composing in TouchDesigner is An Introduction to TouchDesigner by Elburz Sorkhabi, which was a key component to developing this project. It can be found online for free, along with project files and video tutorials.

Section One: Connecting the Kinect to TouchDesigner

In this first section, users will gain a basic understanding of TouchDesigner’s Dialogues, how to place nodes, how to connect nodes, and how to modify the parameters of each node. This will serve as the basis for the following sections. 

1.1: TouchDesigner and the Kinect 

Begin by opening the TouchDesigner software. Once opened, users should see a blank “grid” which should take up the majority of their screen. Right click to bring up a list of software options and click “Add Operator.” Users will then see a “Create Dialogue” menu with six tabs at the top (COMP, TOP, CHOP, SOP, MAT, and DAT). Click TOP for now, and then select Kinect. Place the “node” anywhere on the grid.

Note: You may also press “tab” instead of right clicking to get to the Dialogue menu. 

1.2: Adding Nodes

Repeating the same process, press tab to enable the Dialogue menu. With TOP selected again, select a MULTIPLY TOP. Repeat this process for the following: CONSTANT TOP, THRESHOLD TOP, and a second KINECT TOP. Scaffold each node as you wish, but be sure to place them in the same order as listed above.

1.3: Connecting Nodes

Note: This “chain” of nodes is simply to ensure that the Kinect is connected and received by the TouchDesigner software.

With there now being a total of five nodes, the user should use their mouse and connect each TOP via wiring operators. This is done by left clicking either the input or output section on each node and dragging a wire to another node’s input/output. There should be a total of four wiring operators connecting each node. If all is working properly, you should see yourself in the KINECT1 TOP node. See Figure 6 below. 

Figure 6. Connecting Nodes.

1.4: Setting up the KINECT CHOP

This chain of nodes is responsible for recognizing the body and parsing out the data we will later use to track the gesture.

Press tab and select the CHOP ribbon, then select a KINECT CHOP. Repeat this process and create a SELECT CHOP. Rename this CHOP “select_armChans” by clicking on the text within the node. With the select_armChans CHOP selected, a parameter box should appear in the top right-hand corner of TouchDesigner. In the Channel Names box enter “^*id”, in the Rename From box “*”, and in the Rename To box “t[xyz]”. Connect the two nodes together. Finally, right click the output section of the SELECT CHOP, choose the COMP ribbon, and then select BASE COMP. Rename this BASE COMP Supplico. Your setup should now look similar to Figure 7 with the KINECT CHOP displaying your body coordinates and the select_armChans CHOP parsing the data out from your arms.

Figure 7. Kinect to Base.

Section 1 Conclusion

This section is responsible for ensuring that the Kinect and TouchDesigner are connected and for setting us up for gestural input, which begins in the next section. Not much troubleshooting can be done at the moment to ensure your project is on the correct path, but you should, at the very least, double check to make sure the parameters within the select_armChans CHOP have been input properly. 

Section Two: Recreating the Gesture

In this section, users will begin parsing out bodily data from the Kinect so that the program can recognize certain body parts in relation to one another, ultimately allowing for the creation of the gesture. 

2.1: “Inside” the Supplico BASE COMP

TouchDesigner is an extremely powerful program that allows users to modify nodes by “going inside them.” In order to do this, you simply take your cursor and place it over the node, and then use your scroll wheel to “scroll inside” of the node. This will take you to an entirely new grid, and once inside you should see two unconnected CHOPS: an IN CHOP and an OUT CHOP. There should be no data on the OUT CHOP, and a faint dotted arrow on the input of the IN CHOP. If you have done this correctly, you should see something similar to Figure 8.

Figure 8. IN/OUT CHOPS.

2.2: Parsing Data

From the in1 CHOP within our Supplico node, create three SELECT CHOPS and rename them the following: "hand_right", "hand_left", and "hip". Select each node individually and set their Channel Names to the following: *hand_r*, *hand_l*, and *hip*. It may also be necessary to put an asterisk in the Rename From section of each node. A proper example can be seen in Figure 9. Essentially, you are parsing out the data from the KINECT CHOP that we created in Section 1. 

Once the three SELECT CHOPS have been made, create a MERGE CHOP followed by a NULL CHOP. Chain each SELECT CHOP node to the MERGE CHOP, and then the MERGE CHOP to the NULL CHOP. Once these steps have been followed, your interface should look similar to Figure 9. If done correctly, and if you are standing in front of the Kinect, the data from your right hand, left hand, and hip should be displayed within their respective nodes across an imaginary XYZ plane. 
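What the three SELECT CHOPS and the MERGE CHOP accomplish can be sketched in plain Python: each pattern pulls one body part's channels out of the full skeleton feed, and the merge recombines just those into a single stream. The channel names and values below are illustrative, not live Kinect data:

```python
from fnmatch import fnmatch

# Illustrative skeleton channels; a live Kinect CHOP exposes many more
skeleton = {
    "p1/hand_r:tx": 0.41, "p1/hand_r:ty": -0.12, "p1/hand_r:tz": 1.8,
    "p1/hand_l:tx": -0.39, "p1/hand_l:ty": -0.10, "p1/hand_l:tz": 1.8,
    "p1/hip:tx": 0.01, "p1/hip:ty": -0.45, "p1/hip:tz": 1.9,
    "p1/head:tx": 0.02, "p1/head:ty": 0.55, "p1/head:tz": 1.9,
}

def select(channels, pattern):
    # One Select CHOP: keep only channels matching the wildcard pattern
    return {name: v for name, v in channels.items() if fnmatch(name, pattern)}

# Three Select CHOPs, then a Merge CHOP recombining them into one stream
hand_r = select(skeleton, "*hand_r*")
hand_l = select(skeleton, "*hand_l*")
hip = select(skeleton, "*hip*")
merged = {**hand_r, **hand_l, **hip}  # head channels are excluded
```

Only the hand and hip channels survive the merge; everything else the Kinect tracks (here, the head) is discarded before it reaches the rest of the network.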

Figure 9. Hand and Hip Data Parsed.

2.3: Data Feed

This section is responsible for further parsing out each individual body part's XYZ coordinates so that they can be read individually and more accurately. It requires us to drag and drop the XYZ data from our NULL CHOP so that all of the data reads the same throughout each node.

Before anything else, select the "lock" button on the NULL CHOP, as we will be working from that CHOP for this step. Next, create three NULL COMPS and rename them the following: "hand_left", "hip", and "hand_right". If everything has thus far been done correctly, the data inside the NULL CHOP should include the XYZ coordinates of our left hand, right hand, and hip. Each extremity's data should be within its own rectangle inside of the NULL CHOP; for example, the data from your right hand should read as "0p1/hand_r:tx," "0p1/hand_r:ty," and "0p1/hand_r:tz". The "0" of each might read differently if you are standing in front of your Kinect. 

Next, select the “hand_left” NULL COMP. Under the parameters section (on the right), you should see a section titled “Translate” with three blank boxes—these blank boxes represent XYZ on our plane. So, in the NULL CHOP, take your cursor and select and hold the data for “0p1/hand_l:tx” and drag that into the left box of the NULL COMP’s “Translate” parameter. Next, select the data “0p1/hand_l:ty” from the NULL CHOP and put it in the middle box of Translate. Finally, select and drag the data of “0p1/hand_l:tz” from the NULL CHOP and put it in the right box of Translate. If everything is done correctly, the parameters within your “hand_left” NULL COMP should read and look exactly like Figure 10. Now, when you move your hand, the data within the Translate parameters should change accordingly. 
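The three drag-and-drops amount to routing one joint's tx/ty/tz channels into a Translate X/Y/Z triple. A minimal sketch of that mapping (the "p1" channel prefix and values are illustrative):

```python
def translate(channels, joint):
    # What the three drag-and-drops set up: feed the joint's tx/ty/tz
    # channels into the Null COMP's Translate X/Y/Z parameters, in order
    return tuple(channels[f"p1/{joint}:{axis}"] for axis in ("tx", "ty", "tz"))

channels = {"p1/hand_l:tx": -0.39, "p1/hand_l:ty": -0.10, "p1/hand_l:tz": 1.8}
print(translate(channels, "hand_l"))  # (-0.39, -0.1, 1.8)
```

As your hand moves, the channel values change and the Translate triple follows, which is exactly the live updating described above.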

Figure 10. NULL to COMP Data Parse.

Repeat this process for both the "hand_right" and "hip" NULL COMPS. Be sure to put each XYZ coordinate of your body into the respective Translate box. If done correctly, this section of the installation should look like Figure 11. 

Figure 11. Fully Completed Translate Parameters.

Section 2 Conclusion

This section can be somewhat difficult for beginners using TouchDesigner. Dragging and dropping data into the correct parameter may be confusing, but it is, quite literally, just that—dragging and dropping. Keep in mind that the boxes within the Translate parameter go in XYZ order, so you should drag and drop each piece of data into its respective box. Once this data has been input, the NULL COMPS should change depending on your location when you stand in front of the Kinect. Troubleshoot if necessary—go back and carefully read the directions if you find that the data is not being fed correctly. At this stage, though, that may be difficult, as the project isn't actually doing anything other than tracking your body. The next section will give more indication as to whether it is working. 

Section Three: Bodily Relations

This section will put each body part in relation to the others so that the program can correctly read the gesture Supplico. It will require inputting the data from our previously created NULL COMPS into newly created CHOPS. 

3.1: Hand to Hand

First, be sure to unlock the NULL CHOP that we locked in the previous section. Next, create the following: OBJECT CHOP, SELECT CHOP, EXPRESSION CHOP, and LAG CHOP. Chain each node together in the order created and repeat this process a total of four times so that you have four of each node. Rename each OBJECT CHOP the following: “hand_l_hand_r”, “hand_r_hand_l”, “hand_l_hip”, and “hand_r_hip” just as a reminder to show what each node is doing. This cluster of nodes should look like Figure 12 below. 

Figure 12. Body Cluster.

3.2: Hand References

Next, we are going to take the data from the previously created NULL COMPS and insert them into the newly created OBJECT CHOPS; this is relatively simple compared to the last step and just involves typing in some data. 

To begin, select the OBJECT CHOP labeled "hand_l_hand_r". In the parameters, you should see two boxes titled Target Object and Reference Object. In the Target Object box, type "hand_left", and in the Reference Object box type "hand_right". In the TouchDesigner interface, a dotted arrow should now appear from both the "hand_right" and "hand_left" NULL COMP nodes; this simply shows the data from those COMPS being fed into the OBJECT CHOP. If done correctly, this cluster of nodes should look like Figure 13. 
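What each OBJECT CHOP contributes is, by default, the straight-line distance between its target and reference objects—the "dist" value the later expressions will test. A sketch of that computation (the joint positions are illustrative, not live readings):

```python
import math

def object_chop_distance(target, reference):
    # The dist channel an Object CHOP produces: Euclidean distance
    # between the target's and reference's Translate values
    return math.sqrt(sum((t - r) ** 2 for t, r in zip(target, reference)))

hand_left = (-0.39, -0.10, 1.8)   # illustrative Translate values
hand_right = (0.41, -0.12, 1.8)
print(round(object_chop_distance(hand_left, hand_right), 3))  # 0.8
```

Because distance is symmetric, swapping target and reference (as the "hand_l_hand_r" and "hand_r_hand_l" clusters do) yields the same number; the two clusters matter because they will carry different threshold expressions later.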

Figure 13. Left to Right Hand Relation.

3.3: More Hand References

We’ll repeat this process for the next three clusters of nodes that we created at the beginning of this step. So, select the “hand_r_hand_l” OBJECT CHOP; in its Target Object write “hand_right” and in its Reference Object type “hand_left”. Again, a dotted arrow should appear from the “hand_right” and “hand_left” NULL COMPs. Next, select the “hand_l_hip” OBJECT CHOP and write “hand_left” in the Target Object and “hip1” in the Reference Object. Finally, select “hand_r_hip” and write “hand_right” and “hip1” in the Target Object and Reference Object, respectively. At this point, there should be quite a bit going on with your interface, but it should look like Figure 14. 

Figure 14. Data Feed.

Section 3 Conclusion

This section was, hopefully, relatively simple to follow, as it is essentially just ensuring that the data from one node is correctly typed into the receiving node. You may not see the “1 dist” values that appear in the various nodes above, or your data may be different. Either way, it is nothing to worry about, as the next section will seek to define these distances. 

Section Four: Defining Values

In this section, we’re going to input the values that we want TouchDesigner to react to when the Kinect reads our body. That being said, there are some variables to consider when we input values. For example, someone taller or shorter than myself may input different values, and the distance from the practitioner to the Kinect may vary from individual to individual. For my own purposes, I stand about 5 ft. away from the Kinect, and I am about 6'0" tall. I will provide the values that worked for me, and hopefully they will work for you, but you may have to play around with the data to get the desired results. 

4.1: Defining the Expression of hand_l_hand_r

Select the EXPRESSION CHOP of the “hand_l_hand_r” cluster and select the Expr tab under the parameters box. Click the drop-down icon under Expression 1 and type the following expression in the expr0 box: “me.inputVal<0.5”. The color of this line should be blue, and the Expression 1 value should change to 1. Essentially, this makes the node activate if the distance from our left hand to our right hand is less than 0.5 units on the invisible plane. 

4.2: Defining the Expression of hand_r_hand_l

Similar to the last step, select the EXPRESSION CHOP of “hand_r_hand_l” and select the Expr tab. Enable the drop-down icon under Expression 1 and type the following expression in expr0: “me.inputVal>0.3”. Expression 1 should change to 0. 

4.3: Defining the Expression of hand_l_hip

Select the EXPRESSION CHOP of “hand_l_hip” and type “me.inputVal>.2” in the expr0 box of the Expr tab. Expression 1 should change to 0. 

4.4: Defining the Expression of hand_r_hip

Select the EXPRESSION CHOP of “hand_r_hip” and type “me.inputVal>.45” in the expr0 box of the Expr tab. Expression 1 should change to 1.  

At this point, all of your expression tab’s values should be changed to mimic what worked for me (see Figure 15). Again, your mileage may vary depending on your physical location within the room, the location of the Kinect, and your height. If you are standing in front of the Kinect, you will notice that the values change depending on your location—this is normal, and means you are on the right track. If you do not notice these variables changing, please double check your work. 
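Taken together, the four expressions define Supplico as a conjunction of distance tests; note that the two hand-to-hand clusters measure the same distance, so between them they band it between 0.3 and 0.5. A sketch using my thresholds (yours may differ with height and distance from the Kinect):

```python
def is_supplico(d_hands, d_lhand_hip, d_rhand_hip):
    # The four Expression CHOPs, combined as the later Logic CHOP will be:
    # hands held close together (but not touching), both hands away from the hip
    return (d_hands < 0.5            # hand_l_hand_r: me.inputVal < 0.5
            and d_hands > 0.3        # hand_r_hand_l: me.inputVal > 0.3
            and d_lhand_hip > 0.2    # hand_l_hip:    me.inputVal > 0.2
            and d_rhand_hip > 0.45)  # hand_r_hip:    me.inputVal > 0.45

print(is_supplico(0.4, 0.6, 0.6))  # True: pose struck
print(is_supplico(0.9, 0.6, 0.6))  # False: hands too far apart
```

All four conditions must hold at once, which is why a single mistyped threshold keeps the whole gesture from registering.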

Figure 15. Expression Values.

4.5: Defining the LAG CHOP’S Delay

This step is responsible for a delay in the response of you holding your gesture; i.e., you must hold the pose for approximately 1.5 seconds before the program will recognize it as a gesture. That way, you cannot simply wave your hands around and accomplish the pose—you must strike it and hold it. 
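The LAG CHOP's effect can be approximated as a hold requirement: the pose only counts once its condition has stayed true for the full delay. A frame-based simulation sketch (a simple debounce, which approximates rather than reproduces the Lag CHOP's actual smoothing):

```python
def held_frames(signal, hold):
    # Fire only after the pose signal has been continuously on
    # for `hold` consecutive samples; a quick wave never fires
    run, out = 0, []
    for on in signal:
        run = run + 1 if on else 0
        out.append(run >= hold)
    return out

# A quick wave (3 frames on) vs. a held pose (6 frames on), hold = 5
print(held_frames([1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1], hold=5))
```

The first burst never reaches the hold threshold, while the sustained pose does—mirroring why you must strike and hold the gesture rather than wave through it.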

Simply click the LAG CHOP of the “hand_l_hand_r” cluster and, in each Lag box of its parameters, put 0.2. Each LAG CHOP will have the same data, so simply repeat this step for your remaining LAG CHOPS. 

Section 4 Conclusion

This section, again, can be a little frustrating for newcomers to TouchDesigner. It is truly a rinse-and-repeat process of figuring out the proper inputs for the software to recognize your pose. My suggestion is to hold the pose where you want it and read the data being registered by your OBJECT CHOP. For example, if you are holding your hands in the desired position and the OBJECT CHOP is tracking your hands at a distance of 0.6, input “me.inputVal>0.59” so that TouchDesigner registers the pose whenever your hands are farther apart than 0.59. 

Section 5: Final Nodes

This section is relatively simple, requiring minimal data input. That said, it will require you to create several nodes.

5.1: The LOGIC CHOP Cluster

Create a LOGIC CHOP and chain all four of the previously created LAG CHOPS into it. Next, create a NULL CHOP, another LOGIC CHOP, a SELECT CHOP, and an EXPRESSION CHOP. Chain the NULL CHOP to the LOGIC CHOP and the SELECT CHOP to the EXPRESSION CHOP. Change the Expression of the EXPRESSION CHOP to “me.inputVal>0”. Finally, change the Channel Names of the SELECT CHOP to *id*; this should create a dotted arrow to the SELECT CHOP from the KINECT CHOP we created at the beginning of this installation. If all is done correctly, this cluster will resemble Figure 16. 

Figure 16. Logic Chop Cluster.

5.2: LOGIC CHOP Cluster 2

Begin this step by creating another LOGIC CHOP and a TRIGGER CHOP. Chain the previously created LOGIC CHOP (logic3 in Figure 16) and EXPRESSION CHOP to the newly created LOGIC CHOP. Chain the LOGIC CHOP to the TRIGGER CHOP. With the TRIGGER CHOP selected, input the values from Figure 17 into the Sustain tab. 

Next, take the OUT CHOP that is located somewhere on your grid and chain the TRIGGER CHOP to it. Create another EXPRESSION CHOP, a NULL CHOP, and another LOGIC CHOP. Chain this cluster of nodes together, beginning with the OUT CHOP. Change the Expression of the EXPRESSION CHOP to “me.inputVal==1”. Expression 1 should be 0. 
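The point of the trigger stage is that the audio cue fires on the transition into the pose, not continuously while the pose is held. A sketch of that rising-edge behavior (a simplification of what the TRIGGER CHOP's cue pulse does):

```python
def rising_edges(signal):
    # Pulse once when the gesture signal goes 0 -> 1,
    # not on every frame it stays at 1
    prev, pulses = 0, []
    for v in signal:
        pulses.append(1 if v == 1 and prev == 0 else 0)
        prev = v
    return pulses

# Holding the pose for four frames cues the audio file once
print(rising_edges([0, 1, 1, 1, 1, 0]))  # [0, 1, 0, 0, 0, 0]
```

Without this edge detection, a held pose would retrigger the audio file on every frame instead of playing it through once.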

Figure 17. Logic Cluster 2.

5.3: Adding AUDIO FILE CHOPS

In this step, users will learn how to add audio files to the TouchDesigner project. This section is relatively simple for anyone with basic knowledge of uploading files. 

Press Tab and select an AUDIO FILE IN CHOP. Place this node in any desired location. In the parameters section, use the File parameter to browse to and add the desired audio file or effect. 

Figure 18. Audio File In.

5.4: Add an AUDIO DEVICE OUT CHOP

Right-click near the previously added CHOP, select an AUDIO DEVICE OUT CHOP, and place it in close proximity to the previously added node. Chain the two nodes together.

5.5: Connect the NULL CHOP to the AUDIO FILE IN CHOP

With the AUDIO FILE IN CHOP selected, drag the NULL CHOP to the “Play” parameter. It should turn blue and a dotted arrow should appear. 

5.6: Connect the LOGIC CHOP to the AUDIO FILE IN CHOP

With the AUDIO FILE IN CHOP selected, expand the “Cue” parameter so that “cuepulse” is present. Drag the LOGIC CHOP to the “cuepulse” parameter of the AUDIO FILE IN CHOP. It should turn blue and a dotted arrow should appear. By the end of this step, your project should look similar to Figure 19. 

Figure 19. Final Looks.

Conclusion

At this point, if you stand in front of the Kinect and mimic Supplico, the Kinect should register the pose and the audio file you uploaded should play. As you pose, you should see nearly all nodes change in accordance with your body, and the “0 dist” on most of our nodes will change. If you find that something isn’t quite working, it’s best to work backwards and check individual nodes that may be blocking subsequent nodes—i.e., if the distance of a node isn’t changing, that node may interrupt the entire project. 

More gestures can easily be added, but you will have to “scroll out” of this node cluster and create another BASE COMP from the main screen of your TouchDesigner project. TouchDesigner is capable of handling large amounts of data, but keep in mind that the free version of the software has some constraints, so purchasing the licensed software may be necessary. Videos could be added to the gestures, or silly images that appear when you hold the pose—it does not have to be as serious as the Chironomia, but my hope is that this instruction set will provide you with the means of working through TouchDesigner in creative ways that yield interesting potential for those of us in critical making. 




[i] Cicero undoubtedly put some emphasis on gesture, and he had a plethora of works dedicated to delivery, including De inventione, Brutus, Orator, De officiis, and even some of De oratore. Although he gave much attention to delivery and developed strategies for proper oration, he, in my opinion, gave an unsatisfactory approach to gesture, which was taken up in much more detail by Quintilian.

Works Cited

Aristotle. Rhetoric. Modern Library, 1954.

Austin, Gilbert. Chironomia, or A Treatise on Rhetorical Delivery. T. Cadell and W. Davies, 1806.

Bogost, Ian. “Procedural Rhetoric.” Persuasive Games: The Expressive Power of Videogames, MIT P, 2007.

Brooke, Collin Gifford. Lingua Fracta: Toward a Rhetoric of New Media (New Dimensions in Computers and Composition). Hampton P, 2009.

Bryson, Anna. “The Rhetoric of Status: Gesture Demeanour and the Image of the Gentleman in Sixteenth- and Seventeenth-Century England.” Renaissance Bodies: The Human Figure in English Culture, C. 1540-1660, edited by Lucy Gent and Nigel Llewellyn, Reaktion Books, 1990, pp. 136–53.

Bulwer, John. Chirologia: or the Natural Language of the Hand, and Chironomia: or the Art of Manual Rhetoric, edited by James W. Cleary. Southern Illinois UP, 1974. 

Cicero, Marcus T. De l'invention = De inventione, edited by Henri Bornecque. Garnier, 1932.

---. De Officiis, edited by Walter Miller. Heinemann, 1928.

---. On the Ideal Orator, edited by James M. May and Jakob Wisse. Oxford UP,  2001.

---. Brutus; Orator, edited by Otto Jahn and Wilhelm Kroll. Harvard UP, 1962. 

---. De Oratore; De Fato; Paradoxa Stoicorum; De Partitione Oratoria, edited by E. W. Sutton and H. Rackham. Harvard UP, 1942.

Cregan, K. “Gesture and Habit.” Key Concepts in Body and Society. Sage UK, 2012.

Douglas, Mary. Natural Symbols: Explorations in Cosmology. Routledge London, 1996. First published 1970.

Fogg, B. J. Persuasive Technology: Using Computers to Change What We Think and Do. Morgan Kaufmann, 2003.

Holmes, Steve. “Ethos, Hexis, and Persuasive Technologies.” enculturation: A Journal of Rhetoric, Writing, and Culture, 2016.

---. The Rhetoric of Videogames as Embodied Practice: Procedural Habits. Routledge Studies in Communication and Rhetoric, 2017.

Isocrates. Speeches and Letters, edited by George Norlin. Harvard UP, 1980.

James, William. Habit. University of Michigan Library, 1890.

Kendon, Adam. Western Interest in Gesture from Classical Antiquity to the Eighteenth Century. Cambridge UP, 2006.

Merleau-Ponty, Maurice. The Phenomenology of Perception, edited by Colin Smith, Routledge, 1962.

Morey, Sean. Rhetorical Delivery and Digital Technologies: Networks, Affect, Electracy. Routledge, 2015.

Nadeau, Ray. "Delivery in Ancient Times: Homer to Quintilian." Quarterly Journal of Speech, vol. 50, no. 1, 1964, pp. 53–60.

Porter, James E. “Recovering Delivery for Digital Rhetoric.” Computers and Composition, vol. 26, no. 4, 2009, pp. 207–24. 

Quintilian. The Institutio Oratoria of Quintilian. edited by Harold E. Butler. Heinemann, 1921. 

Rieder, David M. Suasive Iterations: Rhetoric, Writing, and Physical Computing. Parlor Press, 2017.

Sorkhabi, Elburz. An Introduction to TouchDesigner. Void, 2014, https://book.nvoid.com/#book.

Welch, Kathleen E. Electric Rhetoric. MIT P, 1999.