Tyler Parker - CCL #15 September 2023 Application

Professional Work and Experience

  • Meshcapade - 3D Graphics and Research Engineer (based in Tuebingen, Germany)
  • Amazon Astro - Developed the real-time human pose detection simulation for Amazon's home robotics initiative, Astro
  • Amazon Go - Developed the synthetic human body data generation pipeline used to train Amazon Go cashierless stores
  • Body Labs - Research Engineer in Computer Vision, Machine Learning, 3D Graphics and Interactive Media

Technology Meets Dance

VR Content



My professional work for some time has been computer vision and machine learning as applied to 3D human avatars, and I am semi-professionally involved in the performing arts, so I am incredibly eager to combine these pursuits whenever possible. The Choreographic Coding Lab would be an amazing environment both to share my professional experience in a way that enables artists, and to learn from the other participants and guest speakers who create at the intersection of art, dance, and code. Even if I were attending in a purely supporting capacity, I would find the experience personally rewarding and well worth the trip out to Cologne (which I feel obligated to mention, as I am a New York City based American).

I do, however, have a project idea. I am performing in a production of Ruddigore at the International Gilbert and Sullivan Festival in the UK (see? I am willing to travel!). My director pointed me to the work of Dr Dionysios Kyropoulos, specifically his research project Chironomia and his thesis, Teaching acting to singers: harnessing historical techniques to empower modern performers, as references for his classical melodrama vision for the production. I was immediately intrigued and saw an opportunity to interpret this work through the lens of my technical experience (and imagined how it could work at the CCL).

My project idea is to:

  • research seventeenth- and eighteenth-century movement and gesture acting techniques relevant to opera
  • translate this research into a dataset of digital “body pose / movement vocabulary” through the #vortanz project
  • create a language model that uses this dataset to associate choreography (visual) with lyrics (textual/audio)
  • create an interactive agent of this model that serves as a tool / art piece
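To make the middle two steps concrete, here is a minimal sketch of what one entry in the gesture vocabulary dataset, and the lyric-to-gesture association, might look like. Everything here is a hypothetical illustration: the `GestureEntry` fields are assumptions about a possible schema (not the actual #vortanz format), and the keyword match is a toy stand-in for the language model step.

```python
from dataclasses import dataclass

@dataclass
class GestureEntry:
    """One item of the hypothetical 'body pose / movement vocabulary'."""
    name: str              # a named historical gesture
    period: str            # source era for the technique
    keyframes: list[dict]  # simplified joint-angle snapshots
    affect: str            # the emotion the gesture conveys

def match_gestures(lyric: str, vocabulary: list[GestureEntry]) -> list[str]:
    """Toy keyword match standing in for the language model:
    return gestures whose affect word appears in the lyric."""
    words = lyric.lower().split()
    return [g.name for g in vocabulary if g.affect in words]

vocabulary = [
    GestureEntry("supplication", "18th c.", [{"elbow": 45}], "sorrow"),
    GestureEntry("exultation", "18th c.", [{"elbow": 170}], "joy"),
]

print(match_gestures("what joy attends this day", vocabulary))  # ['exultation']
```

In the actual project, the keyword lookup would be replaced by a learned model trained on the gesture/lyric pairs, and the keyframes would come from the digitized historical movement research.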

Even though the topic is a more literal adherence to choreography than to dance, I feel it is still well aligned with the goals of the CCL. It serves to “digitize” performance knowledge as a dataset. It uses new technology (LLMs) in an ethical manner that does not seek to replace an artistic process, but to serve as a resource and tool for it. And, since the input is a little different from the dance input typically expected, it may help refine the #vortanz process by testing its edges.

Thank you for your consideration.


  • Machine Learning / Computer Vision / Synthetic Data, with specific experience in their application to 3D animated human avatars
  • VR Development (Unity, WebXR)
  • 3D Content Generation (Blender, Quill VR)
  • Web Interactives (P5.js, WebXR)
  • Interactive Installation / Performance (openFrameworks)
  • Singing (Opera/Musical)


As mentioned in my motivation, I am incredibly eager for any of my technical experience (or, less likely, my performing arts experience) to be useful at the CCL.

If my project were green-lit, I would be eager to collaborate with anyone else in the vocal performing arts space, if possible. In general, I would want help ensuring the generated dataset is well designed so that it can be genuinely useful. For the last component, the “interactive agent of this model that serves as a tool / art piece,” I am very open to suggestions and collaboration, and would be particularly interested in an interactive VR or web application (or both, with WebXR).