Tyler Parker - CCL #15 September 2023 Application
Reference Links
Professional Work and Experience
- Meshcapade - 3D Graphics and Research Engineer (based in Tübingen, Germany)
- Amazon Astro - Developed the real-time human pose detection simulation for the Amazon Astro robotics initiative
- Amazon Go - Developed the synthetic human-body data generation pipeline used to train the computer vision systems behind Amazon Go cashierless stores
- Body Labs - Research Engineer in computer vision, machine learning, 3D graphics, and interactive media
Technology Meets Dance
- Art-A-Hack DANCEDEMIC 2020 - “WALK WITH ME” - Developed visuals and a framework that responded to the dancers’ EmotiBit biometric data and art direction, for the Battery Dance Festival (held virtually due to COVID)
- Kidoairaku - KiDoAiRaku roughly translates to “the gamut of emotion”, and my piece was Do, or anger. A Kinect was used to determine a person’s silhouette (tech-breakdown here), which was then used to “paint through time” across a video of flame (non-flame example here and here); a minimal sketch of the idea follows below
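For the curious, below is a minimal sketch of the “paint through time” idea in Python/OpenCV. The original piece used a live Kinect feed and a different pipeline; the file names, threshold, and mask source here are placeholder assumptions.

```python
# A minimal sketch of the "paint through time" effect, assuming OpenCV and a
# precomputed binary silhouette video (the original used a live Kinect feed).
# "flame.mp4" and "silhouette.mp4" are hypothetical, same-sized sources.
import cv2
import numpy as np

flame = cv2.VideoCapture("flame.mp4")
masks = cv2.VideoCapture("silhouette.mp4")

canvas = None
while True:
    ok_f, flame_frame = flame.read()
    ok_m, mask_frame = masks.read()
    if not (ok_f and ok_m):
        break
    if canvas is None:
        canvas = np.zeros_like(flame_frame)
    # Binarize the silhouette and use it as a stencil: wherever the body is
    # *now*, reveal the flame video's *current* frame. Regions painted earlier
    # keep their older frames, so the moving body "paints through time".
    gray = cv2.cvtColor(mask_frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
    canvas[mask > 0] = flame_frame[mask > 0]
    cv2.imshow("paint through time", canvas)
    if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
        break

flame.release()
masks.release()
cv2.destroyAllWindows()
```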
VR Content
- Bought/Broken - Unity/WebXR developer for this VR piece, which was a recipient of the IGDA Foundation 2021 Diverse Game Developers Fund (the creative director and sound engineer being the qualifying members)
- The Pirates of Penzance Airship - A VR-painted pirate airship used in an animated sequence during the overture of Blue Hill Troupe’s “The Pirates of Penzance” (also animated in VR, using Quill VR). I also performed as ‘The Pirate King’ and 3D printed props/set pieces.
- The Great Comet Stage - Sketchfab - My fan VR stage rendition of the Broadway musical “Natasha, Pierre & the Great Comet of 1812”. Painted in Quill VR, refined in Blender/Unity, and deployed to VRChat for live performance (the Sketchfab link is just a preview)
- A Little Night Music Stage - Sketchfab - My fan VR stage rendition of the Broadway musical “A Little Night Music”, by Stephen Sondheim. Painted in Quill VR, refined in Blender/Unity, and deployed to VRChat for live performance.
Socials
Motivation
My professional work for some time has been computer vision and machine learning applied to 3D human avatars, and I am semi-professionally involved in the performing arts, so I am incredibly eager to combine these pursuits whenever possible. The Choreographic Coding Lab would be an amazing environment to share my professional experience in a way that enables artists, and to learn from the other participants and guest speakers who create at the intersection of art, dance, and code. Even if I were attending in a purely supporting capacity, I would find the experience personally rewarding and well worth the trip out to Cologne (which I feel obligated to mention, as I am a New York City-based American).
I do, however, have a project idea. I am performing in a production of Ruddigore at the International Gilbert and Sullivan Festival in the UK (see? I am willing to travel!). My director pointed me to the work of Dr Dionysios Kyropoulos, namely his research project Chironomia and his thesis “Teaching acting to singers: harnessing historical techniques to empower modern performers”, as reference for his classical melodrama vision for the production. I was immediately intrigued and saw an opportunity to interpret this work through my technical experience (and a way it could work at the CCL).
My project idea is to:
- research seventeenth- and eighteenth-century movement and gesture acting techniques relevant to opera
- translate this research into a dataset of digital “body pose / movement vocabulary” through the #vortanz project
- create a language model that uses this dataset to associate choreography (visual) with lyrics (textual/audio); one possible shape is sketched after this list
- create an interactive agent built on this model that serves as a tool / art piece
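To make the third step concrete, here is one possible shape for the choreography-lyrics association: a small CLIP-style contrastive model in Python/PyTorch that embeds pose sequences and lyric lines into a shared space, so a gesture can be looked up from a lyric and vice versa. The encoders, dimensions, and data format are placeholder assumptions rather than a committed design; a pretrained language model could replace the toy text encoder.

```python
# Sketch: contrastive alignment of pose sequences and lyric lines.
# All sizes (pose_dim=72, i.e. 24 joints x 3; vocab; embed) are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PoseEncoder(nn.Module):
    """Encodes a (batch, frames, pose_dim) pose sequence into one unit vector."""
    def __init__(self, pose_dim=72, hidden=256, embed=128):
        super().__init__()
        self.gru = nn.GRU(pose_dim, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, embed)

    def forward(self, poses):
        _, h = self.gru(poses)                      # h: (1, batch, hidden)
        return F.normalize(self.proj(h[-1]), dim=-1)

class TextEncoder(nn.Module):
    """Bag-of-tokens lyric encoder; a pretrained LM would slot in here."""
    def __init__(self, vocab=10000, embed=128):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab, embed)

    def forward(self, token_ids, offsets):          # flat ids + per-line offsets
        return F.normalize(self.emb(token_ids, offsets), dim=-1)

def contrastive_loss(pose_vecs, text_vecs, temperature=0.07):
    """Symmetric InfoNCE: matching (pose, lyric) pairs score highest."""
    logits = pose_vecs @ text_vecs.T / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2
```

After training on the #vortanz-derived dataset, nearest-neighbor search in the shared embedding space could let the interactive agent suggest period-appropriate gestures for a given lyric line.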
Even though the topic is a more literal adherence to choreography than to dance, I feel it is still well aligned with the goals of the CCL. It serves to “digitize” performance knowledge as a dataset. It uses new technology (LLMs) in an ethical manner that does not seek to replace an artistic process, but to serve as a resource and tool for it. And, since it differs somewhat from the expected dance input, it may help refine the #vortanz process by testing its edges.
Thank you for your consideration.
Qualifications
- Machine Learning / Computer Vision / Synthetic Data, with specific experience in their application to 3D animated human avatars
- VR Development (Unity, WebXR)
- 3D Content Generation (Blender, Quill VR)
- Web Interactives (P5.js, WebXR)
- Interactive Installation / Performance (openFrameworks)
- Singing (Opera/Musical)
Collaboration
As mentioned in my motivation, I am incredibly eager for any of my technical experience (or, less likely, my performing arts experience) to be useful at the CCL.
If my project were green-lit, I would be eager to collaborate with anyone else in the vocal performing arts space, if possible. In general, I would want help making sure the generated dataset is well designed so that it can be useful. For the last component, the “interactive agent of this model that serves as a tool / art piece”, I am very open to suggestions and collaboration, and would be particularly interested in an interactive VR or web application (or both, with WebXR).