Uncle Al

The video performance “Uncle Al” is named after its performer. The piece explores the intersection of movement, facial expression, and sound in a performance by a macabre clown. To generate the audio, I used machine learning to track the performer’s movements along with more than 100 facial landmarks. From these landmarks, salient features are extracted in real time, combined into derived parameters based on expressive relationships between them, and then used to trigger, generate, and modulate sound.
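As a minimal sketch of the tracking stage, one plausible stack is MediaPipe Face Mesh for the landmarks and OpenCV for the video feed; the source does not name the tools used, so the library, filename, and the specific derived features below are illustrative assumptions, though the eyelid and inner-lip indices shown are standard Face Mesh landmark indices:

```python
import cv2
import mediapipe as mp

# Assumed stack: MediaPipe Face Mesh for landmarks, OpenCV for video capture.
face_mesh = mp.solutions.face_mesh.FaceMesh(
    max_num_faces=1, refine_landmarks=True, min_detection_confidence=0.5
)

cap = cv2.VideoCapture("uncle_al.mp4")  # hypothetical filename; a webcam index also works
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not results.multi_face_landmarks:
        continue
    lm = results.multi_face_landmarks[0].landmark  # hundreds of normalized (x, y, z) points

    # Derived features: vertical eyelid gap and inner-lip gap, using standard
    # Face Mesh indices (159/145 = left upper/lower eyelid, 13/14 = inner lips).
    eye_gap = abs(lm[159].y - lm[145].y)
    mouth_gap = abs(lm[13].y - lm[14].y)
    # These per-frame values feed the sound-mapping stage sketched further below.
```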

The video demonstrates a direct correspondence between the character’s actions and the resulting sound. When the character closes their eyes, the soundscape fills with sounds of escapism: children’s laughter and playful chimes, among others. When the character’s mouth opens, a distinct, ominous composition emerges. The piece highlights the interplay between facial expression, movement, and audio, and suggests how machine learning can extend the expressive range of performance art.
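One way such mappings could be wired, sketched here under assumptions the source does not specify, is to send the derived features as OSC messages to a synth environment (SuperCollider, Max/MSP, and similar tools all listen for OSC). The port, addresses, and thresholds below are hypothetical placeholders, not the values used in the piece:

```python
from pythonosc.udp_client import SimpleUDPClient

# Hypothetical OSC target: any patch listening on this port could host the
# "escapism" and "ominous" sound layers described above.
client = SimpleUDPClient("127.0.0.1", 9000)

EYES_CLOSED = 0.012  # eyelid gap (normalized) below which the eyes count as closed
MOUTH_OPEN = 0.030   # lip gap (normalized) above which the mouth counts as open

def map_face_to_sound(eye_gap: float, mouth_gap: float) -> None:
    # Closed eyes fade in the escapism layer (laughter, playful chimes).
    client.send_message("/layer/escapism/gain", 1.0 if eye_gap < EYES_CLOSED else 0.0)
    # An opening mouth drives the ominous layer, scaled by how wide it opens.
    gain = max(0.0, min((mouth_gap - MOUTH_OPEN) / MOUTH_OPEN, 1.0))
    client.send_message("/layer/ominous/gain", gain)
```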
