I wrote an openFrameworks application that digested the point cloud data into a compressed format that could be played back in Unity. I also continued the work of programmer Sylvie Sherman to display the full point clouds in Unity.
This video is a sample of raw “point clouds” and video documentation from a work in progress currently called “BJJGNC_datasets”. The video was shot in the “Panoptic Dome” at Carnegie Mellon University in Pittsburgh, Pennsylvania, USA. The dome uses 480 cameras and infrared sensors to create point clouds from the captured movement of performers. Point clouds are used in the field of computer vision to represent 3D objects and spaces as data. They are messy, raw data sets that change every frame and are not anchored to any fixed point in space. Algorithms are needed to resolve these points into a 3D digital object that can be controlled and defined.
BJJGNC_datasets places gender-non-conforming bodies practicing Brazilian Jiu Jitsu (BJJ) into the dome to challenge the algorithms that interpret what is described as “normal human movement”. The algorithms are not built for gender-non-conforming bodies or the movements of BJJ because they are written through the lens of societal bias. The challenge of this work in progress is to visualize what cannot be defined by binaries and algorithms.
You can read more from the artist Nica Ross here.