Volumetric Cinema

The advent of new sensing technologies has shifted the contemporary experience of visual content to include three dimensions. With remote satellite sensing, we have transitioned from viewing the Earth as a planar map to viewing it as a three-dimensional globe. LiDAR depth sensors and motion trackers used in the cinema, virtual reality, and gaming industries have transformed navigation itself into an editing technique: instead of pre-edited cut-to-cut montage, we now experience navigation-based world-to-world transitions. Within a VR or gaming environment, the space is constructed as the user navigates through it, a recursive interaction in which users shape the content they experience in real time through their attention and navigation.

The perception of information in a dynamic space embeds data without deconstructing its complexity, pointing toward a new way of seeing: from planar to global, from flat to volumetric. Volumetric techniques redefine what it means to narrate and curate, along with other cinematic constructs: the cut, for example, shifts from frame-to-frame to world-to-world as the viewer navigates through the storyline and its spaces, or swipes their way through different livestreams.

Volumetric cinema expands the potential of non-linear narratives by collapsing time in a spatial manner. When time is visualized as a volumetric object, we are no longer limited to the slider of a video player; we can navigate time and the narrative of moving images by the brightness contrast, color change, volume, and opacity of an event.
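One way to make this concrete: a minimal sketch, assuming OpenCV and NumPy, that stacks video frames into a space-time volume and maps frame-to-frame change to voxel opacity, so that eventful moments render as dense, opaque regions one can navigate rather than scrub. The clip filename and the opacity mapping are illustrative assumptions, not the project's implementation.

```python
import numpy as np
import cv2  # OpenCV, for decoding video frames

def video_to_volume(path, max_frames=240, size=(128, 128)):
    """Stack grayscale frames into a (T, H, W) space-time volume."""
    cap = cv2.VideoCapture(path)
    frames = []
    while len(frames) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        frames.append(cv2.resize(gray, size).astype(np.float32) / 255.0)
    cap.release()
    return np.stack(frames)  # time becomes a navigable spatial axis

def opacity_from_change(volume):
    """Map temporal change (frame-to-frame difference) to voxel opacity,
    so moments where more happens render as denser regions."""
    diff = np.abs(np.diff(volume, axis=0))
    return diff / (diff.max() + 1e-8)  # normalize: calm stretches go transparent

volume = video_to_volume("livestream_clip.mp4")  # hypothetical clip
alpha = opacity_from_change(volume)
# `alpha` can now drive a volume renderer: moving along axis 0 is
# navigating time by brightness change rather than by a slider.
```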

The idea of the ‘volumetric’ foregrounds the contestation between different perspectives in forming a coherent ‘collaborative vision’; conversely, that contestation offers the potential to authenticate events and truth. In ‘Current’, this is where livestream and volumetric cinema meet: the public will be able to volumetrically navigate any event in real time. Along these lines, ‘Current’ experimented with extracting 3D information from different livestream sources using photogrammetry frameworks, including social media, autonomous car vision, NASA, drones, animal cams, and surveillance cameras. These sources were selected specifically to examine and demonstrate the idiosyncrasies generated by different visual cues, revealing to viewers how data is collected, structured, and projected volumetrically – a volumetric data analytic.
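The text does not name the photogrammetry frameworks used; as one plausible pipeline, the sketch below pulls a burst of frames from a livestream with ffmpeg and runs them through COLMAP's standard sparse-reconstruction stages via its command-line interface. The stream URL and directory layout are placeholders.

```python
import subprocess
from pathlib import Path

STREAM_URL = "https://example.com/live.m3u8"  # placeholder livestream source
workdir = Path("current_reconstruction")
frames = workdir / "frames"
frames.mkdir(parents=True, exist_ok=True)

# 1. Grab 30 seconds of footage from the livestream at 2 frames/second
#    (requires ffmpeg to be installed).
subprocess.run([
    "ffmpeg", "-i", STREAM_URL, "-t", "30",
    "-vf", "fps=2", str(frames / "frame_%04d.png"),
], check=True)

# 2. Run COLMAP's standard sparse-reconstruction stages on those frames:
#    feature extraction, pairwise matching, then incremental mapping.
db = workdir / "database.db"
sparse = workdir / "sparse"
sparse.mkdir(exist_ok=True)
for cmd in (
    ["colmap", "feature_extractor", "--database_path", str(db),
     "--image_path", str(frames)],
    ["colmap", "exhaustive_matcher", "--database_path", str(db)],
    ["colmap", "mapper", "--database_path", str(db),
     "--image_path", str(frames), "--output_path", str(sparse)],
):
    subprocess.run(cmd, check=True)
# The resulting sparse model is a navigable point cloud of the event.
```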

The transition from two to three dimensions has enriched the image with information. Most images and videos produced today are embedded with spatial metadata. Photogrammetry reconstructions and LiDAR scans of environments can be localised to specific GPS coordinates. When coupled with multiple real-time cameras and sensor inputs of the same location, the informationally rich space of volumetric construction can provide decentralized perspectives on events. The vision systems of self-driving cars already use real-time collaborative vision to cross-check what they perceive with each other. Within the framework of volumetric attention-based navigation, ‘Current’ speculates on the potential of this type of collaborative vision to authenticate truth for users.
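As a small illustration of the embedded spatial metadata mentioned above, the sketch below reads GPS coordinates from a photo's EXIF tags using Pillow; coordinates like these are what allow a reconstruction to be anchored to a specific location. The file path is hypothetical.

```python
from PIL import Image
from PIL.ExifTags import GPSTAGS

def dms_to_degrees(dms, ref):
    """Convert EXIF degrees/minutes/seconds to signed decimal degrees."""
    degrees = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
    return -degrees if ref in ("S", "W") else degrees

def read_gps(path):
    """Return (lat, lon) from a photo's EXIF GPS tags, or None if absent."""
    exif = Image.open(path).getexif()
    gps_ifd = exif.get_ifd(0x8825)  # 0x8825 is the standard GPSInfo IFD
    if not gps_ifd:
        return None
    gps = {GPSTAGS.get(k, k): v for k, v in gps_ifd.items()}
    lat = dms_to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"])
    lon = dms_to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"])
    return lat, lon

print(read_gps("frame_0001.jpg"))  # hypothetical geotagged frame
```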