Personalized Narrative

Personalisation is perhaps the most rapidly adopted communication device narrating our everyday urban routines, often without our acknowledgement. Algorithmic personalisation is ‘a process of gathering, storing, and analyzing information’ (Venugopal, 2009) by recommendation systems. It is a response to the discrepancy between the amounts of data that different actors can process at a given point in time. Theoretically, machines can process a near-infinite amount of data, whereas humans must reduce information in order to reason about it. The public, as a form of collective intelligence in our democratic epoch, reaches consensus depending on how many people are exposed to which content at what time. Thus, algorithmic personalisation is a communication strategy that translates between these actors by recommending the ‘right’ content to the ‘right’ person at the ‘right’ time.

Every minute, 400 hours of video are uploaded to YouTube, and 1 billion hours of video are watched every day. To optimise content suggestion on YouTube, Google’s deep-learning research team produced one of the largest-scale and most sophisticated industrial recommendation systems in existence. Algorithmic personalisation of content is present in most online platforms and in impression-based advertising mechanisms such as Google Ads. Users’ search engine history, viewing data and banking statements are all taken into account when content is recommended. Netflix, for example, draws on user interactions such as title ratings and viewing data, including the time of day and the length of content watched, and it also groups users into global communities with similar tastes and preferences. Amazon’s recommendation engine uses a similar nearest-neighbour algorithm, which generates 35% of its revenue. Platform analytics reveal a range of possible content curations for the user through AI personalisation. How might cinematic content be algorithmically planned and personalised for the user in the context of Current?
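As a point of reference, the item-to-item nearest-neighbour logic attributed to Amazon above can be sketched in a few lines. The data, variable names and similarity measure below are illustrative assumptions, not any platform’s actual system.

```python
# A minimal sketch of item-to-item nearest-neighbour recommendation,
# in the spirit of the collaborative filtering described above.
# The matrix and names are illustrative, not any platform's real data.
import numpy as np

# Rows are users, columns are items (e.g. films); values are ratings or watch counts.
interactions = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
], dtype=float)

def item_similarity(matrix: np.ndarray) -> np.ndarray:
    """Cosine similarity between the item columns of a user-item matrix."""
    norms = np.linalg.norm(matrix, axis=0, keepdims=True)
    norms[norms == 0] = 1.0           # avoid division by zero for unseen items
    normalised = matrix / norms
    return normalised.T @ normalised  # items x items similarity

def recommend(user_index: int, matrix: np.ndarray, top_n: int = 2) -> list[int]:
    """Score unseen items by their similarity to the items the user already engaged with."""
    sims = item_similarity(matrix)
    user_vector = matrix[user_index]
    scores = sims @ user_vector          # weight each item by the user's history
    scores[user_vector > 0] = -np.inf    # do not re-recommend what was already watched
    return list(np.argsort(scores)[::-1][:top_n])

print(recommend(user_index=1, matrix=interactions))  # items favoured by the 'nearest' tastes
```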

The personalised framework of Current includes GAN-generated content from livestreams within a volumetric context, producing a personalised movie narrative for each individual agent. To do this, Current expands on the traditional static cinematic camera to include a range of virtual POVs: motion tracking, walking, cars, drones, animals and physically impossible CG cameras. Current anticipates that the algorithmic curation of content will be linked not only to databases of your online experience, but also to real-time navigation through gaze tracking and biometric inputs. For example, detecting how long a user’s gaze lingers in a specific environment allows the system to prioritise and optimise similar content for them in the future. Why would you need to pay for a ticket to the movie theatre if your personalised Current is curated from previous data of your own embodied experiences?
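A minimal sketch of how gaze dwell time might feed back into content ranking, as described above, could look like the following. The environment tags, update rule and class name are assumptions made for illustration, not Current’s actual implementation.

```python
# A hedged sketch of gaze-driven personalisation: dwell time on a scene
# increases the weight of that scene's environment tags, and those weights
# then re-rank candidate scenes. Tags and the update rule are assumptions.
from collections import defaultdict

class GazePersonaliser:
    def __init__(self, learning_rate: float = 0.1):
        self.learning_rate = learning_rate
        self.tag_weights: dict[str, float] = defaultdict(float)  # preference per environment tag

    def record_gaze(self, tags: list[str], dwell_seconds: float) -> None:
        """Increase the weight of every tag attached to the scene the user lingered on."""
        for tag in tags:
            self.tag_weights[tag] += self.learning_rate * dwell_seconds

    def rank(self, candidates: dict[str, list[str]]) -> list[str]:
        """Order candidate scenes by the summed weight of their tags, highest first."""
        def score(name: str) -> float:
            return sum(self.tag_weights[tag] for tag in candidates[name])
        return sorted(candidates, key=score, reverse=True)

personaliser = GazePersonaliser()
personaliser.record_gaze(tags=["waterfront", "night"], dwell_seconds=12.0)
personaliser.record_gaze(tags=["market", "day"], dwell_seconds=2.0)

scenes = {
    "harbour_drone_pass": ["waterfront", "drone", "night"],
    "street_market_walk": ["market", "walking", "day"],
}
print(personaliser.rank(scenes))  # scenes the gaze history favours come first
```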

While Netflix keeps us awake at night, total personalisation will give us enough sleep by composing and condensing our favourite narratives into a single viewing unit. In the near future, we will have tailored cinema without even realising it: every movie will have a narrative personalised to our individual taste. Sensors will not be limited to our click counts and how long we linger on each post, but will extend to biometric inputs that register how fast our hearts beat, the pressure of our fingertips against the screen and the movements of our eyes. The question remains: will personalisation allow us more social freedom by providing us with more options and views, or will it make our individual worlds more monotonous and reinforce our existing beliefs, leading to a more segregated society?
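To make the idea of biometric sensing concrete, the speculative sketch below fuses the signals mentioned above (heart rate, fingertip pressure, eye movement) into a single engagement score. The signal ranges and weights are placeholder assumptions only.

```python
# A speculative sketch of fusing biometric signals into one engagement score.
# Signal names, ranges and weights are illustrative assumptions, not a real product's design.
def engagement_score(heart_rate_bpm: float,
                     touch_pressure: float,       # normalised 0..1 from a pressure-sensitive screen
                     gaze_fixation_ratio: float   # share of time the eyes rest on the content, 0..1
                     ) -> float:
    """Blend normalised biometric signals into a single 0..1 engagement estimate."""
    resting, maximum = 60.0, 120.0
    arousal = min(max((heart_rate_bpm - resting) / (maximum - resting), 0.0), 1.0)
    # Weights are arbitrary placeholders; a real system would learn them per user.
    return 0.4 * arousal + 0.2 * touch_pressure + 0.4 * gaze_fixation_ratio

print(engagement_score(heart_rate_bpm=95, touch_pressure=0.6, gaze_fixation_ratio=0.8))
```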