This was the very first storyboard we made for the project. These initial sketches served as the basis for our work right up until the end.
All credits to Rosalie for the slick sketches!
The response we wanted from the film was largely implicit rather than an explicit "interaction". It was meant to behave like a mirror: viewers would not have to perform a specific action or gesture to make the screen respond. We chose this to keep the film experience organic, rather than turning it into an interactive app or game.
Ultrasonic pings would serve our purpose gracefully, so we decided to use three of them, placed along the bottom of the screen, one below each of the small videos. The responsiveness we settled on was subtle: as a viewer approaches one area of the screen, the timelapse there slows down until it comes to a halt. That area is then replaced by a randomly selected shot of a particular moment on The Highline, along with a random audio clip drawn from the large pool of one-line audio interviews we recorded.
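The slow-to-a-halt behavior can be sketched as a simple mapping from measured distance to playback speed. This is a hypothetical sketch in plain C++; the thresholds `NEAR_CM` and `FAR_CM` and the function name are our own illustrative choices, not the project's actual openFrameworks code:

```cpp
#include <algorithm>

// Hypothetical thresholds: closer than NEAR_CM halts the timelapse,
// farther than FAR_CM plays it at full speed.
const float NEAR_CM = 50.0f;
const float FAR_CM  = 250.0f;

// Map an ultrasonic distance reading (cm) to a playback speed in [0, 1].
// 0 = halted (viewer right in front), 1 = normal timelapse speed.
float speedForDistance(float cm) {
    float t = (cm - NEAR_CM) / (FAR_CM - NEAR_CM);
    return std::min(1.0f, std::max(0.0f, t));
}
```

The linear ramp keeps the slowdown gradual as the viewer walks up, rather than snapping between playing and paused.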
The setup: three ultrasonic pings connected to an Arduino; an openFrameworks app reading the sensor values from the Arduino and controlling video and audio playback for the 5 HD videos and over 45 audio clips; and a large 40+" 1080p display for the output.
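One plausible way for the openFrameworks app to receive the three readings is a comma-separated line per update over serial. The protocol and the `parseReadings` helper below are illustrative assumptions, not the project's actual code:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Parse a serial line like "123,87,210" (one distance in cm per sensor)
// into a vector of ints. Malformed fields are skipped.
std::vector<int> parseReadings(const std::string& line) {
    std::vector<int> out;
    std::stringstream ss(line);
    std::string field;
    while (std::getline(ss, field, ',')) {
        try {
            out.push_back(std::stoi(field));
        } catch (...) {
            // Ignore a garbled field rather than crash mid-playback.
        }
    }
    return out;
}
```

A line-based text protocol like this is easy to debug with a serial monitor, at the cost of a few bytes per frame compared to a binary format.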
The ultrasonic pings interfered with each other because of their physical proximity, and filtering this out added extra computational cost, since we had decided to handle it on the computer. We concluded that a bigger screen, with switches instead of proximity sensors, would be a better option: computational power was a scarce resource, given that 4-5 high-definition videos had to be switched and modified in real time based on the sensor values, without dropping frames.
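For what it's worth, a standard mitigation for this kind of crosstalk is to trigger the pings one at a time in round-robin, so no sensor can hear a neighbor's echo. A minimal sketch of the scheduling idea (the struct and naming are ours, not the project's code):

```cpp
// Round-robin scheduler: only one ultrasonic sensor is triggered per
// cycle, so a sensor never listens while a neighbor's ping is in flight.
// In a real loop you would also wait for the echo (or a timeout)
// before advancing to the next sensor.
struct PingScheduler {
    int sensorCount;
    int current = 0;

    // Returns the index of the sensor to trigger this cycle,
    // then advances to the next one.
    int next() {
        int idx = current;
        current = (current + 1) % sensorCount;
        return idx;
    }
};
```

The trade-off is update rate: with three sensors polled sequentially, each one is sampled at a third of the loop frequency, which is usually still plenty for tracking a walking viewer.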