Once the video had been cut together into its final draft and the assets for the holographic interface had been completed, compositing work on the video could begin. As with the assets, the editing of the video was split into two halves to increase efficiency, with this post focusing on the assets developed in the last post.
As the audio is already playing when the video starts, it made sense to fill the empty space in the music interface with an audio level display, which appears to drive the background music and portrays the functionality of the media player through audio levels alone. The audio timeline was also synced to the length of the song, allowing the progress bar to move in step with the track, a small detail which helps increase immersion.
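The sync between the song and the progress bar comes down to mapping playback time onto the bar's travel. A minimal sketch of that mapping is below; the function and parameter names are illustrative assumptions, not values from the project files.

```typescript
// Map the song's playback time to a progress-bar position,
// the same idea used to keep the audio timeline in sync
// with the length of the track.
function barPosition(
  currentTime: number, // seconds into the song
  songLength: number,  // total length of the song in seconds
  barStart: number,    // x position of the bar's left edge (px)
  barWidth: number     // full travel of the bar (px)
): number {
  // Clamp progress to [0, 1] so the bar never overshoots its track.
  const progress = Math.min(Math.max(currentTime / songLength, 0), 1);
  return barStart + progress * barWidth;
}
```

Driving the position from time like this, rather than keyframing it by hand, means the bar stays accurate even if the song is swapped for a different edit.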
The notification was given a simple bounce-in and pulse effect to suggest something demanding attention, and the cooking video was nested inside another composition to assist with positioning. The mapping tools were animated to follow the line to its destination, using a double-layered line to give the impression that the device is rendering the best route to the desired location.
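A bounce-in followed by a pulse can be expressed as a single scale-over-time function, which is roughly how such an effect behaves once keyframed. The sketch below uses a standard overshooting ease into a gentle sine pulse; all constants (bounce duration, pulse depth and rate) are illustrative assumptions rather than the project's actual values.

```typescript
// Scale of the notification (1 = full size) as a function of
// time t in seconds: an overshooting bounce-in for the first
// half second, then a steady +/-5% pulse to draw attention.
function notificationScale(t: number): number {
  const bounceEnd = 0.5; // assumed seconds spent bouncing in
  if (t < bounceEnd) {
    // "Ease out back" curve: starts at 0, overshoots past 1,
    // then settles to exactly 1 at the end of the bounce.
    const s = 1.70158; // conventional overshoot constant
    const x = t / bounceEnd - 1;
    return 1 + (s + 1) * x * x * x + s * x * x;
  }
  // Gentle pulse: one cycle per second around full size.
  return 1 + 0.05 * Math.sin(2 * Math.PI * (t - bounceEnd));
}
```

The overshoot in the first phase is what reads as a "bounce"; the sine phase keeps the element moving just enough to feel alive without becoming distracting.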
However, the tracking used in all of these clips is quite basic, as is the masking at certain points. If done again, more attention would be paid to these parts of the video, allowing for smoother tracking and more detailed masking effects on the UI.