Apps like Vine and Cinemagram have blown up what used to be a simple distinction between still images and video. Now an MIT app project has made the line even blurrier by using content created by different authors, effectively enabling users to kick off a video-ish experience created by no one in particular, at no particular point in time.
The result is fast-moving, GIF-like "flipbook" animations that show a single location through the eyes of many. Subsequent photos taken at the location add frames automatically, creating a collaborative record of the spot over time. Playful photography is fun, but the project's larger impact is to demonstrate an interface for collaboratively documenting spaces over time, without much deliberate action on the part of anyone.
Why do you think it took so long for animated GIFs to catch on? What challenges do you face in working with animated GIFs?
I think animated GIFs have come back due to a number of factors. Foremost, all the large players have been fighting over codecs for the longest time, and even with the HTML5 video tag implemented in browsers, the codec wars continue. There have been attempts at truces (WebM, Vorbis), but the scare of submarine patents (as well as a plethora of other political reasons) has been enough to prevent full adoption of any one codec across all browsers and platforms. Animated GIFs, on the other hand, have had native support since the dawn of Netscape, in all their dithered 256-color ugliness. To make animated GIFs work for video, a number of factors need to be in place: high bandwidth, plenty of spare CPU cycles and memory, a desire to share video, and sites that make sharing trivial.
Are still images and motion picture beginning to merge?
It's funny: for our mobile app we actually render the animations as video, as a proper lossy video codec can shave an order of magnitude off the weight of the animation. Also, mobile devices often struggle to play one-megabyte animated GIFs, in part due to a lack of native hardware decoding. I worry that one day I'll see animated GIF encoding/decoding becoming an SoC feature, right alongside MPEG-4. Many systems-on-chip have video encoding and decoding built in. This is a huge feature: just look at the number of people using the Raspberry Pi as a media player. The "worry" is a bit sarcastic: it just represents a level of demand being met by the industry for something that's essentially the wrong tool for the job. For rendering the animated GIFs, we use ImageMagick; for our video processing, we use FFmpeg.
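The interview doesn't show the exact invocations, but a minimal sketch of how a pipeline like this might drive ImageMagick and FFmpeg from Python (file names, frame rate, and the helper functions are illustrative assumptions, not FLIPR's actual code):

```python
import shutil
import subprocess

def gif_command(frames, out_path, delay_cs=10):
    """Build an ImageMagick command that joins frames into a looping GIF.

    delay_cs is the inter-frame delay in centiseconds (ImageMagick's unit);
    -loop 0 makes the GIF repeat forever.
    """
    return ["convert", "-delay", str(delay_cs), "-loop", "0", *frames, out_path]

def video_command(frame_pattern, out_path, fps=10):
    """Build an FFmpeg command that encodes the same frames as H.264 video.

    The yuv420p pixel format keeps the file playable on most mobile
    hardware decoders.
    """
    return [
        "ffmpeg", "-y",
        "-framerate", str(fps),
        "-i", frame_pattern,       # e.g. "frame%03d.jpg"
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",
        out_path,
    ]

def render(cmd):
    """Run the command, but only if the tool is actually installed."""
    if shutil.which(cmd[0]):
        subprocess.run(cmd, check=True)

frames = ["frame001.jpg", "frame002.jpg", "frame003.jpg"]
gif_cmd = gif_command(frames, "flip.gif")
mp4_cmd = video_command("frame%03d.jpg", "flip.mp4", fps=10)
```

Building the argument lists separately from running them makes the size trade-off easy to test: the same frames feed both encoders, and the MP4 output is typically far smaller than the GIF.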
Do you view this as an improvement on Vine? What was the inspiration for FLIPR?
The project was conceptualized before Vine was released, so we can't claim to be inspired by it. Others have inspired us, though. We love the playfulness and ease of creation of Instagram, and Cinemagram introduced the notion of lightly animated images, which we greatly enjoy. We wanted to expand that space into time-lapse/stop-motion animations.
What's in the stack?
This project builds on top of our F/OSS Open Locast framework, which allows developers to create location-based media platforms easily. The framework consists of two major parts: a web side and a mobile side. Both are built on an existing software stack (Django on the web side, Android on the mobile side) and are designed to feature-match each other.
How do the layers interact?
In Open Locast, both the web and mobile stacks provide a data persistence layer: on Django we use its built-in ORM, and on Android we developed our own F/OSS ORM-ish library that ties into Android's Content Provider framework. The web side exposes a RESTful HTTP+JSON API, which is used for all interaction on both the mobile app and the web front-end. The mobile side has a synchronization layer that speaks this API in a very generalized way. This allows us to build novel content-driven apps on top of Open Locast without having to constantly reimplement the data + communication + authentication parts of the stack. Additionally, a number of common features of content-driven apps come out of the box, such as free tagging, commenting, favoriting, and media uploading/processing.
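Open Locast's actual sync code isn't shown here, but a generalized synchronization layer of the kind described might map a local record to the REST API's JSON and decide which HTTP call is needed. A minimal sketch, in which the endpoint path, field names, and dirty-flag convention are all invented for illustration:

```python
import json

# Hypothetical local record, as a Content Provider row might look once loaded.
record = {
    "_id": 7,                 # local primary key
    "public_id": None,        # server-assigned id; None means "never synced"
    "title": "Corner cafe",
    "tags": ["coffee", "cambridge"],
    "dirty": True,            # modified locally since the last sync
}

def plan_sync(rec, base_url="https://example.org/api/flip/"):
    """Decide which REST call a generalized sync layer would make.

    Returns (method, url, json_body), or None if the record is clean.
    """
    if not rec["dirty"]:
        return None
    body = json.dumps({"title": rec["title"], "tags": rec["tags"]})
    if rec["public_id"] is None:
        return ("POST", base_url, body)                      # create on the server
    return ("PUT", base_url + str(rec["public_id"]) + "/", body)  # update in place

method, url, body = plan_sync(record)
```

Keeping the decision logic separate from the HTTP transport is what lets the same sync layer serve any content type the framework defines, rather than being rewritten per app.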
How do you associate an image with its location?
Location is added automatically to each flip at the time of image capture and published with the flip. As a flip could potentially span multiple locations, we take the best location discovered by the end of the first round of image capturing and use that. One of the unique features of FLIPR is that it allows a flip to be made collaborative. Once it's marked collaborative, it shows up differently on the map and other users can post their own photos to it. One could imagine someone photographing a public landmark, others adding their own takes on it, and the whole flip changing over time. As the author of the flip, you have control over it and can remove unwanted photos if they're added.
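The interview doesn't specify how the "best location" is chosen. One plausible reading, picking the fix with the smallest accuracy radius seen during the first round of capture (mirroring Android's `Location.getAccuracy()` convention, where smaller is better), can be sketched as follows; the tuple layout is an assumption:

```python
def best_location(fixes):
    """Pick the most precise location fix gathered so far.

    Each fix is (lat, lon, accuracy_m); a smaller accuracy radius means a
    tighter estimate, so we simply take the minimum.
    """
    if not fixes:
        return None
    return min(fixes, key=lambda f: f[2])

# Fixes that might accumulate while the first round of photos is taken.
fixes = [
    (42.3601, -71.0942, 65.0),   # coarse network fix
    (42.3598, -71.0945, 12.0),   # GPS fix
    (42.3599, -71.0944, 30.0),
]
loc = best_location(fixes)       # the tightest (12 m) fix wins
```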
What features do you want to add next?
We'll continue to refine the interface to ensure that going from zero to a high-quality flip you're excited to share is as quick as possible. For example, we're working on optional automatic image stabilization to minimize the effect of hand shake.
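Their stabilization approach isn't described. As an illustrative sketch only, one simple scheme tracks a single reference feature across frames and shifts each frame so that feature stays put; real pipelines track many features and estimate full transforms, but the core idea looks like this:

```python
def stabilizing_shifts(anchor_points):
    """Compute per-frame (dx, dy) shifts that pin a tracked anchor point.

    anchor_points holds the (x, y) of the same feature in each frame.
    Shifting frame i by the returned offset moves its anchor back onto
    frame 0's anchor, cancelling hand-shake drift.
    """
    x0, y0 = anchor_points[0]
    return [(x0 - x, y0 - y) for x, y in anchor_points]

# Hand shake drifts the anchor a few pixels between frames.
anchors = [(100, 80), (103, 78), (98, 84)]
shifts = stabilizing_shifts(anchors)   # [(0, 0), (-3, 2), (2, -4)]
```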
Slideshow Credits: 01 / SteveP: https://mobile.mit.edu/flipr/#!/postcard/42/; 04 / SteveP: https://mobile.mit.edu/flipr/#!/postcard/11/;