ACM Transactions on Graphics (Proc. of ACM SIGGRAPH), Volume 29, Number 3, 2010

Dynamic Video Narratives

Carlos D. Correa and Kwan-Liu Ma
University of California, Davis
This paper presents a system for generating dynamic narratives from videos. These narratives are compact, coherent, and interactive, inspired by principles of sequential art, and they depict the motion of one or several actors over time. Creating compact narratives is challenging: the video frames must be combined in a way that reuses redundant backgrounds while still depicting the stages of a motion. In addition, previous approaches focus on generating static summaries and can therefore afford expensive image composition techniques. A dynamic narrative, on the other hand, must be played and skimmed in real time, which imposes cost constraints on the video processing. In this paper, we define a novel process that composes foreground and background regions of video frames into a single interactive image using a series of spatio-temporal masks. These masks are created to improve the output of automatic video processing techniques such as image stitching and foreground segmentation. Unlike hand-drawn narratives, which are often limited to static representations, the proposed system allows users to explore the narrative dynamically and produce different representations of motion. We have built an authoring system that incorporates these methods and demonstrated successful results on a number of video clips. The authoring system can be used to create interactive posters of video clips, browse video in a compact manner, or highlight a motion sequence in a movie.
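The core idea of composing foreground regions onto a shared background with per-frame masks can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: it assumes the background panorama is already stitched, the binary foreground masks are already registered to it, and the hypothetical function `composite_narrative` simply overlays the selected time instants in order.

```python
import numpy as np

def composite_narrative(background, frames, masks, times):
    """Composite foreground regions of selected frames onto a shared background.

    background: (H, W, 3) stitched background image
    frames:     list of (H, W, 3) frames registered to the background
    masks:      list of (H, W) binary foreground masks, one per frame
    times:      frame indices to show, in temporal order
    """
    out = background.astype(np.float32).copy()
    for t in times:
        # Broadcast the mask over the color channels; later instants
        # overwrite earlier ones where foregrounds overlap.
        m = masks[t][..., None].astype(np.float32)
        out = m * frames[t].astype(np.float32) + (1.0 - m) * out
    return out.astype(np.uint8)
```

Because the selection of `times` is just an argument, the same composite can be recomputed interactively as the user skims the narrative, which is the kind of cheap per-update cost a dynamic (rather than static) summary requires.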
@article{correa2010dynamic,
  author  = "Carlos D. Correa and Kwan-Liu Ma",
  title   = "Dynamic Video Narratives",
  journal = "ACM Transactions on Graphics (Proc. SIGGRAPH)",
  year    = "2010",
  volume  = "29",
  number  = "3"
}
Big Buck Bunny (2008), The Blender Foundation. Creative Commons Attribution 3.0