Networked_Performance

Center for Future Storytelling

Storytelling is fundamental to being human: itʼs how we share our experiences, learn from our past, and imagine our future.

With the establishment of its Center for Future Storytelling, the Media Lab, together with Plymouth Rock Studios, is rethinking what "storytelling" will be in the 21st century. The Center will take a dynamic new approach to storytelling, developing new creative methods, technologies, and learning programs that recognize and respond to the changing communications landscape.

The Center builds on the Media Labʼs more than 20 years of experience in developing society-changing technologies for human expression and interactivity, and will now take this work to the next level. It will examine ways to transform storytelling into social experiences, creating expressive tools for audiences and enabling people from all walks of life to embellish stories and integrate them into their lives, making tomorrowʼs stories more interactive, creative, democratized, and improvisational. It will seek to bridge the real and the virtual, creating tools for both adults and children that allow stories to incorporate synthetic characters and actors such as robots. It will also pioneer new imaging technologies, from new systems for movement capture to "morphable" movie studios that allow one physical space to represent a variety of settings.

The research program, which begins immediately, will be centered at the Media Laboratory in Cambridge, moving into the Labʼs new Fumihiko Maki-designed building when it opens in late 2009. Researchers at the Lab will work closely with the artisan community at Plymouth Rock Studios, and when the Plymouth Rock campus is completed in 2010, the Center will operate at both MIT and Plymouth Rock, with the studio becoming a site for workshops, teaching, inventing, testing, and displaying new ideas in sound and motion storytelling.

Three Media Lab principal investigators will serve as the Centerʼs co-directors: V. Michael Bove, Jr., head of the Labʼs Object-Based Media group and CELab (consumer electronics) consortium; LG Associate Professor Cynthia Breazeal, head of the Labʼs Personal Robots research group; and Associate Professor Ramesh Raskar, head of the Labʼs Camera Culture group.

Research Initiatives

Research will range from low-cost holographic TV, to new imaging technologies for movie studios, to emotionally engaging synthetic actors. Initially, the primary research areas are:

Object-Based Media
V. Michael Bove, Jr.

The Centerʼs research in object-based media will focus on creating connections between people and technologies by developing systems that help objects gain an understanding of the content they carry. This understanding can be used to enhance our ability to describe the world around us and the things we do every day. The group will also explore ways to capture information about us, so that our stories can be personalized to reflect who we are and what we care about. Its projects include:

Viper 2.0: Traditional video is one-size-fits-all: the editing is fixed, and viewers in different contexts all see the same thing. The earlier Viper system was a tool for creating video programs that re-edit themselves, allowing producers to create responsive programs that change during viewing in response to preference or profile information, presentation equipment or conditions, or real-time sensor feedback. Viper 2.0 will build on this for everyday storytelling, so that ordinary people, not just those experienced with professional video editing tools and scripting languages, can create reactive stories.
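
To make the idea concrete, here is a minimal sketch of the kind of rule-driven clip selection a self-re-editing program implies. All class and field names are invented for illustration; this is not Viperʼs actual API:

    from dataclasses import dataclass, field

    @dataclass
    class Clip:
        file: str                                  # path to the media asset
        tags: set = field(default_factory=set)     # viewing contexts the clip suits

    @dataclass
    class Segment:
        alternates: list    # candidate Clips for this slot in the edit
        default: Clip       # fallback when nothing matches

    def resolve_edit(segments, context):
        """Pick one clip per segment that best matches the viewing context
        (viewer profile, display device, live sensor readings, ...)."""
        timeline = []
        for seg in segments:
            best = max(seg.alternates,
                       key=lambda c: len(c.tags & context),
                       default=seg.default)
            timeline.append(best if best.tags & context else seg.default)
        return timeline

    # A phone viewer who prefers close-ups gets a different cut than a
    # living-room TV viewer:
    # resolve_edit(story_segments, context={"small_screen", "close_up"})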

Holographic TV: Holographic video has a long history at the Media Lab, but current work aims to change it from an expensive technology used by very few into a widespread consumer product. This involves building specialized, inexpensive electro-optic chips that can serve as the basis for a holographic TV costing about the same as an ordinary TV. It also involves software that generates moving holograms in real time on the graphics processors already found in PCs and video game consoles.
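
As a rough illustration of what such software computes, the sketch below superposes spherical waves from a handful of scene points and interferes them with a reference beam, the classic point-source method for generating hologram fringe patterns. All parameters are illustrative only; the Labʼs actual chips and GPU pipeline are far more involved:

    import numpy as np

    wavelength = 633e-9            # red laser, metres (illustrative)
    k = 2 * np.pi / wavelength     # wavenumber
    pitch = 10e-6                  # hologram pixel pitch, metres
    W = H = 512                    # hologram resolution

    # hologram-plane sample coordinates
    x = (np.arange(W) - W / 2) * pitch
    y = (np.arange(H) - H / 2) * pitch
    X, Y = np.meshgrid(x, y)

    def fringe_pattern(points):
        """Superpose spherical waves from (x, y, z, amplitude) scene
        points and interfere them with an on-axis plane reference wave."""
        field = np.zeros((H, W), dtype=complex)
        for px, py, pz, amp in points:
            r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
            field += amp * np.exp(1j * k * r) / r
        reference = 1.0                          # plane reference wave
        return np.abs(field + reference) ** 2    # recordable intensity

    pattern = fringe_pattern([(0, 0, 0.1, 1.0), (1e-3, 0, 0.12, 0.5)])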

Everything Tells a Story: Imagine if our things, from running shoes to bicycles to plush toys to luggage, could keep a "diary" of everything that had happened to them, collecting, sorting, and interpreting our regular activities for re-use in a multitude of ways, such as personal story creation. For this to work, objects need to incorporate enough sensing to gather rich information about environment and activity, enough storage to remember everything sensed, and enough intelligence to derive useful meaning from it all.
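
A toy sketch of that sense-store-interpret loop, with invented names and thresholds, might look like this:

    import time
    from collections import deque

    class ObjectDiary:
        """An object that logs raw sensor readings, then summarizes
        them into story-ready events (names are hypothetical)."""

        def __init__(self, name):
            self.name = name
            self.log = deque()         # (timestamp, accel_magnitude)

        def sense(self, accel_magnitude):
            self.log.append((time.time(), accel_magnitude))

        def summarize(self):
            """Derive simple meaning from the raw motion data."""
            events = []
            for t, a in self.log:
                if a > 2.0:            # hypothetical threshold, in g
                    events.append((t, f"{self.name} was moved vigorously"))
            return events

    shoe = ObjectDiary("running shoe")
    shoe.sense(0.1)
    shoe.sense(2.7)
    print(shoe.summarize())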

Personal Robots
Cynthia Breazeal

The Centerʼs work will also focus on developing new autonomous and semi-autonomous interactive character technologies for more emotionally engaging, more nuanced performances by synthetic actors. Applications include movies, live theater, gaming, and other forms of interactive storytelling, as well as learning and creativity toolkits for the general public. Projects include:

Next-Generation Synthetic Performers: This work builds on technologies developed during the Labʼs collaboration with Stan Winston Studios to create next-generation robotic performer technologies, from better tools and interfaces to levels of autonomy built into the synthetic performer (such as eye contact, facial and gestural movement, and lip sync), enabling more flexible, interactive, engaging, and directable robot-mediated performance on the set.

Character in a Bottle: This research involves developing technologies for capturing and computationally modeling the "essence" of a character from the original performance produced by artists, animators, and performers. These computational models could then be used to automatically generate new content for that character that is consistent with its original style (quality of movement, mannerisms, and other defining characteristics).
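
As a deliberately simple illustration of the capture-then-regenerate loop (the real research models far richer structure than these summary statistics), consider:

    import numpy as np

    def capture_style(performance):
        """performance: array of shape (frames, joints) of joint angles.
        Returns the mean pose and per-joint velocity variability."""
        velocity = np.diff(performance, axis=0)
        return {
            "mean_pose": performance.mean(axis=0),
            "vel_std": velocity.std(axis=0),   # "jitteriness" per joint
        }

    def generate_motion(style, frames, rng=np.random.default_rng()):
        """Random walk around the mean pose, scaled by the captured
        per-joint variability, so the output moves 'in the style of'
        the original performance."""
        joints = len(style["vel_std"])
        steps = rng.normal(0.0, style["vel_std"], size=(frames, joints))
        return style["mean_pose"] + np.cumsum(steps, axis=0)

    recorded = np.cumsum(np.random.default_rng(0).normal(size=(200, 12)), axis=0)
    new_clip = generate_motion(capture_style(recorded), frames=100)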

Storyteller: Tools and technologies developed by the Personal Robots research group will be packaged into toolkits that empower children to craft their own compelling stories and characters, fostering creativity and learning. Researchers will also leverage synthetic performer technologies to create compelling robotic or virtual characters that serve as learning companions for children in areas such as second-language acquisition and early-childhood readiness skills, or potentially as therapeutic aids to help children with autism improve their communication skills. Nexi, Huggable, and Tofu are early projects in this direction.

Nexi: This new project pushes the limits of existing social robotics technology to support next-generation synthetic performers research. It combines mobility (a compact mobile base capable of human-speed movement), dexterity (hands and wrists designed for both manipulating objects and gesturing expressively), and sociality (a highly expressive face capable of a wide range of human-style facial expressions).

Tofu: This expressive robot will foster new types of interaction with children through the use of cartoon-animation-style movement. Potential extensions of this technology include a squash-and-stretch robot character kit (imagine crossing LEGO Mindstorms™ with the Muppets) whereby children can design, program, and remotely puppeteer their own charactersʼ performances to create and share stories with friends and family.

Huggable: The Huggable™ is a new type of robotic companion being developed at the MIT Media Lab for healthcare, education, and social communication applications. It features a full-body sensitive skin, quiet back-drivable actuators, video cameras in the eyes, microphones in the ears, an inertial measurement unit, a speaker, and an embedded PC with 802.11g wireless networking. An important design goal of the Huggable™ is to make the technology invisible to the user: you should think of it not as a robot but as a richly interactive teddy bear. For instance, in a social communication application, a grandparent who lives far away could play with a grandchild "as the bear," controlling the semi-autonomous robot via a Web site and seeing and hearing the child through the Huggableʼs eyes and ears. In an education application, the remote adult could be a teacher helping the child learn a second language.
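
The semi-autonomous split this implies, where the remote operator issues high-level intents while life-like low-level behavior stays autonomous, can be sketched as follows. All names are hypothetical, not the systemʼs actual interface:

    import queue
    import random

    operator_commands = queue.Queue()   # fed by a web front end in practice

    def autonomous_idle():
        """Low-level life-like behavior the operator never micromanages."""
        return random.choice(["blink", "ear_twitch", "breathe"])

    def control_step():
        """One tick of the behavior arbiter: operator intent takes
        priority; autonomous idle behavior fills the gaps."""
        try:
            intent = operator_commands.get_nowait()   # e.g. "wave", "nod"
        except queue.Empty:
            intent = None
        return intent or autonomous_idle()

    operator_commands.put("wave")
    print(control_step())   # -> "wave" (the grandparent's command)
    print(control_step())   # -> some autonomous idle behavior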

Camera Culture
Ramesh Raskar

The Centerʼs work will also focus on creating tools to improve the ways that we capture, share, and display visual information. Projects include:

Computational Cinematography and Display: This group of projects encompasses future cameras for movies and news, a universal software platform for sharing visual content, and 4-D and 6-D displays for richer collection and presentation of information.

Performance Capture: This work will enhance motion capture. For example, Second Skin is a project to build a wearable fabric that supports millimeter-accurate location and bio-parameter tracking at thousands of points on the body. Such a fabric can compute and predict 3-D representations of human activity and use that information to augment human performance. The Shield Field project is a shadow-based method for scanning 3-D objects in a single shot.
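
As a generic illustration of the tracking problem being solved at each such body point (Second Skinʼs actual optical method differs), here is a simple least-squares multilateration of a tag from distance readings to known anchors:

    import numpy as np

    def locate(anchors, distances):
        """Linearize ||p - a_i||^2 = d_i^2 against the first anchor and
        solve the resulting linear system for the tag position p."""
        a0, d0 = anchors[0], distances[0]
        A = 2 * (anchors[1:] - a0)
        b = (d0**2 - distances[1:]**2
             + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
        p, *_ = np.linalg.lstsq(A, b, rcond=None)
        return p

    anchors = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
    true_p = np.array([0.2, 0.3, 0.4])
    d = np.linalg.norm(anchors - true_p, axis=1)
    print(locate(anchors, d))   # ~ [0.2, 0.3, 0.4]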

Morphable Studios: This technology allows one physical space to represent a variety of settings. Projects involve techniques to augment and programmatically change the appearance of physical objects: for example, making a white clay object sitting on a table in front of you appear to be made of gold or plastic. The physical object is illuminated with a data (or video or slide) projector, and the images to be projected are computed with a 3-D graphics-rendering program. This makes it possible to change the appearance of real objects, adding special effects to the world around you.
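
A minimal sketch of the projector-side computation, assuming the objectʼs geometry and reflectance are already known: render the desired material over the geometry from the projectorʼs viewpoint, then divide out the white surfaceʼs own albedo so the projected light produces the intended appearance. Rendering details are elided; arrays stand in for images:

    import numpy as np

    def compensate(desired_rgb, surface_albedo, projector_gain=1.0):
        """Per-pixel radiometric compensation: the surface reflects
        projected light times its albedo, so divide the albedo back out,
        clamping to what the projector can physically emit."""
        out = desired_rgb / np.clip(surface_albedo, 1e-3, None)
        return np.clip(out / projector_gain, 0.0, 1.0)

    desired = np.full((480, 640, 3), [0.83, 0.69, 0.22])  # "gold" render
    albedo = np.full((480, 640, 3), 0.9)                  # white clay
    frame = compensate(desired, albedo)   # image sent to the projector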

Being There: This projector-based approach provides a way to visualize re-creations of real and imaginary sites that are both visually and spatially realistic. Users have a strong sense of immersion and natural interaction as they walk around a virtual site.

Programmable Movies: This research explores new ways to create movies that change with context (observer, emotions, place, or time). Projects such as Long-Distance Barcodes work to make both cameras and the world more intelligent by allowing users to piece together and merge separate images using metadata encoded into the image. This work will allow multiple viewpoints to be merged to create richer stories from varied storytellers.
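
A toy sketch of that metadata-driven merging, with an invented metadata format: each image carries its position in a shared scene frame (as a barcode might encode it), so separate viewpoints can be registered without any feature matching:

    import numpy as np

    def merge(shots, canvas_size=(1000, 1000)):
        """shots: list of (image, metadata) pairs, where metadata gives
        the image's offset in a shared scene coordinate frame."""
        canvas = np.zeros(canvas_size)
        weight = np.zeros(canvas_size)
        for img, meta in shots:
            r, c = meta["offset"]          # decoded from the barcode
            h, w = img.shape
            canvas[r:r+h, c:c+w] += img
            weight[r:r+h, c:c+w] += 1
        return canvas / np.maximum(weight, 1)   # average the overlaps

    a = (np.ones((100, 100)), {"offset": (0, 0)})
    b = (np.ones((100, 100)), {"offset": (50, 50)})
    scene = merge([a, b])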


Nov 25, 12:17

3 Responses

  1. Danny Bloom:

    Some of this is BS….it’s one thing to read and tell stories, but it is quite another thing to be WELL READ….what are you guys going to be doing about creating people who are WELL READ? Nada….. David Kirkpatrick, too. Nada.


  2. Tell me a story « Quixotique’s:

    […] (literally, and unfortunately often figuratively as well). That is where I learned that MIT has launched a Center for Future Storytelling. From their page I am reproducing verbatim the description of two […]


  3. BRRRPTZZAP! the Subject » “…To Discover New Modes of Storytelling”:

    […] I signed up for a Screenwriting class last semester, but wound up dropping it after the first class when the professor referred to Happy Go Lucky as “unconventional,” used Juno as an example of a film with a good screenplay, and said that although the school at large tended to focus on “film as art” (bullshit), this class wouldn’t take that approach. (What other approach is there? If film isn’t art, what is it?!) It really bothers me that there seem to be so few people interested in narrative film that aren’t completely boring and traditionally-minded; so few people that believe that narrative film doesn’t have to mean Hollywood (or worse: “indie”) and that art film doesn’t have to mean Brakhage. I was required to write a film manifesto for another class; one excerpt: “My cinema strives to discover new modes of storytelling.” So I was excited, then, to learn of MIT’s new Center for Future Storytelling. […]

