What is Object-Based Media, anyway?
What happens when self-aware content meets context-aware consumer electronics? We make systems that explore how sensing, understanding, and new interface technologies can change everyday life, the ways in which we communicate with one another, storytelling, play, and entertainment.
Bianca Datta, Ermal Dreshaj
DUSK was created as part of the MIT Media Lab Wellness Initiative (a Robert Wood Johnson Foundation grant) to provide private, restful spaces for people at the workplace. DUSK promotes a vision of a new type of “nap pod,” where workers are encouraged to use the structure for regular breaks and meditation on a daily basis. The user is given the much-needed privacy to take a phone call, focus, or rest inside the pod for short periods during the day. The inside can be silent or filled with binaural beats audio, pitch black or illuminated by a sunlamp – whatever works for the user to get the rest and relaxation needed to stay healthy and productive. DUSK uses a parametric press-fit design, making it scalable and allowing fabrication to be customized on a per-user basis.
Ambi-blinds are solar-powered, sunlight-driven window blinds. A reinvention of a common household item, Ambi-blinds use the level of sunlight striking the window to automatically control the tilt of the blinds, regulating how much sunlight is cast into a room over the course of the day. Sleep studies indicate that regularly waking with sunlight promotes wellness and sleep quality by reinforcing our circadian rhythm. By automatically regulating the user’s exposure to sunlight, Ambi-blinds promote well-being in a non-invasive way, and they close at night to allow for privacy.
Creating Ambi-blinds: How To Make Documentation
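The core control idea – mapping the measured light level to a tilt angle, with a closed position at night for privacy – can be sketched in a few lines. The thresholds and angle range below are illustrative assumptions, not values from the project:

```python
def tilt_angle(lux, lux_min=100, lux_max=10_000, closed=0.0, fully_open=90.0):
    """Map an ambient light reading (lux) to a blind tilt angle in degrees.

    Below lux_min (night) the blinds stay closed for privacy; above
    lux_max they open fully; in between, the tilt scales linearly with
    the light level. All thresholds here are hypothetical.
    """
    if lux <= lux_min:
        return closed
    if lux >= lux_max:
        return fully_open
    frac = (lux - lux_min) / (lux_max - lux_min)
    return closed + frac * (fully_open - closed)
```

In a deployed device this function would sit in a loop reading a photodiode and driving a servo; the linear mapping could be replaced with any curve tuned to the room.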
Ermal Dreshaj, Sang-Won Leigh (Fluid Interfaces Group), Artem Dementyev (Responsive Environments Group)
Project LIMBO, standing for Limbs In Motion By Others, aims to create technologies that support people who have lost the ability to control a part of their body, or who are attempting sophisticated tasks beyond their capabilities. Our strategy is to use functional electrical stimulation (FES) as a means of direct control of, and feedback on, muscle activity – reprogramming the way human body parts are controlled. We envision scenarios in which muscle stimulation extends motor-control capability or gives feedback for adjusting body motions. For example, paralyzed people could regain the experience of grasping by having their hand muscles actuated based on gaze gestures. People who have lost leg control could control their legs with finger movements – and be able to drive a car without special assistance. LIMBO has been a featured demonstration in workshops at SXSW 2014 and CHI 2014.
LIMBO: Sci-fi 2 Sci-fab Page
LIMBO: Fluid Interfaces landing page
Ermal Dreshaj, Dan Novy
Bottles&Boxes uses optical sensors to determine which bottle is placed into which slot. This work was done in collaboration with Natura, in an effort to better understand how people use products at home. Bottles&Boxes allows tracking usage as well as ranking the order of preference of products based on the order in which they are replaced in a box, with information reported wirelessly for analysis. With Bottles&Boxes, we envision a scenario where users are able to give feedback about a product to a company in real time, which allows for better iterative design, anticipating market demands and the needs of users.
Multilayer Diffractive BxDF Displays
Sunny Jolly and members of the Camera Culture group
With a wide range of applications in product design and optical watermarking, computational BxDF display has become an emerging trend in the graphics community. Existing surface-based fabrication techniques are often limited to generating only specific angular frequencies, angle-shift-invariant radiance distributions, and sometimes only symmetric BxDFs. To overcome these limitations, we propose diffractive multilayer BxDF displays. We derive forward and inverse methods to synthesize patterns that are printed on stacked, high-resolution transparencies and reproduce prescribed BxDFs with unprecedented degrees of freedom within the limits of available fabrication techniques.
Toward BxDF Display using Multilayer Diffraction (SIGGRAPH Asia 2014 – Project Video)
Arata Miyamoto, Valerio Panzica La Manna, Konosuke Watanabe, Yosuke Bando, Daniel J. Dubois, V. Michael Bove, Jr.
ShAir creates autonomous ad-hoc networks of mobile devices that do not rely on cellular networks or the Internet. As people move around, their phones automatically talk to other phones wirelessly when they come close to each other, and they exchange content. Content can be messages, pictures, video, emergency alerts, GPS coordinates, SOS signals, sensor readings, and so on. At any moment, one piece of content may be shared within one group of nearby people and another piece within a different group. As people move, their devices carry shared content in storage to other places and share it further with other people. In this way, content can hop through people’s devices, taking advantage of people’s motion and the devices’ radios and storage. No infrastructure such as cell towers, Wi-Fi hotspots, or communication cables is required.
ShAir Web site
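The store-carry-forward behavior described above can be sketched as a simple anti-entropy exchange between two devices in radio range. The class and method names are illustrative only; the real ShAir middleware also handles radio discovery, prioritization, and storage limits:

```python
class Device:
    """Minimal sketch of ShAir-style store-carry-forward sharing."""

    def __init__(self, name):
        self.name = name
        self.store = {}  # content_id -> payload

    def publish(self, content_id, payload):
        """Add locally created content (photo, alert, sensor reading...)."""
        self.store[content_id] = payload

    def encounter(self, other):
        """When two devices meet, each copies the items it is missing."""
        for cid, payload in other.store.items():
            self.store.setdefault(cid, payload)
        for cid, payload in list(self.store.items()):
            other.store.setdefault(cid, payload)


# Content hops A -> B -> C with no infrastructure, riding on people's motion:
a, b, c = Device("A"), Device("B"), Device("C")
a.publish("photo1", b"...")
a.encounter(b)  # A and B come into range; B now carries photo1
b.encounter(c)  # later, B meets C elsewhere; photo1 has hopped to C
```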
Digital Synesthesia seeks to evolve the idea of Human-Computer Interfacing into Human-World Interacting. It aims to let users experience the world by perceiving information outside their sensory capabilities. Modern technology can already detect information from the world beyond our natural sensory spectrum, but what has not yet been achieved is a way for our brains and bodies to incorporate this new information into our sensory toolkit, so that we can understand the surrounding world in new and undiscovered ways. The long-term vision is to give users the ability to turn senses on and off depending on the desired experience. This project is part of the Ultimate Media initiative and will be applied to the navigation and discovery of media content.
Playgrounds are suffering abandonment. The pervasiveness of portable computing devices has taken over most of the “play time” of children and teens and confined it to human-screen interaction. The digital world offers wonderful possibilities the physical cannot, but it does not substitute for those the physical environment possesses. As much as we can augment ourselves digitally, we can do so physically as well. Playgrounds are physical, public, ludic spaces shared by a community through generations. They can be built by anyone and out of virtually anything: every landscape – urban, rural, or suburban – offers a myriad of materials, known and unthought of, to be put to the service of play. Playground design has huge potential when we think in constructivist terms and of community building; furthermore, playgrounds offer an overlooked opportunity to learn about physics and chemistry, perception and illusion – subjects not always taken into account in their design. A successful playground should involve children as co-designers, bring the community together, and challenge mental, social, and motor skills while providing sensory stimulation and perceptual awareness. Playgrounds bring out some of the best qualities of children: their ability to befriend without profiling, opportunism, or self-interest. At a playground, children make friends, not connections! My interest lies in taking the physical and the digital and re-imagining and creating playscapes that merge the best of both worlds, engaging bodies, minds, and people together.
Infinity-by-Nine augments the traditional home theater or other viewing environment by immersing the viewer in a three-dimensional ensemble of imagery generated by analyzing an existing video stream. The system uses optical flow, color analysis, and pattern-aware out-painting algorithms to create a synthetic light field beyond the screen edge and projects it onto walls, ceiling, or other suitable surfaces within the viewer’s peripheral awareness. Infinity-by-Nine takes advantage of the lack of detail and different neural processing in the peripheral region of the eye. Users perceive the scene-consistent, low-resolution color, light, and movement patterns projected into their peripheral vision as a seamless extension of the primary content.
Project Video (Media Lab LabCAST)
Dan Novy, Santiago Alfaro, and members of the Digital Intuition Group
Remember telling scary stories in the dark with flashlights? Narratarium is a 360-degree context-aware projector that creates an immersive environment to augment stories and creative play. We use natural language processing to listen to and understand stories being told, and thematically augment the environment with images and sound. Other activities, such as reading an e-book or playing with sensor-equipped toys, can likewise create an appropriate projected environment, and a traveling parent can tell a story to a child at home and fill the room with images, sounds, and presence.
James Barabas, Bianca Datta, Ermal Dreshaj, Sunny Jolly, and Daniel Smalley
Holographic video work, which originated in the Spatial Imaging Group (using computing hardware developed by the Object-Based Media Group), moved to our lab in 2003. We are developing electro-optical technology that will enable the graphics processor in your PC to generate holographic video images in real time on an inexpensive screen. As part of this work we are developing gigapixel-per-second light modulator chips, real-time rendering methods to generate diffraction patterns from 3-D graphics models and parallax images, and user interfaces and content for holographic television. Most recently we have demonstrated real-time transmission of holographic video.
Cheap, color holographic video (MIT News Release)
3-D TV? How about holographic TV? (MIT News Release)
Integrated Optics for Holographic Video
Bianca Datta, Sunny Jolly, and Daniel Smalley
The performance of and affordances offered by holographic video displays are critically dependent on the modulators employed for wavefront modulation. Current pixelated modulators (for instance, MEMS and LCoS devices) cannot provide the requisite bandwidth needed for displays with large area and large viewing angle and therefore require costly and cumbersome optical architectures to enable their use in holographic displays. We are bringing the tools and techniques of waveguide acousto-optics to bear on the challenges of cost and complexity in holographic video displays.
4K Comics applies the affordances of ultra high resolution screens to traditional print media such as comic books, graphic novels, and other sequential art forms. The comic panel becomes the entry point to the corresponding moment in the film adaptation, while scenes from the film indicate the source frames of the graphic novel. The relationship between comics, films, parodies, and other support materials can be navigated using native touch screens, gestures, or novel wireless control devices. Big Data techniques are used to sift, store, and explore vast catalogs of long running titles, enabling sharing and remixing among friends, fans, and collectors.
Ultra-High Tech Apparel
Philippa Mothersill, Laura Perovich, and MIT Media Lab Director’s Fellows Christopher Bevans (CBAtelier) and Philipp Schmidt
The classic lab coat has been a reliable fashion staple for scientists around the world. But Media Lab researchers are not only scientists – we are also designers, tinkerers, philosophers, and artists. We need a different coat! Enter the Media Lab coat – our lab coat is uniquely designed for, and with, the Media Lab community. It features reflective materials, new bonding techniques, and integrated electronics. One size fits One – each Labber has different needs. Some require access to Arduinos, others need moulding materials, yet others carry around motors or smart tablets. The lab coat is a framework for customization. Ultra High Performance Lab Apparel – the coat is just the start. Together with some of the innovative member companies of the MIT Media Lab, we are exploring protective eyewear, footwear, and everything in between.
Dressed in Data
“Dressed in Data” steps beyond data visualizations to create data experiences that engage not only the analytic mind, but also the artistic and emotional self. Data is taken from a study of indoor air pollutants to create four outfits, each outfit representing findings from a particular participant and chemical class. Pieces are computationally designed and laser cut, with key attributes of the data mapped to the lace pattern. This is the first project in a series that seeks to create aesthetic data experiences that prompt researchers and laypeople to engage with information in new ways.
BigBarChart is an immersive 3D bar chart that provides a physical way for people to interact with data. It takes data beyond visualization to map out a new area, “data experiences”: multisensory, embodied, and aesthetic interactions.
BigBarChart is made up of a number of bars that extend up to 8′ tall to create an immersive experience. Bars change height and color and respond to interactions that are direct (e.g. person entering the room), tangible (e.g. pushing down on a bar to get meta information), or digital (e.g. controlling bars and performing statistical analyses through a tablet). BigBarChart helps both scientists and the general public understand information from a new perspective.
EmotiveModeler: An Emotive Form Design CAD Tool
Whether or not we’re experts in the design language of objects, we have an unconscious understanding of the emotional character of their forms. EmotiveModeler integrates knowledge about our emotive perception of shapes into a CAD tool that uses descriptive adjectives as an input to aid both expert and novice designers in creating objects that can communicate emotive character.
Read more here: emotivemodeler.media.mit.edu
Laura Perovich, Pip Mothersill, and Jenny Broutin Farah
This project investigates soft mechanisms, origami, and fashion. We created a modified Miura fold skirt that changes shape through pneumatic actuation. In the future, our skirt could dynamically adapt to the climatic, functional, and emotional needs of the user – for example, it might become shorter in warm weather, or longer if the user felt threatened.
Direct Fringe Writing of Computer-Generated Holograms
Photorefractive polymer has many attractive properties for dynamic holographic displays; however, the current display systems based around its use involve generating holograms by optical interference methods that complicate the optical and computational architectures of the systems and limit the kinds of holograms that can be displayed. We are developing a system to write computer-generated diffraction fringes directly from spatial light modulators to photorefractive polymers, resulting in displays with reduced footprint and cost, and potentially higher perceptual quality.
3-D Telepresence Chair
An autostereoscopic (no glasses) 3D display engine is combined with a “Pepper’s Ghost” setup to create an office chair that appears to contain a remote meeting participant. The system geometry is also suitable for other applications such as tabletop displays or automotive heads-up displays.
Dan Novy and Santiago Alfaro
An intelligent basketball net (which visually and behaviorally matches a standard NBA net) can measure the energy behind a dunked basketball. Its first public appearance was in the 2012 Slam Dunk Competition. Project video
Learning to dance is a kinetic experience that can be enhanced through strengthening the connection between teachers, students, and dance partners. This project seeks to increase communication by collecting movement information and displaying it immediately through input and output devices in shoes and clothing. This playful feedback re-imagines the process of dance practice and education.
Laura Perovich, David Nunez, Christian Ervin
This project presents a 5-year vision for Radical Textiles–fabrics with computation, sensing, and actuating seamlessly embedded in each fiber. Textiles are held close to the body–through clothing, bedding, and furniture–providing an opportunity for novel tactile interactions. We explore possible applications for Radical Textiles, propose a design framework for gestural and contextual interaction, and discuss technical advances that make this future plausible.
Calliope builds on the idea of documentation as a valuable asset to learning. Calliope displays the modifications or “history” of every page by placing the corresponding tag. It also allows for multiple “layers” to be displayed at once, so the user can build up on their previous work or start from scratch. Its tags are human readable and can be drawn directly onto the pages of the sketchbook. Calliope also adds the ability to record sound on each page.
We are exploring technical and creative implications of using a mobile phone or tablet (and possibly also dedicated devices like toys) as a controllable “second screen” for enhancing television viewing. Thus a viewer could use the phone to look beyond the edges of the television to see the audience for a studio-based program, to pan around a sporting event, to take snapshots for a scavenger hunt, or to simulate binoculars to zoom in on a part of the scene.
Simple Spectral Sensing
The availability of cheap LEDs and diode lasers in a variety of wavelengths enables simple, inexpensive spectroscopic sensors for specific tasks such as food shopping and preparation, healthcare sensing, material identification, and detection of contaminants or adulterants. This enables applications in food safety, health and wellness, sports, and education, among others. Because the sensors are specialized, they are intrinsically lower cost and lower power, have a higher signal-to-noise ratio, and have a reduced form factor.
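A single LED/photodiode pair at one wavelength already supports a basic measurement: compare the light transmitted through a sample against a clean reference and compute the Beer-Lambert absorbance. The sketch below illustrates this general principle; the threshold value is a hypothetical placeholder, not a calibration from the project:

```python
import math

def absorbance(sample_intensity, reference_intensity):
    """Beer-Lambert absorbance A = -log10(I / I0) at one LED wavelength.

    I is the intensity measured through the sample and I0 the intensity
    through a clean reference. Higher absorbance at a wavelength that a
    contaminant absorbs suggests its presence.
    """
    return -math.log10(sample_intensity / reference_intensity)

def flag_adulterant(sample_intensity, reference_intensity, threshold=0.3):
    """Flag a sample whose absorbance exceeds a (hypothetical) threshold."""
    return absorbance(sample_intensity, reference_intensity) > threshold
```

A task-specific sensor would repeat this at the handful of wavelengths relevant to its target material, which is what keeps the hardware cheap and the signal-to-noise ratio high.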
The NeverEnding Drawing Machine (2011)
Edwina Portocarrero, David Robert, Sean Follmer, Michelle Chung
The Never-Ending Drawing Machine (NEDM), a portable stage for collaborative, cross-cultural, cross-generational storytelling, integrates a paper-based tangible interface with a computing platform in order to emphasize the social experience of sharing object-based media with others. Incorporating analog and digital techniques as well as bi-directional capture and transmission of media, it supports co-creation among peers whose expertise may not be in the same medium, extending the possibility of integrating objects as objects, as characters, or as backgrounds. Calliope is a newer, portable version of the system. Project video
Gamelan Headdresses (2011)
Edwina Portocarrero, Jesse Gray
Blending tradition and technology, these headdresses aim to illustrate “Galak Tika,” Bahasa Kawi (a classical Javanese dialect heavily influenced by Sanskrit) for “intense togetherness.” Considering the cultural, ritualistic, and performative aspects of Gamelan music, they are at once rooted in tradition through material, shape, and color, while using incorporated electroluminescent wire to let the audience visualize the complex rhythmic interlocking, or “kotekan,” that makes Gamelan music unique. The headdresses are wireless, have a long battery life, and are robust and lightweight. Their control is quite flexible, allowing MIDI, OSC, or direct audience participation through an API.
Edwina Portocarrero, David Cranor
Pillow-Talk is designed to aid creative endeavors through the unobtrusive acquisition of unconscious self-generated content, permitting reflexive self-knowledge. Composed of a seamless voice-recording device embedded in a pillow, Pillow-Talk captures that which we normally forget. It allows users to record their dreams in a less mediated way, aiding recollection by priming the experience and providing distraction-free recall and capture through embodied interaction. The Jar is a simple Mason jar with amber-colored LEDs dangling inside it, evocative of fireflies. The neck of the Jar encloses the Animator, which incorporates data storage, sound playback, 16-channel PWM LED control, and wireless communication via an XBee radio on a single small board.
ShakeOnIt is a project that explores interaction modalities made possible by multi-person gestures. It was developed as part of Hiroshi Ishii’s Tangible Interfaces class. My team and I created a pair of gloves that detect a series of gestures forming a “secret handshake.” This method of authentication encodes the process of supplying credentials to the system in a widely accepted social ritual, the handshake. Performing the handshake becomes something more than simply giving someone a password: by its very nature, the system tests that the people using it have practiced the series of gestures together often enough to perform it successfully. The system thus verifies not only credentials, but social familiarity.
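One way to see why this rewards practice is to sketch the matching rule: the observed gesture sequence must match the enrolled one exactly, and each gesture's timing must fall within a tolerance band, which untrained pairs rarely hit. This matcher is an assumption for illustration, not the project's actual classifier, and the tolerance value is hypothetical:

```python
def authenticate(observed, enrolled, tolerance=0.25):
    """Check an observed handshake against an enrolled one.

    Each gesture is a (name, duration_seconds) pair as read from the
    glove sensors. Authentication succeeds only if the gesture sequence
    matches exactly and each gesture's duration is within a relative
    tolerance of the enrolled duration.
    """
    if len(observed) != len(enrolled):
        return False
    for (g_obs, t_obs), (g_ref, t_ref) in zip(observed, enrolled):
        if g_obs != g_ref:
            return False  # wrong gesture breaks the handshake
        if abs(t_obs - t_ref) > tolerance * t_ref:
            return False  # right gesture, but timing is off
    return True
```

Tightening `tolerance` raises the amount of shared practice needed before two people can reliably authenticate together.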
This doorknob has capacitive touch sensors on its backside so that it can sense the user’s grasp. Because the touch sensors are concealed, the doorknob is inherently more resistant to simple “looking over the shoulder” attacks than a standard door-access keypad. Just as important, although the touch sensors add an information channel between the user and the door’s locking mechanism, they do not require the user to change the normal pattern of interacting with an ordinary doorknob.
Magic Hands (2010)
An assortment of everyday objects is given the ability to understand multitouch gestures of the sort used in mobile-device user interfaces, enabling people to use such increasingly familiar gestures to control a variety of objects and to “copy” and “paste” configurations and other information among them.
Thinner Client (2010)
Many more people in the world have access to television screens and mobile phone networks than to full-fledged desktop computers with broadband Internet connections. We created the Thinner Client as an exploration of leveraging this pre-installed infrastructure to enable low-cost computing. The device uses a television as a display, connects to the Internet via a standard serial port, and costs less than US $10 to produce.
Graspables: The Bar of Soap (2009)
Grasp-based interfaces combine finger-touch pattern sensing with pattern recognition algorithms to provide interfaces that can “read the user’s mind.” As an example, the “Bar of Soap” is a hand-held device that can detect the finger-touch pattern on its surface and determine its desired operational mode (e.g. camera, phone, remote control, game) based on how the user is grasping it. We have also managed to fit the electronics into a baseball that can classify a pitch based on how the user is gripping the ball (which we are using as the input to a video game). Project video
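The mode-selection idea – classify the finger-touch pattern on the surface, then switch modes – can be illustrated with a toy nearest-neighbor matcher over binary sensor readings. The sensor layout, templates, and distance metric below are illustrative assumptions; the actual Bar of Soap used trained pattern-recognition models over a much denser sensor array:

```python
def classify_grasp(touch_pattern, templates):
    """Pick the mode whose template is closest in Hamming distance.

    touch_pattern: tuple of 0/1 readings from the surface touch sensors.
    templates: dict mapping mode name -> a representative pattern.
    """
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    return min(templates, key=lambda mode: hamming(touch_pattern, templates[mode]))


# Hypothetical 8-sensor templates for three device modes:
templates = {
    "camera": (1, 1, 0, 0, 0, 0, 1, 1),  # two-handed landscape grip
    "phone":  (0, 1, 1, 1, 1, 1, 1, 0),  # upright one-handed hold
    "remote": (1, 0, 0, 1, 1, 0, 0, 1),  # loose palm grip
}
```

A noisy reading still lands on the nearest template, so the device can guess its intended mode from an imperfect grasp.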
The Connectibles system is a peer-to-peer, server-less social networking system implemented as a set of tangible, exchangeable tokens. The Sifteo cubes are an outgrowth of this work and the Siftables project from the Fluid Interfaces group. Project page
BYOB is a computationally enhanced modular textile system that makes available a new material from which to construct “smart” fabric objects (bags, furniture, clothing). The small modular elements are flexible, networked, input/output capable, and interlock with other modules in a reconfigurable way. The object built out of the elements is capable of communicating with people and other objects, and of responding to its environment. Project Web site
As featured on Good Morning America, Jay Leno’s monologue, and in the comic strip Sylvia… Yes, we (Gauri, to be specific) were responsible for Clocky, the alarm clock that hides when the user presses the snooze button.
Collaborating Input-Output Ecosystems (2003-2006)
Jacky Mallett, Seongju Chang, Jim Barabas, Diane Hirsh, and Arnaud Pilpre
The Smart Architectural Surfaces system and the Eye Society mobile camera robots are two of the platforms we have developed for exploring group-forming protocols for self-organized collaborative problem solving by intelligent sensing devices. We have also used these platforms for experiments in ecosystems of networked consumer electronic products.
Personal Projection (2003)
The Personal Projection project aims to add video projection capabilities to very small devices without appreciably increasing their cost, form factor, or power consumption. The projector is based on a monolithic array of VCSELs.
Named after the Egyptian goddess of fertility, Isis is tailored in a number of ways — both in syntax and in internal operation — to support the development of demanding responsive media applications. Isis is a “lean and mean” programming environment, appropriate for research and laboratory situations. Isis software libraries strive to follow a “multilevel” design strategy, consisting of multiple interoperable layers that each offer a different level of abstraction of a particular kind of functionality but that also use the same core language elements as a basis. The small yet complete syntax fosters collaboration by lessening the burden on novices while still allowing experienced programmers to take full advantage of their skills. Isis also provides an efficient mechanism for extending functionality by accessing software libraries written in other languages. The Isis Web site gives a complete manual on the language, and more importantly it links to many different projects that use Isis. Isis is now available for free download under the GNU GPL (though certain libraries and applications are made available only to our sponsors and collaborators).
Three of our best-known Isis projects were HyperSoap (a TV soap opera with hyperlinked objects), Reflection of Presence (a telecollaboration system that segmented participants from their backgrounds and placed them in a gesture-controlled shared space), and iCom (an awareness and communication portal that linked lab spaces at the MIT Media Lab, Media Lab Europe, and other locations).
John Watlington and V. Michael Bove, Jr.
The Cheops Imaging Systems were compact and inexpensive self-scheduling dataflow supercomputers that we developed for experiments in super-HDTV, interactive object-based video, and real-time computation of holograms. A Cheops system ran the Mark II holographic video display for over ten years before its retirement. Read more about Cheops