I began my internship in the Digital Media Department's Media Lab in the summer of 2013, after my first year as a master's candidate in the Interactive Telecommunications Program (ITP) at New York University's Tisch School of the Arts. Coming from a background in music and multimedia performance, I found my first year at ITP frankly mind-blowing: I learned a whole host of new technologies that opened up entirely new ways of approaching narrative, visual imagery, and audience participation, and my work in the Media Lab was propelled by this new outlook.
The idea for MetMirror came about while exploring the photographic archive of objects in the Met's collection. What struck me most was not only the beauty of many of the photographs, but also the consistency in visual language used over the course of many years—similar lighting, framing, and that particular grey background all suggested a "visual culture" that could be elaborated upon. Some type of interactive installation supported by such imagery seemed a great opportunity to create an immersive Museum experience using this rich visual legacy.
After much brainstorming, I set about making a digital "mirror"; that is, I would take the raw data of a live video feed and use it to create an interactive mosaic of images culled from the Met's archive. This form has a rich history in interactive art, but using the Museum's imagery created a unique historical interplay between the Met's cultural artifacts and current Museum-goers. I wanted visitors to see themselves in the collection.
MetMirror was built with Processing, an open-source programming language particularly well suited to visual output. In terms of hardware, it requires only a video camera. The source code is available here. The basic mechanics are as follows:
1. In advance, I prepared a simple grid of images saved as a single PNG file; in this case, the images were randomly selected from the Met's collection.
2. The Processing program loaded the image file and broke it up into its constituent images. It then calculated the average color of each image and stored that information.
3. For every frame of video from the camera (about thirty frames per second), I inspected that frame's pixels. A 640 x 480 frame meant examining 307,200 pixels, thirty times per second!
4. Similar to step two, I examined each video pixel, found its color, and stored it.
5. Once all of this information was stored, I matched the color of each video pixel to the archive image with the closest average color.
6. Finally, I drew the matched images to the screen in place of the video pixels.
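The steps above can be sketched in a few lines of code. The original sketch was written in Processing, but the core logic is language-agnostic; this Python version uses tiny hand-made sample "tiles" (lists of RGB tuples) rather than real archive images, and the function names are my own, not those from the MetMirror source.

```python
def average_color(pixels):
    """Average a list of (r, g, b) tuples into one representative color."""
    n = len(pixels)
    return (
        sum(p[0] for p in pixels) // n,
        sum(p[1] for p in pixels) // n,
        sum(p[2] for p in pixels) // n,
    )

def color_distance(c1, c2):
    """Squared Euclidean distance between two RGB colors."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2))

def best_match(pixel_color, tile_colors):
    """Index of the archive tile whose average color is closest to the pixel."""
    return min(range(len(tile_colors)),
               key=lambda i: color_distance(pixel_color, tile_colors[i]))

# Step 2: precompute the average color of each archive tile.
# Each "tile" here is just a couple of sample pixels for illustration.
tiles = [
    [(200, 40, 40), (210, 50, 50)],     # a reddish image
    [(30, 30, 200), (40, 40, 210)],     # a bluish image
    [(240, 240, 240), (250, 250, 250)], # a light grey image
]
tile_colors = [average_color(t) for t in tiles]

# Steps 3-6: for each pixel of a (tiny, fake) video frame, pick the tile
# with the nearest average color; drawing is left out of this sketch.
frame = [(205, 45, 45), (245, 245, 245)]
mosaic = [best_match(p, tile_colors) for p in frame]
```

At 640 x 480 this nearest-color search runs over 307,200 pixels per frame, which is why precomputing the tile averages once, up front, matters for keeping the mirror interactive.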
The real power of this implementation is that any collection of images could be used to suit the occasion, whether that be a special event or a new exhibition. My next step is to retool MetMirror so that it can connect to a dynamic feed of images (like a web server). For example, when a special exhibition like Lost Kingdoms opens at the Met, I'd like to enable curators to easily drag and drop the most compelling images from the exhibit into a folder (à la Dropbox) and, BAM!, MetMirror would automatically update with these new images.
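One simple way such a drop-folder could work is to poll the directory and diff its contents between snapshots. This is only a sketch of the idea, not the planned MetMirror implementation; the function names and the polling approach are my assumptions.

```python
import os

def list_images(folder, extensions=(".png", ".jpg")):
    """Return the sorted image filenames currently sitting in the folder."""
    return sorted(f for f in os.listdir(folder)
                  if f.lower().endswith(extensions))

def detect_changes(previous, current):
    """Compare two folder snapshots; return (added, removed) filenames."""
    prev, curr = set(previous), set(current)
    return sorted(curr - prev), sorted(prev - curr)

# In a running installation, something like the following loop would
# re-run the tile-averaging step whenever a curator drops in new images:
#
#   snapshot = list_images("exhibition_images")   # hypothetical folder
#   while True:
#       current = list_images("exhibition_images")
#       added, removed = detect_changes(snapshot, current)
#       if added or removed:
#           reload_tiles(current)   # hypothetical re-averaging hook
#           snapshot = current
#       time.sleep(2.0)
```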
Developing digital or interactive work within a museum context has been challenging, because one is participating in an ever-evolving dialogue. On the one hand, I want to honor the history of an institution like the Met; on the other, I hope to generate new meanings for all the Met has to offer, using exciting new tools. As an artist and technologist, I have found the Media Lab an incredible place to work because this dialogue is always ongoing. The Media Lab's efforts to foster close relationships with artists are the most important step toward building a bridge between the amazing things The Metropolitan Museum of Art has already built and the amazing things it will build in the future.