Pictured: Playing with the dot screen in Greeley Square

TNT teamed up with Breakfast to create a one-of-a-kind interactive billboard for their new show, "Perception". While most advertisers in New York City are exploring the latest in LED technology, creating a homogeneous landscape of flashing lights, Breakfast reached into the past to revive a forgotten technology. The billboard is made up of over 44,000 dots which are white on one side, black on the other, and can flip between the two. While Breakfast worked out the painstaking details of assembling this screen, I was tasked with writing the software to make it interactive. The project went smoothly -- completed in just about one month -- but it wasn't without some unique challenges.

This project has received a great deal of press from the likes of Engadget, Gizmodo, Make, CAN, and more. I'd like to use this forum to go over some of the details that went into its creation.

SOFTWARE

The software was written around the Cinder framework. The idea was to fill the display with text, convert it to an interactive particle system, and inject a user's image into the scene. Certain letters would be left behind, forming a phrase related to the show. In "Perception", Dr. Daniel Pierce sometimes finds clues embedded in documents. Before establishing concrete plans on the execution of this concept, I built little prototypes of each piece. The result of these studies was a single, flexible application with a myriad of controllable parameters. All imagery was boiled down into a binary image which could be both displayed on a monitor and parsed out and broadcast to the dot screen.

TINTED WINDOWS, PART ONE

Utilizing a technique I've employed numerous times with the Microsoft Kinect, I wrote the software to use a time-of-flight (ToF) camera as a virtual green screen.
Because the Kinect does not work well in direct sunlight, and this installation would be exposed to plenty of it, we went with the Panasonic D-Imager EKL-3106. It's an amazing device which, as advertised, works perfectly outdoors in direct sunlight. However, we ran into a pretty serious issue when testing the devices on site. The windows through which these cameras needed to see were thick and tinted. The LED bulbs on the D-Imager reflected off of the glass, unlike the laser emitters used by other devices. We went back to the Kinect, which could barely make it out to the street. To be thorough, we grabbed an Asus XTion Pro Live. Despite being the cheapest device, with the strangest name to try to pronounce, it met our needs. We had to face the exhibit towards a different side of the building to maximize shade, but it turned out to be the better side for traffic, anyway. We were eventually able to place the cameras on the exterior of the building using some amazing custom enclosures Breakfast manufactured in house. But the XTion was still the only one small enough to fit into an exterior position.

This research led to the development of two initial blocks. Cinder-Beckon wraps the Omek Beckon SDK for Cinder. It's very incomplete, as we moved away from the device early in the process. A more complete block (sans skeletal data) was developed for OpenNI. Cinder-XTion is the block I used in this project. While there were a couple of other OpenNI wrappers for Cinder out there, they were built more for the Kinect and had some show-stopper issues.

ANAGRAMS

Breakfast gets full credit for concepting and actually laying out the sensible anagrams which occur in this application. Programming this logic and laying it out was easily the most challenging part of this project. Keep in mind that all the text is actually a particle system.
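To give a feel for what that entails, here's a toy sketch of the core mechanic: each dot is a particle with a target position inside a laid-out glyph, and a damped spring pulls it there so the text settles instead of oscillating. The struct, constants, and integration scheme are illustrative assumptions, not the project's actual code:

```cpp
#include <cmath>

// One dot-particle steering toward its assigned glyph position with a
// damped spring. Thousands of these, each targeting a pixel of rendered
// text, let a particle system "become" readable copy.
struct Particle {
    float x = 0, y = 0;    // current position
    float vx = 0, vy = 0;  // velocity
    float tx = 0, ty = 0;  // target position inside a laid-out letter

    void update(float dt, float stiffness = 60.0f, float damping = 12.0f)
    {
        // spring force toward the target, minus velocity damping
        float ax = stiffness * (tx - x) - damping * vx;
        float ay = stiffness * (ty - y) - damping * vy;
        vx += ax * dt;  x += vx * dt;  // semi-implicit Euler step
        vy += ay * dt;  y += vy * dt;
    }

    bool settled(float eps = 0.5f) const
    {
        return std::fabs(tx - x) < eps && std::fabs(ty - y) < eps;
    }
};
```

User input can then be layered on as an extra force (e.g. repulsion around the silhouette) while the spring keeps trying to restore the layout.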
Telling tens of thousands of dots how to form properly laid out, readable text in multiple positions -- while also responding to user input and having physics properties -- was no small feat. My particle system control experience got a nice boost.

TINTED WINDOWS, PART TWO

Maybe the coolest thing about this screen is the sound. In the studio, it was mesmerizing playing with this thing and getting a very tangible auditory response out of it. The thick, tinted windows at the location presented another challenge beyond blocking our first choice of ToF camera -- they completely silenced the screens. Breakfast enlisted the help of Joseph Fraioli to capture audio from the screen running at various capacities. I put together a little tool to flip over a specified number of dots so we could record exact volumes. I then went to work finding ways to emulate the audio in a way that captured the experience of the real thing as closely as possible. The exhibit now sports a four-channel sound system with intelligent playback that makes the synthetic audio nearly indistinguishable from the real thing to many ears. The software keeps track of how many dots are flipping over and where, placing the right audio in the right place.

SUCCESS

Every site-specific installation comes with its issues. This one threw a lot at us, but it turned out spot on and on time. The opening night went off without a hitch. Post launch, the exhibit was reinforced by a solid up-time system. It's been running properly without intervention for just about the entire lifetime of the project, which ends a couple of days from the time of this article.

Another noteworthy accomplishment of this project is the speed at which the screen runs. We got as high as 45 frames per second (fps) in some tests. The manufacturer expected that we wouldn't be able to get past low single digits. We ended up running at 30 fps. This is well beyond what we hoped for, and even we were surprised when it kept up.
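As a footnote on the audio system described above, the "how many and where" bookkeeping could look something like this: diff consecutive binary frames, tally the flips per screen quadrant, and use those counts to weight the four speaker channels. The function name and quadrant mapping are my assumptions for illustration:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

// Compare consecutive binary frames of the dot screen and tally how many
// dots flipped in each quadrant. The counts can then drive the gain of
// the matching channel in a four-speaker playback system.
std::array<int, 4> countFlipsPerQuadrant(const std::vector<uint8_t>& prev,
                                         const std::vector<uint8_t>& curr,
                                         int width, int height)
{
    std::array<int, 4> flips = {0, 0, 0, 0};
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            size_t i = size_t(y) * width + x;
            if (prev[i] != curr[i]) {
                // 0 = top-left, 1 = top-right, 2 = bottom-left, 3 = bottom-right
                int q = (y < height / 2 ? 0 : 2) + (x < width / 2 ? 0 : 1);
                ++flips[q];
            }
        }
    }
    return flips;
}
```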
I've been really impressed with Breakfast. The team there is as innovative as it gets. They quickly execute what a lot of companies in this field would call too difficult to pursue. It was an honor to work with them!
Hi Stephen,

First -- this is really an awesome project and super impressive. I'm not even much of a TV watcher, but it got me intrigued to check out Perception. The interactivity of it is really innovative and engaging (sorry for the buzzwords, but they seem completely applicable here!).

I'm writing from Omek Interactive and am curious whether you ended up using OpenNI for the project in the end, or going with Beckon and Cinder? We are always looking to improve our solution and are gearing up to release a final version of our Beckon SDK. In addition, we will also have ready a vastly improved version of our Gesture Authoring Tool, which uses machine learning to create gestures (no coding necessary).

Anyways, I would love to hear your thoughts on what you liked / didn't like regarding the SDK, as well as working with the Panasonic camera -- I can pass the valuable information on to our developers! You can reach me offline as well at alona at omek interactive dot com.

Looking forward to hearing from you, and beautiful job again on the project.

Cheers,
Alona
Hey, Alona. I must have missed this message. I used OpenNI + Cinder with the Asus XTion Pro Live.

I loved the D-Imager EKL-3106. It's probably my favorite on the market. Like I mentioned in the article, it was unable to penetrate glass because it doesn't use lasers. I see the advantage to just using LED bulbs (namely, getting the true, raw IR image without the need for patterns). Other than that, I was really impressed with the hardware.

As far as the SDK goes, there were things I did and did not like with Beckon. I like the obvious yet low-level stream access. I really like that it can pick out five skeletons. But I feel it tries to do too much in one shot, resulting in a large dependency even if you are trying to do very simple things. If it were me, I would completely decouple the gesture stuff into its own SDK. This would allow your gesture engine to easily work with other technologies while lightening the dependency load for those who don't need it.