Posts tagged “shaders”

Kinect Fun House Mirror


A Kinect hack that performs real-time body detection, cuts an individual person out of the Kinect video feed, distorts them using GLSL shaders, and pastes them back into the image using OpenGL multitexturing, blending them seamlessly with the other people in the scene.

It’s a straightforward concept, but the possibilities are endless. Pixelate your naked body and taunt your boyfriend over video chat. Turn yourself into a “hologram” and tell the people around you that you’ve come from the future and demand beer. Using only your Kinect and a pile of GLSL shaders, you can create a wide array of effects.

This hack relies on the PrimeSense framework, which provides the scene analysis and body detection algorithms used in the Xbox. I initially wrote my own blob-detection code for this project, but it was slow and placed constraints on the visualization. It required that people’s bodies intersect the bottom of the frame, and it could only detect the front-most person. It assumed that the user could be differentiated from the background in the depth image, and it barely pulled 30 fps.

After creating implementations in both Processing (for early tests) and OpenFrameworks (for better performance), I stumbled across this video online: The video shows the PrimeSense framework tracking several people in real-time, providing just the kind of blob identification I was looking for. Though PrimeSense was originally licensed to Microsoft for a hefty fee, it has since become open-source, and I was able to download and compile the library from the PrimeSense website. Their examples worked as expected, and I was able to get the visualization up and running on top of their high-speed scene analysis algorithm in no time.

However, once things were working in PrimeSense, there was still a major hurdle. I wanted to use the depth image data as a mask for the color image and “cut” a person from the scene. Unfortunately, the depth and color cameras on the Kinect aren’t perfectly calibrated and their images don’t overlap. The depth camera is to the right of the color camera, and the two have different lens properties, so you can’t assume that pixel (10, 10) in the color image represents the same point in space as pixel (10, 10) in the depth image. Luckily, Max Hawkins let me know that OpenNI can perform the corrective distortions needed to align the image from the Kinect’s color camera with the image from the depth camera, adjusting for the lens properties of the device so that one image overlays the other. I struggled for days to get it to work, but Max was a tremendous help and pointed me toward these five lines of code, buried deep inside one of the sample projects (and commented out!).

    // Align depth and image generators
    printf("Trying to set alt. viewpoint");
    if( g_DepthGenerator.IsCapabilitySupported(XN_CAPABILITY_ALTERNATIVE_VIEW_POINT) )
    {
        printf("Setting alt. viewpoint");
        g_DepthGenerator.GetAlternativeViewPointCap().ResetViewPoint();
        if( g_ImageGenerator ) g_DepthGenerator.GetAlternativeViewPointCap().SetViewPoint( g_ImageGenerator );
    }

Alignment problem, solved. After specifying an alternative viewpoint, I was able to mask the color image with a blob from the depth image and get the color pixels for the user’s body. Next step, distortion! Luckily, I started this project with a fair amount of OpenGL experience. I’d never worked with shaders, but I found them easy to pick up and fun to work with (since they’re compiled at run-time, it was easy to write and test them iteratively). I wrote shaders that performed pixel averaging and used sine functions to re-map texcoords in the cut-out image, producing interesting wave-like effects and blockiness. I’m no expert, and I think these shaders could be improved quite a bit by using multiple passes and optimizing the order of operations.
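
To give a sense of what these shaders look like, here’s a minimal sketch of a sine-based texcoord remap in the GLSL 1.20 style used with OpenGL 2.x at the time. It isn’t the project’s actual shader, and the uniform names (cutout, amplitude, frequency, time) are illustrative.

    // Hypothetical fragment shader (stored as a C string for run-time
    // compilation). Each row of the cut-out texture is shifted
    // horizontally by a sine of its vertical position, producing the
    // wave-like effect described above.
    const char* kWaveFragmentShader =
        "#version 120\n"
        "uniform sampler2D cutout;    // cut-out body pixels (RGBA)\n"
        "uniform float time;          // seconds, animates the wave\n"
        "uniform float amplitude;     // e.g. 0.02, in texcoord units\n"
        "uniform float frequency;     // e.g. 30.0\n"
        "void main() {\n"
        "    vec2 uv = gl_TexCoord[0].st;\n"
        "    uv.s += amplitude * sin(uv.t * frequency + time);\n"
        "    gl_FragColor = texture2D(cutout, uv);\n"
        "}\n";

The blocky looks come from the same idea: instead of adding a sine offset, snap uv to a coarse grid before sampling.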

Since many distortions and image effects turn the user transparent or move their body parts, I found that it was important to fill in the pixels behind the user in the image. I accomplished this using a “deepest-pixels” buffer that keeps track of the color of the most distant sample seen at each pixel in the image. These pixels are substituted in where the image is cut out, and updated any time deeper pixels are found.
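
Here’s a rough sketch of how such a buffer can be maintained, assuming 640×480 frames with depth in millimeters and packed RGB color. The structure and names are illustrative, not the project’s actual code.

    #include <cstdint>
    #include <cstring>
    #include <vector>

    // "Deepest-pixels" background cache: remembers the color of the
    // furthest sample ever seen at each pixel, which approximates the
    // scene background once people have moved around a bit.
    struct BackgroundCache {
        static const int W = 640, H = 480;
        std::vector<uint16_t> maxDepth; // furthest depth seen per pixel (mm)
        std::vector<uint8_t>  bgColor;  // RGB of that furthest sample

        BackgroundCache() : maxDepth(W * H, 0), bgColor(W * H * 3, 0) {}

        // Call once per frame with the aligned depth + color images.
        void update(const uint16_t* depth, const uint8_t* rgb) {
            for (int i = 0; i < W * H; ++i) {
                if (depth[i] != 0 && depth[i] > maxDepth[i]) { // 0 = no reading
                    maxDepth[i] = depth[i];
                    std::memcpy(&bgColor[i * 3], &rgb[i * 3], 3);
                }
            }
        }

        // Substitute the cached background color into a cut-out pixel.
        void fill(uint8_t* rgb, int i) const {
            std::memcpy(&rgb[i * 3], &bgColor[i * 3], 3);
        }
    };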

Here’s a complete breakdown of the image analysis process:

1. The color and depth images are read off the Kinect. OpenNI is used to align the depth and color images, accounting for the slight difference in lenses and placement that would otherwise cause pixels in the depth image to be misaligned with pixels in the color image.
2. The depth image is run through the PrimeSense Scene Analyzer, which provides an additional channel of data for each pixel in the depth buffer, identifying it as a member of one of the unique bodies in the scene. In the picture at left, these are rendered in red and blue.
3. One of the bodies is selected, and its pixels are cut from the primary color buffer into a separate texture buffer.
4. The depth of each pixel in the remaining image is compared to the furthest known depth, and deeper pixels are copied into a special “most-distant” buffer. This buffer contains the RGB color of the furthest pixel seen at each point in the scene, effectively keeping a running copy of the scene background.
5. The pixels in the body are replaced with pixels from the “most-distant” buffer, effectively erasing the individual from the scene.
6. A texture is created from the cut-out pixels and passed into a GLSL shader along with the background image.
7. The GLSL shader performs distortions and other effects on the cut-out image before recompositing it onto the background image (a sketch of such a compositing shader follows this list).
8. Final result!
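
For steps 6 and 7, the recomposite can be done in a single fragment shader with two texture units. The sketch below is hypothetical (the uniform names background and cutout are mine, and the distortion step is omitted), but it shows the basic multitexturing blend: wherever the cut-out texture has alpha, its pixels win; everywhere else the background shows through.

    // Hypothetical compositing pass: draw a full-screen quad with both
    // textures bound, and let the cut-out's alpha channel act as the mask.
    const char* kCompositeFragmentShader =
        "#version 120\n"
        "uniform sampler2D background; // scene with the person erased (step 5)\n"
        "uniform sampler2D cutout;     // distorted body pixels, alpha = mask\n"
        "void main() {\n"
        "    vec2 uv = gl_TexCoord[0].st;\n"
        "    vec4 bg = texture2D(background, uv);\n"
        "    vec4 fg = texture2D(cutout, uv);\n"
        "    gl_FragColor = mix(bg, fg, fg.a);\n"
        "}\n";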

Here’s a video of the Kinect Fun House Mirror at the IACD 2011 Showcase:

GLSL & The Kinect – Part 2

For the last couple weeks, I’ve been working on a Kinect hack that performs body detection and extracts individuals from the scene, distorts them using GLSL shaders, and pastes them back into the scene using OpenGL multitexturing. The concept is relatively straightforward. Blob detection on the depth image determines the pixels that are part of each individual. The color pixels within the body are copied into a texture, and the non-interesting parts of the image are copied into a second background texture. Since distortions are applied to the bodies in the scene, the holes they leave in the background image need to be filled. To accomplish this, the most distant pixel at each point is cached from frame to frame and substituted in when body blobs are cut out.

It’s proved difficult to pull out the bodies in color. Because the depth camera and the color camera in the Kinect do not align perfectly, using a depth-image blob as a mask for the color image does not work. On my Kinect, the mask region was off by more than 15 pixels, and color pixels flagged as belonging to a blob might actually be part of the background.

To fix this, Max Hawkins pointed me in the direction of a Cinder project which used OpenNI to correct the perspective of the color image to match the depth image. Somehow, that impressive feat of computer imaging is accomplished with these five lines of code:

    // Align depth and image generators
    printf("Trying to set alt. viewpoint");
    if( g_DepthGenerator.IsCapabilitySupported(XN_CAPABILITY_ALTERNATIVE_VIEW_POINT) )
    {
        printf("Setting alt. viewpoint");
        g_DepthGenerator.GetAlternativeViewPointCap().ResetViewPoint();
        if( g_ImageGenerator ) g_DepthGenerator.GetAlternativeViewPointCap().SetViewPoint( g_ImageGenerator );
    }

I hadn’t used Cinder before, and I decided to migrate the project to it since it seemed to be a much more natural environment for working with GLSL shaders. Unfortunately, the Kinect OpenNI drivers in Cinder seemed to be crap compared to the ones in OpenFrameworks et al. The console often reported that the “depth buffer size was incorrect” and that the “depth frame is invalid”. Onscreen, the image from the camera flashed, and frames occasionally appeared misaligned or half missing.

I continued fighting with Cinder until last night, when at 10PM I found this video in an online forum:

This video is intriguing because it shows the real-time detection and unique identification of multiple people with no configuration. AKA it’s hot shit. It turns out the video was made with PrimeSense, the technology used for hand / gesture / person detection on the Xbox.

I downloaded PrimeSense and compiled the samples. Behavior in the above video achieved. The scene analysis code is incredibly fast and highly robust. It kills the blob detection code I wrote performance-wise, and doesn’t require that people’s legs intersect with the bottom of the frame (the technique I was using assumed the nearest blob intersecting the bottom of the frame was the user.)

I re-implemented the project in C++ on top of the PrimeSense sample. I migrated the depth+color alignment code over from Cinder, built a background cache, and rebuilt the display on top of a GLSL shader. Since I was just using Cinder to wrap OpenGL shaders, I decided it wasn’t worth linking it into the sample code. It’s 8 source files and it compiles on the command line. It was ungodly fast. I was in love.

Rather than apply an effect to all the individuals in the scene, I decided it was more interesting to distort one. Since the PrimeSense library assigns each blob a unique identifier, this was an easy task. The video below shows the progress so far. Unfortunately, it doesn’t show off the frame rate, which is a cool 30 or 40fps.
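
Selecting that one person is just a matter of comparing labels. Here’s an illustrative C++ sketch, assuming the scene analysis step hands back a per-pixel label map (one 16-bit body ID per pixel, 0 for background) alongside the aligned color image; the function and parameter names are hypothetical.

    #include <cstdint>
    #include <cstring>

    // Copy one tracked person's pixels into an RGBA cut-out texture,
    // using the per-pixel body IDs from the scene analyzer as the mask.
    void cutOutUser(const uint8_t*  rgb,        // aligned color image, RGB
                    const uint16_t* labels,     // per-pixel body IDs (0 = none)
                    uint16_t        targetUser, // the blob chosen for distortion
                    uint8_t*        cutoutRGBA, // output: RGBA, alpha is the mask
                    int width, int height)
    {
        for (int i = 0; i < width * height; ++i) {
            if (labels[i] == targetUser) {
                std::memcpy(&cutoutRGBA[i * 4], &rgb[i * 3], 3);
                cutoutRGBA[i * 4 + 3] = 255;            // inside the body: opaque
            } else {
                std::memset(&cutoutRGBA[i * 4], 0, 4);  // elsewhere: transparent
            }
        }
    }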

My next step is to try to improve the edge of the extracted blob and create more interesting shaders that blur someone in the scene or convert them to “8-bit”. Stay tuned!

Generative Art in Processing

I threw around a lot of ideas for this assignment. I wanted to create a generative art piece that was static and large–something that could be printed on canvas and placed on a wall. I also wanted to revisit the SMS dataset I used in my first assignment, because I felt I hadn’t sufficiently explored it. I eventually settled on modeling something after this “Triangles” piece on OpenProcessing. It seemed relatively simple and it was very abstract.

I combined the concept from the Triangles piece with code that scored characters in a conversation based on the likelihood that they would follow the previous characters. This was accomplished by generating a Markov chain and a character frequency table using combinations of two characters pulled from the full text of 2,500 text messages. The triangles generated to represent the conversation were colorized so that more likely characters were shown inside brighter triangles.
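
The scoring itself is simple. The original code was written in Processing, but here’s an illustrative C++ sketch of the idea: count how often each character follows each two-character prefix in the SMS corpus, then score a character by its relative frequency after that prefix (the score drives the triangle brightness).

    #include <map>
    #include <string>

    // Rough sketch of the likelihood scoring; names and structure are
    // illustrative, not the original Processing code.
    struct BigramModel {
        std::map<std::string, std::map<char, int> > counts; // prefix -> next-char counts
        std::map<std::string, int> totals;                  // prefix -> total observations

        void train(const std::string& text) {
            for (std::string::size_type i = 2; i < text.size(); ++i) {
                std::string prefix = text.substr(i - 2, 2);
                counts[prefix][text[i]] += 1;
                totals[prefix] += 1;
            }
        }

        // Likelihood (0..1) that 'next' follows 'prefix'.
        float score(const std::string& prefix, char next) const {
            std::map<std::string, int>::const_iterator t = totals.find(prefix);
            if (t == totals.end() || t->second == 0) return 0.0f;
            std::map<std::string, std::map<char, int> >::const_iterator c = counts.find(prefix);
            std::map<char, int>::const_iterator n = c->second.find(next);
            return n == c->second.end() ? 0.0f : float(n->second) / float(t->second);
        }
    };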

Process:

I started by printing out part of an SMS conversation, with each character drawn within a triangle. The triangles were colorized based on whether the message was sent or received, and the brightness of each letter was modulated based on the likelihood that the characters would be adjacent to each other in a typical text message.

In the next few revisions, I decided to move away from simple triangles and make each word in the conversation a single unit. I also added some code that seeds the colors used in the visualization based on properties of the conversation, such as its length.

Final output – click to enlarge!

Kinect & GLSL Shaders = Fun!

I’m revisiting the Kinect for my final project. I’m separating the background of an image from the foreground and using OpenGL GLSL Multitexturing Shaders to apply effects to the foreground.

GLSL shaders work in OpenFrameworks, which is cool. However, there’s a trick that took me about three days to find. By default, ofTexture uses an OpenGL extension that allows for non-power-of-two textures. Even if you use a power-of-two texture, the extension is still enabled and allocates textures that can’t be referenced from GLSL. FML.
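
The post doesn’t name the call, but the usual openFrameworks fix for this (and my assumption about the trick here) is to disable ARB rectangle textures before allocating anything, so ofTexture falls back to plain GL_TEXTURE_2D textures that a GLSL sampler2D can read:

    #include "ofMain.h"

    // Minimal sketch: call ofDisableArbTex() before any texture is
    // allocated so that subsequent ofTextures use GL_TEXTURE_2D
    // (normalized texcoords) instead of GL_TEXTURE_RECTANGLE_ARB.
    class testApp : public ofBaseApp {
    public:
        ofTexture foreground;

        void setup() {
            ofDisableArbTex();                     // must come before allocate()
            foreground.allocate(512, 512, GL_RGB); // power-of-two GL_TEXTURE_2D
        }
    };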

The first GLSL shader I wrote distorted the foreground texture layer on the Y axis using a sine wave to adjust the image fragments that are mapped onto a textured quad.

I wrote another shader that blurs the foreground texture using texel averaging. You can see that the background is unaffected by the filter!
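
For reference, a texel-averaging blur of this sort can be as simple as a box filter over a small neighborhood. This is a hedged sketch rather than the shader used here; texelSize would be (1/width, 1/height) of the foreground texture.

    // Hypothetical 5x5 box-blur fragment shader for the foreground layer.
    const char* kBlurFragmentShader =
        "#version 120\n"
        "uniform sampler2D foreground;\n"
        "uniform vec2 texelSize; // (1.0/width, 1.0/height)\n"
        "void main() {\n"
        "    vec2 uv = gl_TexCoord[0].st;\n"
        "    vec4 sum = vec4(0.0);\n"
        "    for (int x = -2; x <= 2; x++) {\n"
        "        for (int y = -2; y <= 2; y++) {\n"
        "            sum += texture2D(foreground, uv + vec2(x, y) * texelSize);\n"
        "        }\n"
        "    }\n"
        "    gl_FragColor = sum / 25.0; // average of the 25 samples\n"
        "}\n";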

Looking Outwards – Final Project

I’m still tossing around ideas for my final project, but I’d like to do more experimentation with the Kinect. Specifically, I think it’d be fun to do some high-quality background subtraction and separate the user from the rest of the scene. I’d like to create a hack in which the user’s body is distorted by a fun house mirror, while the background in the scene remains entirely unaffected. Other tricks, such as pixelating the user’s body or blurring it while keeping everything else intact, could also be fun. The basic idea seems manageable, and I think I’d have some time left over to polish it and add a number of features. I’d like to draw on the auto-calibration code I wrote for my previous Kinect hack so that it’s easy to walk up and interact with the “circus mirror.”

I’ve been searching for about an hour, and it doesn’t look like anyone has done selective distortion of the RGB camera image off the Kinect. I’m thinking something like this:

Imagine how much fun those Koreans would be having if the entire scene looked normal except for their stretched friend. It’s crazy mirror 2.0.

I think background subtraction (and subsequent filling) would be important for this sort of hack, and it looks like progress has been made on this in OpenFrameworks. The video below shows someone cutting themselves out of the Kinect depth image and then hiding everything else in the scene.

To achieve the distortion of the user’s body, I’m hoping to do some low-level work in OpenGL. I’ve done some research in this area and it looks like using a framebuffer and some bump mapping might be a good approach. This article suggests using the camera image as a texture and then mapping it onto a bump mapped “mirror” plane:

Circus mirror and lens effects. Using a texture surface as the rendering target, render a scene (or a subset thereof) from the point of view of a mirror in your scene. Then use this rendered scene as the mirror’s texture, and use bump mapping to perturb the reflection/refraction according to the values in your bump map. This way the mirror could be bent and warped, like a funhouse mirror, to distort the view of the scene.

At any rate, we’ll see how it goes! I’d love to get some feedback on the idea. It seems like something I could get going pretty quickly, so I’m definitely looking for possible extensions / features that might make it more interesting!