Posts by bengotow

Kinect & GLSL Shaders = Fun!

I’m revisiting the Kinect for my final project. I’m separating the background of an image from the foreground and using GLSL multitexturing shaders in OpenGL to apply effects to the foreground alone.

GLSL shaders work in OpenFrameworks, which is cool. However, there’s a trick that took me about three days to find: by default, ofTextures are allocated using an OpenGL extension that allows non-power-of-two textures. Even if you use a power-of-two texture, the extension is still enabled, and the resulting textures can’t be sampled from GLSL. FML.
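For anyone fighting the same thing, the trick is most likely ofDisableArbTex(), which forces plain GL_TEXTURE_2D textures that a GLSL sampler2D can actually sample. A minimal sketch (OF 007-era API, where images are loaded with loadImage(); the class and file names are just illustrative):

// ofDisableArbTex() must run *before* any texture is allocated; otherwise
// OF creates ARB rectangle textures that a sampler2D can't reference.
#include "ofMain.h"

class ShaderApp : public ofBaseApp {
public:
    ofImage  foreground;
    ofShader shader;

    void setup() {
        ofDisableArbTex();               // force plain GL_TEXTURE_2D
        foreground.loadImage("fg.png");  // hypothetical foreground layer
        shader.load("distort");          // loads distort.vert / distort.frag
    }

    void draw() {
        shader.begin();
        foreground.draw(0, 0);
        shader.end();
    }
};

int main() {
    ofSetupOpenGL(640, 480, OF_WINDOW);
    ofRunApp(new ShaderApp());
}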

The first GLSL shader I wrote distorts the foreground texture layer along the Y axis, using a sine wave to offset the texture lookups for the fragments mapped onto a textured quad.
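The idea looks something like this in GLSL 1.20 (a sketch, not the exact shader; the uniform names are made up, and it assumes a pass-through vertex shader that writes gl_TexCoord[0]):

// distort.frag: offset each fragment's Y lookup by a sine of its X position
uniform sampler2D foreground;  // assumed name for the foreground layer
uniform float time;            // advanced each frame to animate the wave

void main() {
    vec2 uv = gl_TexCoord[0].st;
    uv.y += 0.03 * sin(uv.x * 20.0 + time);  // the Y-axis wobble
    gl_FragColor = texture2D(foreground, uv);
}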

I wrote another shader that blurs the foreground texture using texel averaging. You can see that the background is unaffected by the filter!
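The averaging boils down to something like this (again a sketch, assuming a 3x3 box filter; the background stays sharp simply because it’s drawn without the shader bound):

// blur.frag: average the 3x3 neighborhood of texels around each fragment
uniform sampler2D foreground;
uniform vec2 texelSize;  // (1.0/width, 1.0/height), set from the app

void main() {
    vec4 sum = vec4(0.0);
    for (int x = -1; x <= 1; x++)
        for (int y = -1; y <= 1; y++)
            sum += texture2D(foreground, gl_TexCoord[0].st + vec2(x, y) * texelSize);
    gl_FragColor = sum / 9.0;
}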

Looking Outwards – Final Project

I’m still tossing around ideas for my final project, but I’d like to do more experimentation with the Kinect. Specifically, I think it’d be fun to do some high-quality background subtraction and separate the user from the rest of the scene. I’d like to create a hack in which the user’s body is distorted by a funhouse mirror while the background of the scene remains entirely unaffected. Other tricks, such as pixelating the user’s body or blurring it while keeping everything else intact, could also be fun. The basic idea seems manageable, and I think I’d have some time left over to polish it and add a number of features. I’d like to draw on the auto-calibration code I wrote for my previous Kinect hack so that it’s easy to walk up and interact with the “circus mirror.”

I’ve been searching for about an hour, and it doesn’t look like anyone has done selective distortion of the RGB camera image from the Kinect. I’m thinking something like this:

Imagine how much fun those Koreans would be having if the entire scene looked normal except for their stretched friend. It’s crazy mirror 2.0.

I think background subtraction (and subsequent filling) would be important for this sort of hack, and it looks like progress has been made on this in OpenFrameworks. The video below shows someone cutting themselves out of the Kinect depth image and hiding everything else in the scene.

To achieve the distortion of the user’s body, I’m hoping to do some low-level work in OpenGL. I’ve done some research in this area, and it looks like using a framebuffer and some bump mapping might be a good approach. This article suggests using the camera image as a texture and mapping it onto a bump-mapped “mirror” plane:

Circus mirror and lens effects. Using a texture surface as the rendering target, render a scene (or a subset thereof) from the point of view of a mirror in your scene. Then use this rendered scene as the mirror’s texture, and use bump mapping to perturb the reflection/refraction according to the values in your bump map. This way the mirror could be bent and warped, like a funhouse mirror, to distort the view of the scene.
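In OpenFrameworks terms, the render-to-texture half of that recipe might look something like this (a sketch assuming the ofFbo class from OF 007; the warp itself would live in a fragment shader like the ones above, and every name here is illustrative):

#include "ofMain.h"

class MirrorApp : public ofBaseApp {
public:
    ofFbo    mirror;       // offscreen render target for the "mirror"
    ofShader warp;         // e.g. the sine-distortion shader above
    ofImage  cameraFrame;  // stand-in for the live Kinect RGB feed

    void setup() {
        ofDisableArbTex();
        cameraFrame.loadImage("scene.png");
        warp.load("distort");
        mirror.allocate(640, 480, GL_RGBA);
    }

    void draw() {
        // 1. Render the scene into the offscreen mirror texture.
        mirror.begin();
        ofClear(0, 0, 0, 255);
        cameraFrame.draw(0, 0);
        mirror.end();

        // 2. Draw that texture back through the distortion shader.
        warp.begin();
        mirror.draw(0, 0);
        warp.end();
    }
};

int main() {
    ofSetupOpenGL(640, 480, OF_WINDOW);
    ofRunApp(new MirrorApp());
}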

At any rate, we’ll see how it goes! I’d love to get some feedback on the idea. It seems like something I could get going pretty quickly, so I’m definitely looking for possible extensions / features that might make it more interesting!

Explorations of Varnish

For the last couple of days, I’ve been working on a Sinatra-based web service for resizing images. You hit the service with an image URL and a desired size, and it uses the high-speed image library VIPS to convert the image to the appropriate size. To cache the resized images, the Populr team decided to use Varnish. Varnish makes it painless to cache HTTP responses by proxying requests to your web service and returning cached data when it’s available.
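For a resize service like this, the VCL itself can stay tiny. A sketch of what it might look like (hypothetical host and port, Varnish 2.x syntax; not the config EngineYard’s recipe generates):

# app.vcl (sketch): proxy everything to the resize service and cache results
backend resizer {
  .host = "127.0.0.1";   # hypothetical: the Sinatra resizer
  .port = "8080";        # hypothetical port
}

sub vcl_recv {
  set req.backend = resizer;
}

sub vcl_fetch {
  set beresp.ttl = 24h;  # keep resized images cached for a day
}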

EngineYard provides a Chef script for configuring Varnish that automatically configures it based on the size of your instance. However, when I activated the Chef script and started a deployment, the instance wouldn’t spin up. On the machine, /var/log/syslog showed that Varnish was failing to spawn a child process:

Jun 28 12:21:28  varnishd[12518]: Pushing vcls failed: CLI communication error (hdr)
Jun 28 12:21:28 varnishd[12518]: Child (12521) died signal=16
Jun 28 12:21:28 varnishd[12518]: Child (-1) said
Jun 28 12:21:28 varnishd[12518]: Child (-1) said Child starts

It turns out that Varnish is built with 64-bit architectures in mind, and the default settings assume a 64-bit stack. On his blog, Kristian Lyngstol noted:

Varnish works on 32-bit, but was designed for 64bit. It’s all about virtual memory: Things like stack size suddenly matter on 32bit. If you must use Varnish on 32-bit, you’re somewhat on your own. However, try to fit it within 2GB. I wouldn’t recommend a cache larger than 1GB, and no more than a few hundred threads… (Why are you on 32bit again?)

Antonio Carpentieri wrote a great blog post on trying to start Varnish on a 32-bit Amazon instance, and found that the problem is the sess_workspace configuration parameter. Its default value, 262144 (256k), is too large to fit on a 32-bit stack. He suggests setting it to only 16384 (16k) and also changing thread_pool_stack to 64k to prevent problems with Varnish spawning child processes. When using the Varnish Chef recipe, these changes need to be made in varnishd.monitrc.erb, which contains the commands monit uses to start and stop Varnish. The configuration parameters are conveniently passed as arguments to the start command, so it’s easy to edit them.

check process varnish_80
  with pidfile /var/run/varnish.80.pid
  start program = "/usr/sbin/varnishd -a :<%= @varnish_port %> -T 127.0.0.1:6082 -s <%= @cache %> -f /etc/varnish/app.vcl -P /var/run/varnish.80.pid -u nobody -g nobody -p obj_workspace=4096 -p sess_workspace=262144 -p listen_depth=2048 -p overflow_max=<%= @overflow_max %> -p ping_interval=2 -p log_hashstring=off -h classic,5000009 -p thread_pool_max=<%= @thread_pool_max %> -p lru_interval=60 -p esi_syntax=0x00000003 -p sess_timeout=10 -p thread_pools=<%= @thread_pools %> -p thread_pool_min=100 -p shm_workspace=32768 -p thread_pool_add_delay=1"
  stop program = "/usr/bin/pkill -KILL varnish"
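With Carpentieri’s values in place, only the affected flags in that start command need to change, roughly like this (a sketch of just the relevant fragment; sess_workspace is lowered and thread_pool_stack is added, assuming varnishd accepts the size in bytes):

/usr/sbin/varnishd ... -p sess_workspace=16384 -p thread_pool_stack=65536 ...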

Rather than pursue these changes and handicap Varnish on a 32-bit server, we decided to create a custom build of the VIPS static library for EngineYard’s 64-bit high-CPU instances. This turned out to be a bit of a hassle, but EngineYard’s default Varnish Chef recipe ran without a hitch on the 64-bit machine.

Looking Outwards – Kinect Edition

So I’ve been thinking a lot about what I want to do with the Kinect. I got one for Christmas, and I still haven’t had time to do much with it. I’m a huge music person, and I’d like to create an interactive audio visualizer that takes input from body movement instead of perceivable audio qualities (volume, frequency waveforms, etc.). I think that using gestural input from a person dancing, conducting, or otherwise rocking out to music would provide a much more natural input, since it would accurately reflect the individual’s response to the audio. I can imagine pointing a Kinect at a club full of dancing people and using their movement to drive a wall-sized visualization. It’d be a beautifully human representation of the music.

I’ve been Googling to see if anyone is doing something like this already, and I haven’t been able to find anything really compelling. People have wired the Kinect through TUIO to drive fluid systems and particle emitters, but not for the specific purpose of representing a piece of music. I don’t find these very impressive, because they really dumb down the rich input from the Kinect: they treat the users’ hands as blobs, find their centers, and use those as multitouch points. It must be possible to do something more than that. But I haven’t tried yet, and I want everything to be real-time, so maybe not ;-)

Here are a few visual styles I’ve been thinking of trying to reproduce. The first is a bleeding long-exposure effect that was popularized by the iPod commercials a few years ago. Though it seems most people are doing this in After Effects, I think I can do it in OpenGL or maybe Processing:

This is possibly the coolest visualization I’ve seen in a while. However, it was done in 3D Studio Max with the Krakatoa plugin, and everything was painstakingly hand-scripted into a particle system. I love the way the light shoots through the particles (check out 0:16), though. I’d like to create something where the user’s hands are light sources… It’d be incredibly slick.

I’m not sure how to approach implementing something like this, and I’m still looking for existing platforms that can give me a leg up. I have significant OpenGL experience and I’ve done fluid dynamics using Jos Stam’s Navier-Stokes equation solver, so I could fuse that with a custom renderer to get this done. But I’d like to focus on the art and the input and let something else handle the graphics, so suggestions are welcome!

Looking Outwards – Generative Art

I’m a huge fan of Dave Bollinger’s work “Density” (http://www.davebollinger.com/works/density/). He blends computer programming with traditional mediums, and some of his generative works are done in a woodblock style that I think looks pretty cool. Unfortunately, he doesn’t document his process very much.

There’s a service online called DNA11 (www.dna11.com) that produces generative art from DNA. You submit a small DNA sample, and they run a PCR on it, colorize the result, and enlarge it onto a big canvas. I think it’s a really cool form of generative art because it’s completely personalized.

I think it’d be fun to use this assignment to create an art piece I can hang in my apartment (my walls are looking pretty bare right now…) so I’ve been focusing on generative art that creates static images. I found the work of Marius Watz pretty interesting because he uses code to produce large wall-sized artworks that are visually intriguing and have a lot of originality from piece to piece, while retaining a sense of unity among the set. You can browse the collection of final images here: http://systemc.unlekker.net/showall.php?id=SystemC_050114_150004_04.

Cool Computational Art

The Graffiti Analysis project by Evan Roth makes an effort to capture the motion of graffiti in an artistic fashion. I’m interested in using the Kinect to capture hand gestures representative of audio, and I think this is a really cool visualization of gestural input. The way velocity information is presented as thin rays is visually appealing. I think it would be more interesting if the project incorporated color, though, since real graffiti communicates with the viewer through color as well as shape.

Cosmogramma Fieldlines is an interactive music visualization created in OpenFrameworks by Aaron Meyers for the release of an album by the band Flying Lotus. I really like the project’s steampunk, ink-and-paper graphic design, and I like the way the lines radiating from the object in the center converge around the “planets.” I think it’d be cool to change the interaction so the user could “strum” or otherwise manipulate the radial lines instead of the planets, though that might be harder to do.

This project, called “Solar Rework”, is a fantastic visualization of audio that uses blobs, bright colors, and glassy “waves” to represent audio data. I think it’s cool because it visually conveys the idea that the sound is “washing over” the blobs in the scene. I really don’t have any complaints about this one, except that I wish there were source code I could download and try out myself.

http://www.turbulence.org/Works/song/mono.html

The Shape of Song is a way of visualizing music that reveals repetition within a track. It’s an interesting way of profiling a song and revealing the underlying data, and the implementation uses arcs to form some pretty cool-looking shapes. Unfortunately, the visualization is static: when I ran it for the first time, I expected the visualization to be generated as I listened to the song, and I was a little disappointed that it was already there.

Text Rain

I implemented the Text Rain exercise in Processing, using code from the Background Subtraction example at Processing.org for the underlying detection of objects in the scene.
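The core of that example is frame differencing against a stored background image. A minimal reconstruction of the idea (not the Processing.org sample verbatim; the threshold value is arbitrary):

import processing.video.*;

Capture video;
PImage bg;           // snapshot of the empty scene
int threshold = 40;  // brightness difference that counts as "foreground"

void setup() {
  size(640, 480);
  video = new Capture(this, width, height);
  video.start();     // Processing 2+; earlier versions start automatically
}

void keyPressed() {
  bg = video.get();  // press any key to grab the current frame as background
}

void draw() {
  if (video.available()) video.read();
  if (bg == null) {
    image(video, 0, 0);  // no background captured yet; just show the camera
    return;
  }
  video.loadPixels();
  bg.loadPixels();
  loadPixels();
  for (int i = 0; i < pixels.length; i++) {
    // A pixel is foreground when it differs enough from the stored background.
    float diff = abs(brightness(video.pixels[i]) - brightness(bg.pixels[i]));
    pixels[i] = (diff > threshold) ? color(0) : color(255);
  }
  updatePixels();
}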

Learning Processing – Schotter

In Processing.js:

//Info: http://processingjs.org/reference

void setup() {
  size(404, 730);

  int rows = 22;
  int cols = 12;
  int count = cols * rows;
  int rect_width = 384 / cols;  // square cells filling a 384px-wide grid
  int rect_height = rect_width;

  smooth();
  translate(10, 10);
  background(#ffffff);
  noFill();

  for (int ii = 0; ii < count; ii++) {
    int origin_x = (ii % cols) * rect_width;
    int origin_y = floor(ii / cols) * rect_height;

    // Disorder increases toward the bottom of the grid, as in Nees' original.
    float randomness = (float)ii / (float)count;
    float rand_rad = (random(2) - 1) * randomness * randomness;
    float rand_x = (random(8) - 4) * randomness * randomness;
    float rand_y = (random(8) - 4) * randomness * randomness;

    translate(origin_x, origin_y);
    rotate(rand_rad);
    rect(rand_x, rand_y, rect_width, rect_height);
    rotate(-rand_rad);
    translate(-origin_x, -origin_y);
  }
}

void draw() {
}

As a Java applet:

As code:

size(404, 730);

int rows = 22;
int cols = 12;
int count = cols * rows;
int rect_width = 384 / cols;
int rect_height = rect_width;

smooth();
translate(10, 10);
background(#ffffff);
noFill();

for (int ii = 0; ii < count; ii++) {
  int origin_x = (ii % cols) * rect_width;
  int origin_y = floor(ii / cols) * rect_height;

  float randomness = (float)ii / (float)count;
  float rand_rad = (random(2f) - 1f) * randomness * randomness;
  float rand_x = (random(8f) - 4f) * randomness * randomness;
  float rand_y = (random(8f) - 4f) * randomness * randomness;

  translate(origin_x, origin_y);
  rotate(rand_rad);
  rect(rand_x, rand_y, rect_width, rect_height);
  rotate(-rand_rad);
  translate(-origin_x, -origin_y);
}

A video demonstrating it in OpenFrameworks as well: