Debouncing Method Dispatch in Objective-C

I’ve been doing a lot of Node.js programming recently, and Javascript / Coffeescript programming style is starting to boil over into the way I think about Objective-C. One of my favorite practices in Javascript is the concept of deferring, throttling, or debouncing method calls. There are tons of uses for this. For example, let’s say your app is updating a model object, and you want to persist that model object when the updates are complete. Unfortunately, the model is touched in several bits of code one after another, and your model gets saved to disk twice. Or three times. Or four times. All in one pass through the run loop.

It’s pretty easy to defer or delay the execution of an Objective-C method using performSelector:withObject:afterDelay: or any number of other techniques. Debouncing (running a method just once in the next pass through the run loop, no matter how many times it’s been called) is a bit trickier. In the case described above, though, it’s perfect. Touch the model all you want, call “save” a dozen times, and in the next pass through the run loop, it gets saved just once.

Here’s how I implemented a new debounce method “performSelectorOnMainThreadOnce”:

#import <objc/runtime.h>

@implementation NSObject (AssociationsAndDispatch)

- (void)associateValue:(id)value withKey:(void *)key
{
    objc_setAssociatedObject(self, key, value, OBJC_ASSOCIATION_RETAIN);
}

- (void)weaklyAssociateValue:(id)value withKey:(void *)key
{
    objc_setAssociatedObject(self, key, value, OBJC_ASSOCIATION_ASSIGN);
}

- (id)associatedValueForKey:(void *)key
{
    return objc_getAssociatedObject(self, key);
}

- (void)performSelectorOnMainThreadOnce:(SEL)selector
{
    // Flag this selector as pending. Calling this method again before the
    // next pass through the run loop just re-sets the same flag.
    [self associateValue:[NSNumber numberWithBool:YES] withKey:(void *)selector];

    dispatch_async(dispatch_get_main_queue(), ^{
        // Only the first queued block finds the flag still set; it performs
        // the selector and clears the flag, so the blocks queued by any
        // additional calls do nothing.
        if ([self associatedValueForKey:(void *)selector]) {
            [self performSelector:selector];
            [self associateValue:nil withKey:(void *)selector];
        }
    });
}

@end
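
Usage is a one-liner. Wherever the model is touched, call:

[model performSelectorOnMainThreadOnce:@selector(save)];

No matter how many of these calls pile up, save runs just once in the next pass through the run loop (model here is any NSObject, and save is your own method).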

A more advanced version of this method would allow you to say “perform this selector just once in the next 100 msec” rather than performing it in the next iteration through the run loop. Anybody want to take a stab at that?
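
Here’s one possible stab at it: a minimal sketch built on Foundation’s cancelable delayed performs (the method name is my own invention). Note that it’s a trailing-edge debounce; each call pushes the timer back, so the selector fires once calls stop arriving for the full delay:

- (void)performSelectorOnce:(SEL)selector afterDelay:(NSTimeInterval)delay
{
    // Cancel any not-yet-fired request for this selector, then reschedule.
    // N calls within `delay` collapse into a single invocation.
    [NSObject cancelPreviousPerformRequestsWithTarget:self
                                             selector:selector
                                               object:nil];
    [self performSelector:selector withObject:nil afterDelay:delay];
}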

stringWithFormat: is slow. Really slow.

I’m working on a project that makes extensive use of NSDictionaries. Buried deep in the model layer, there are dozens of calls to stringWithFormat: used to create dictionary keys. Here’s a quick example:

- (CGRect)rect:(NSString*)name inDict:(NSDictionary*)dict
{
    float x = [[dict objectForKey:[NSString stringWithFormat:@"%@@0", name]] floatValue];
    float y = [[dict objectForKey:[NSString stringWithFormat:@"%@@1", name]] floatValue];
    float w = [[dict objectForKey:[NSString stringWithFormat:@"%@@2", name]] floatValue];
    float h = [[dict objectForKey:[NSString stringWithFormat:@"%@@3", name]] floatValue];
    return CGRectMake(x, y, w, h);
}

In this example, I’m using stringWithFormat: in a simple way. To read the four values of the rect ‘frame’ from the dictionary, it creates the keys frame@0, frame@1, frame@2, and frame@3. Because of the way my app works, I call stringWithFormat: to create strings like this a LOT. In complex situations, to the tune of 20,000 times a second.

I was using Instruments to identify bottlenecks in my code and quickly discovered that stringWithFormat: was responsible for more than 40% of the time spent in the run loop. In an attempt to optimize, I switched to sprintf instead of stringWithFormat. The result was incredible. The code below is nearly 10x faster, and made key creation a negligible task:

- (NSString*)keyForValueAtIndex:(int)index inPropertySet:(NSString*)name
{
    // Builds the same key stringWithFormat: would produce for @"%@@%d",
    // but faster by a factor of 10x.
    char cString[25];
    snprintf(cString, sizeof(cString), "@%d", index);
    NSString* s = [[[NSString alloc] initWithUTF8String:cString] autorelease];
    return [name stringByAppendingString:s];
}

It’s worth mentioning that I’ve since refactored the app to avoid saving structs this way entirely (NSValue is, uh, the obvious solution), but I felt this was worth posting anyway, since you might not be able to refactor the way I did.
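
For reference, here’s a minimal sketch of the NSValue approach, assuming UIKit’s NSValue geometry additions (the @"frame" key is purely illustrative):

// Store the whole struct under a single key...
NSMutableDictionary* dict = [NSMutableDictionary dictionary];
[dict setObject:[NSValue valueWithCGRect:CGRectMake(0, 0, 320, 480)] forKey:@"frame"];

// ...and read it back with no string building at all.
CGRect frame = [[dict objectForKey:@"frame"] CGRectValue];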

Pitfalls of the Coffeescript Syntax

I’ve been using Coffeescript extensively over the last few weeks. Overall, I love it. The flexibility of the language and the clean, minimalist syntax occasionally cause problems, though. In this blog post, I’ve documented some of the pitfalls that have cost me hours of time. Each of these problems is the result of user error, but each is also largely the result of language design choices. Hopefully reading this will save you from these pitfalls!

1. Optional Function Parentheses and Spacing
The low down: a - 1 is math, a -1 is a function call.
Advice: Always place spaces on both sides of mathematical operators, and think about whether your code could be interpreted as an argument list.

Example Coffeescript:

a = 1
if (a - 1 == 0)
    b = 2
if (a-1 == 0)
    b = 2
if (a -1 == 0)
    b = 2

Resulting Javascript:

a = 1;
if (a - 1 === 0) {
    b = 2;
}
if (a - 1 === 0) {
    b = 2;
}
if (a(-1 === 0)) {
    b = 2;
}

2. Safely Iterating Backwards
The low down: Converting a C-style for loop to for i in [array.length - 1 .. 0] produces unexpected behavior when array.length is 0.
Advice: When you intend to iterate backwards, add by -1 to the end of your for statement to ensure that you never iterate from [-1..0].

Example Coffeescript:

# Bad - calls array[-1] if array.length == 0
for x in [array.length - 1..0]
    array.splice(x, 1) if array[x] == true
 
# Good - no iteration occurs if array.length == 0
for x in [array.length - 1..0] by -1
    array.splice(x, 1) if array[x] == true

Resulting Javascript:

var x, _i, _j, _ref, _ref1;
 
for (x = _i = _ref = array.length - 1; _ref <= 0 ? _i <= 0 : _i >= 0; x = _ref <= 0 ? ++_i : --_i) {
    if (array[x] === true) {
        array.splice(x, 1);
    }
}
 
for (x = _j = _ref1 = array.length - 1; _j >= 0; x = _j += -1) {
    if (array[x] === true) {
        array.splice(x, 1);
    }
}

Even with these pitfalls, Coffeescript is a great language. It’s incredibly readable and provides class structures that make it easy to write object-oriented Javascript. Loose rules governing parentheses and brackets mean that Coffeescript is sometimes ambiguous (“a -1” could be a function call or basic algebra), but the ability to leave off parentheses makes for beautiful function declarations.

Know of more Coffeescript syntax pitfalls? Help me make this blog post a great resource: let me know what you think in the comments and I’ll cite you!

Explorations of Varnish

For the last couple of days, I’ve been working on a Sinatra-based web service for resizing images. You hit the service with an image URL and a desired size, and it uses the high-speed image library VIPS to scale the image down. To cache the resized images, the Populr team decided to use Varnish. Varnish makes it painless to cache HTTP responses by proxying requests to your web service and returning cached data when it’s available.

EngineYard provides a Chef script that configures Varnish automatically based on the size of your instance. However, when I activated the Chef script and started a deployment, the instance wouldn’t spin up. The /var/log/syslog on the machine showed that Varnish was failing to spawn a child process:

Jun 28 12:21:28  varnishd[12518]: Pushing vcls failed: CLI communication error (hdr)
Jun 28 12:21:28 varnishd[12518]: Child (12521) died signal=16
Jun 28 12:21:28 varnishd[12518]: Child (-1) said
Jun 28 12:21:28 varnishd[12518]: Child (-1) said Child starts

It turns out Varnish is built with 64-bit architectures in mind, and its default settings assume a 64-bit stack. On his blog, Kristian Lyngstol noted:

Varnish works on 32-bit, but was designed for 64bit. It’s all about virtual memory: Things like stack size suddenly matter on 32bit. If you must use Varnish on 32-bit, you’re somewhat on your own. However, try to fit it within 2GB. I wouldn’t recommend a cache larger than 1GB, and no more than a few hundred threads… (Why are you on 32bit again?)

Antonio Carpentieri wrote a great blog post on trying to start Varnish on a 32-bit Amazon instance, and found that the problem is the sess_workspace configuration parameter. Its default value of 262144 (256 KB) is too large to fit on a 32-bit stack. He suggests shrinking it to 19264 and also changing thread_pool_stack to 64k to prevent problems with Varnish starting its child process.
When using the Varnish Chef recipe, these changes need to be made in varnishd.monitrc.erb, which contains the commands monit uses to start and stop Varnish. The configuration parameters are conveniently passed as arguments to the start command, so they’re easy to edit:

check process varnish_80
  with pidfile /var/run/varnish.80.pid
  start program = "/usr/sbin/varnishd -a :<%= @varnish_port %> -T 127.0.0.1:6082 -s <%= @cache %> -f /etc/varnish/app.vcl -P /var/run/varnish.80.pid -u nobody -g nobody -p obj_workspace=4096 -p sess_workspace=262144 -p listen_depth=2048 -p overflow_max=<%= @overflow_max %> -p ping_interval=2 -p log_hashstring=off -h classic,5000009 -p thread_pool_max=<%= @thread_pool_max %> -p lru_interval=60 -p esi_syntax=0x00000003 -p sess_timeout=10 -p thread_pools=<%= @thread_pools %> -p thread_pool_min=100 -p shm_workspace=32768 -p thread_pool_add_delay=1"
  stop program = "/usr/bin/pkill -KILL varnish"
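
For illustration, the shape of the edit looks like this: sess_workspace drops from 262144 to 19264, and thread_pool_stack (absent from the stock command above) is added with a 64k cap. Everything else in the start command stays the same, though the exact size-unit syntax may vary between Varnish versions:

start program = "/usr/sbin/varnishd -a :<%= @varnish_port %> ... -p sess_workspace=19264 -p thread_pool_stack=64k ..."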

Rather than pursue these changes and handicap Varnish on a 32-bit server, we decided to create a custom build of the VIPS static library for EngineYard’s 64-bit high-CPU instances. That turned out to be a bit of a hassle, but EngineYard’s default Varnish Chef recipe ran without a hitch on the 64-bit machine.

Looking Outwards – Kinect Edition

So I’ve been thinking a lot about what I want to do with the Kinect. I got one for Christmas, and I still haven’t had time to do much with it. I’m a huge music person, and I’d like to create an interactive audio visualizer that takes input from body movement instead of perceivable audio qualities (volume, frequency waveforms, etc…). I think that using gestural input from a person dancing, conducting, or otherwise rocking out to music would provide a much more natural input, since it would accurately reflect the individual’s response to the audio. I can imagine pointing a Kinect at a club full of dancing people and using their movement to drive a wall-sized visualization. It’d be a beautifully human representation of the music.

I’ve been Googling to see if anyone is doing something like this already, and I haven’t been able to find anything really compelling. People have wired the Kinect through TUIO to drive fluid systems and particle emitters, but not for the specific purpose of representing a piece of music. I don’t find these very impressive, because they’re really dumbing down the rich input from the Kinect. They just treat the users’ hands as blobs, find their centers, and use those as multitouch points. It must be possible to do something more than that. But I haven’t tried yet, and I want everything to be real-time – so maybe not ;-)

Here are a few visual styles I’ve been thinking of trying to reproduce. The first is a bleeding long-exposure effect that was popularized by the iPod commercials a few years ago. Though it seems most people are doing this in After Effects, I think I can do it in OpenGL or maybe Processing:

This is possibly the coolest visualization I’ve seen in a while. However, it was done in 3D Studio Max with the Krakatoa plugin, and everything was painstakingly hand-scripted into a particle system. I love the way the light shoots through the particles (check out 0:16), though. I’d like to create something where the user’s hands are light sources… It’d be incredibly slick.

I’m not sure how to approach implementing something like this, and I’m still looking for existing platforms that can give me a leg-up. I have significant OpenGL experience and I’ve done fluid dynamics using Jos Stam’s Navier-Stokes equation solver, so I could fuse that to a custom renderer to get this done, but I’d like to focus on the art and input and let something else handle the graphics, so suggestions are welcome!

Looking Outwards – Generative Art

I’m a huge fan of Dave Bollinger’s work “Density” (http://www.davebollinger.com/works/density/). He does a mix of generative and traditional art, blending computer programming with traditional mediums. He’s done some generative works in a wood-block style, and I think they look pretty cool. Unfortunately, he doesn’t document his process very much.

There’s a service online called DNA11 (www.dna11.com) that produces generative art from DNA. You submit a small DNA sample, and they run PCR on it, colorize the result, and enlarge it onto a large canvas. I think it’s a really cool form of generative art because it’s completely personalized.

I think it’d be fun to use this assignment to create an art piece I can hang in my apartment (my walls are looking pretty bare right now…) so I’ve been focusing on generative art that creates static images. I found the work of Marius Watz pretty interesting because he uses code to produce large wall-sized artworks that are visually intriguing and have a lot of originality from piece to piece, while retaining a sense of unity among the set. You can browse the collection of final images here: http://systemc.unlekker.net/showall.php?id=SystemC_050114_150004_04.

Cool Computational Art

The Graffiti Analysis project by Evan Roth makes an effort to capture the motion of graffiti in an artistic fashion. I’m interested in using the Kinect to capture hand gestures representative of audio, and I think this is a really cool visualization of gestural input. The way that velocity information is presented as thin rays is visually appealing. I think it would be more interesting if the project incorporated color, though, since real graffiti communicates with the viewer using color as well as shape.

Cosmogramma Fieldlines is an interactive music visualization created in OpenFrameworks by Aaron Meyers for the release of an album by the band Flying Lotus. I really like the steampunk, ink-and-paper graphic design of the project, and I like the way the lines radiating from the object in the center converge around the “planets.” I think it’d be cool to change the interaction so the user could “strum” or otherwise manipulate the radial lines instead of the planets, though that might be harder to do.

This project, called “Solar Rework”, is a really fantastic audio visualization that uses colored blobs, bright colors, and glassy “waves” to represent the audio data. I think it’s cool because it visually conveys the idea that the sound is “washing over” the blobs in the scene. I really don’t have any complaints with this one, except that I wish there were source code I could download and try out myself.

http://www.turbulence.org/Works/song/mono.html

The Shape of Song is a way of visualizing music that reveals repetition within a track. It’s an interesting way of profiling a song and revealing the underlying data, and the implementation uses arcs to produce some pretty cool looking shapes. Unfortunately, the visualization is static: when I ran it for the first time, I expected the visualization to be generated as I listened to the song, and I was a little disappointed when it was already there.

Text Rain

I implemented the Text Rain exercise in Processing, using code from the Background Subtraction example at Processing.org to do the underlying detection of objects in the scene.

Learning Processing – Schotter

In Processing.js:

//Info: http://processingjs.org/reference

void setup() {
    size(404, 730);

    int rows = 22;
    int cols = 12;
    int count = cols * rows;
    int rect_width = 384 / cols;
    int rect_height = rect_width;

    smooth();
    translate(10, 10);
    background(#ffffff);
    noFill();

    for (int ii = 0; ii < count; ii++) {
        int origin_x = (ii % cols) * rect_width;
        int origin_y = floor(ii / cols) * rect_height;

        // Displacement and rotation grow quadratically toward the bottom rows
        float randomness = ((float)ii / (float)count);
        float rand_rad = (random(2) - 1) * randomness * randomness;
        float rand_x = (random(8) - 4) * randomness * randomness;
        float rand_y = (random(8) - 4) * randomness * randomness;

        translate(origin_x, origin_y);
        rotate(rand_rad);
        rect(rand_x, rand_y, rect_width, rect_height);
        rotate(-rand_rad);
        translate(-origin_x, -origin_y);
    }
}

void draw() {
}

As a Java applet:

As code:

size(404, 730);

int rows = 22;
int cols = 12;
int count = cols * rows;
int rect_width = 384 / cols;
int rect_height = rect_width;

smooth();
translate(10, 10);
background(#ffffff);
noFill();

for (int ii = 0; ii < count; ii++) {
    int origin_x = (ii % cols) * rect_width;
    int origin_y = floor(ii / cols) * rect_height;

    float randomness = ((float)ii / (float)count);
    float rand_rad = (random(2f) - 1f) * randomness * randomness;
    float rand_x = (random(8f) - 4f) * randomness * randomness;
    float rand_y = (random(8f) - 4f) * randomness * randomness;

    translate(origin_x, origin_y);
    rotate(rand_rad);
    rect(rand_x, rand_y, rect_width, rect_height);
    rotate(-rand_rad);
    translate(-origin_x, -origin_y);
}

A video demonstrating the same sketch, done in OpenFrameworks as well:

HexDefense

Intense, arcade-style tower defense for Android

The Story

HexDefense started as a class project for a mobile prototyping lab I took while at Carnegie Mellon. The lab required that apps be written in Java on the Android platform, and I figured it’d be a good opportunity to try writing a game. I’m a big fan of the tower defense genre and I’ve been heavily influenced by games on the iPhone like Field Runners and GeoDefense Swarm. From the outset, I wanted the game to have arcade style graphics reminiscent of Geometry Wars. That way, I figured, I wouldn’t have to find an artist to create the sprites, and I could focus on explosive OpenGL particle effects and blend-based bloom.

During the fall semester, I collaborated with Paul Caravelli and Tony Zhang on the first iteration of the game. I had the strongest graphics and animation background, so I focused on the gameplay and wrote all of the OpenGL code behind the game. I also created most of the game model, implementing the towers and creeps and creating actions with game logic for tower targeting, attacks, projectile motion, explosions, implosions and other effects. Paul contributed path finding code for the creeps based on breadth-first-search and created interfaces for implementing in-game actions based on the command pattern. He also contributed the original implementation of the grid model and worked on abstract base classes in the game model. Tony created the app’s settings screen and linked together activities for the different screens of the application.

At the end of the fall semester, the game was functional but unrefined. There were no sounds, no levels, and I’d only created one type of enemy. After the class ended, I talked with Paul and decided to finish it over my Christmas break. Paul was too busy to continue working on the app, so I continued development independently. I worked full-time for four weeks to deliver the level of polish I was accustomed to on the iPhone. I refined the graphics, tested the app across a variety of phones, and added fifteen levels. I also added 3D directional sound and boss creeps, and wrapped everything in a completely new look and feel. People say that the last 10% is 90% of the work, and I think that’s particularly true on Android: there are minor differences across devices that make writing a solid game a lot more work than I expected.

The game was released at the end of January and has been well received so far. I created a lot of promotional art and set up a website with gameplay footage and press resources, and the game has garnered quite a bit of attention. It’s been featured on the front page of the Android marketplace and holds a 4.5-star rating. It’s rising in the “Paid Apps” rankings and is currently the #16 most popular game on the Android platform!

Lessons Learned:

I’ve learned a lot about the Android platform while developing HexDefense. A couple of tips and takeaways:

  1. Let the OpenGL view run in continuous mode (RENDERMODE_CONTINUOUSLY). Nothing else (timers, threads that trigger redraws) will give performance close to this.
  2. Write all of the game logic so that it can advance the model by an arbitrary number of milliseconds. Because multitasking can cause hiccups in the game framerate, this is _really_ important for a smooth game. (A sketch of this pattern follows the list.)
  3. OpenGL textures are not numbered sequentially on all devices. The original DROID will choose random integer values each time you call glGenTextures.
  4. There are numerous drawbacks to using the Java OpenGL API. If your game needs to modify vertex or texcoord buffers every frame, you’ll have to accept a performance hit. The deformation of the grid in HexDefense is achieved by modifying the texcoords on a sub-segmented plane, and passing the data through a ByteBuffer to OpenGL is not cool.
  5. The iPhone’s OpenGL implementation is at least 2.5x faster, even on devices with half the processor speed. An iOS port of HexDefense is in progress, and the game runs twice as fast on an original iPod Touch as it does on a Nexus One. There are a lot of reasons for this, but it seems that drawing large textured quads has greater speed implications on Android devices.
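
As an aside on lesson 2, here’s a minimal sketch of the arbitrary-timestep pattern, written in Objective-C to match the iOS port mentioned above (gameModel, advanceByMilliseconds:, and glView are illustrative names, not APIs from the game):

// Called once per frame by the render loop. The model advances by the
// measured wall-clock delta, so a framerate hiccup slows drawing but
// never changes the speed of the simulation.
- (void)tick
{
    CFTimeInterval now = CACurrentMediaTime();   // from QuartzCore
    double elapsedMs = (now - _lastTickTime) * 1000.0;
    _lastTickTime = now;

    // Clamp enormous gaps (e.g. returning from the background) so the
    // simulation doesn't lurch forward when play resumes.
    if (elapsedMs > 250.0) elapsedMs = 250.0;

    [self.gameModel advanceByMilliseconds:elapsedMs];
    [self.glView render];
}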