
Kinect Hand-tracking Visualization

What if you could use hand gestures to control an audio visualization? Instead of relying on audio metrics like frequency and volume, you could base the visualization on the user’s interpretation of perceivable audio qualities. The end result would be a better reflection of the way that people feel about music.

To investigate this, I wrote an OpenFrameworks application that uses depth data from the Kinect to identify hands in a scene. Information about the user's hands – position, velocity, heading, and size – is used to create an interactive visualization with long-exposure motion trails and particle effects.

There were a number of challenges in this project. I started with Processing, but it was too slow to extract hands and render the point sprite effects I wanted, so I switched to OpenFrameworks and began using OpenNI to extract a skeleton from the Kinect depth image. OpenNI worked well and extracted a full skeleton with wrists that could be tracked, but it was difficult to iterate on because skeletal detection took nearly a minute every time I tested the visualization. That got frustrating quickly, and I decided to do hand detection myself.

Detecting Hands in the Depth Image

I chose a relatively straightforward approach to finding hands in the depth image. I made three significant assumptions that made realtime detection possible:

  1. The user's body intersects the bottom of the frame.
  2. The user is the closest thing in the scene.
  3. The user's hands are extended (at least slightly) in front of their body.

Assumption 1 is important because it allows for automatic depth thresholding. By assuming that the user intersects the bottom of the frame, we can scan the bottom row of depth pixels to determine the depth of the user's body. The hand detection then ignores anything farther away than the user.

Assumptions 2 and 3 are important for the next step in the process. The application looks for local minima in the depth image – the points nearest the camera – and then uses a breadth-first search to repeatedly expand each blob to neighboring points and find the boundaries of the hands. Each pixel is scored based on its depth and its distance from the seed point. Pixels claimed by one hand cannot be claimed by another, which prevents nearby minima within the same hand from generating multiple blobs.
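A minimal, Processing-style sketch of these two steps follows (the real project is C++/OpenFrameworks; every name and threshold here is hypothetical):

import java.util.ArrayList;
import java.util.ArrayDeque;

// 'depth' is assumed to be a width*height array of millimeter readings, 0 = no data.
int findBodyDepth(int[] depth, int w, int h) {
  // Assumption 1: the user intersects the bottom row, so the nearest
  // reading in that row approximates the depth of the user's body.
  int nearest = Integer.MAX_VALUE;
  for (int x = 0; x < w; x++) {
    int d = depth[(h - 1) * w + x];
    if (d > 0 && d < nearest) nearest = d;
  }
  return nearest;
}

// Grow a blob outward from a local minimum (seed) with breadth-first search.
// 'claimed' marks pixels already assigned to a hand, so a second seed inside
// the same hand cannot produce a second blob.
ArrayList<Integer> growHand(int[] depth, boolean[] claimed, int w, int h, int seed, int bodyZ) {
  ArrayList<Integer> blob = new ArrayList<Integer>();
  ArrayDeque<Integer> queue = new ArrayDeque<Integer>();
  queue.add(seed);
  claimed[seed] = true;
  int sx = seed % w, sy = seed / w;
  while (!queue.isEmpty()) {
    int p = queue.poll();
    int x = p % w, y = p / w;
    // Score by depth (must be in front of the body, assumptions 2 and 3)
    // and by distance from the seed (hands have a bounded radius).
    if (depth[p] == 0 || depth[p] > bodyZ - 100) continue;  // 100 mm margin, assumed
    if (dist(x, y, sx, sy) > 60) continue;                  // max hand radius in px, assumed
    blob.add(p);
    int[] neighbors = { p - 1, p + 1, p - w, p + w };
    for (int n : neighbors) {
      if (n < 0 || n >= w * h || claimed[n]) continue;
      if (Math.abs(n % w - x) > 1) continue;  // don't wrap across row edges
      claimed[n] = true;
      queue.add(n);
    }
  }
  return blob;
}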

Interpreting Hands

Once pixels in the depth image have been identified as hands, a bounding box is created around each one. The bounding boxes are compared to those found in the previous frame and matched together, so that the user’s two hands are tracked separately.

Once each blob has been associated with the left or right hand, the algorithm determines the heading, velocity and acceleration of the hand. This information is averaged over multiple frames to eliminate noise.
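One common way to do this kind of smoothing is an exponentially weighted moving average. A sketch with hypothetical names (the post doesn't specify the exact averaging used):

// Smoothed velocity of one hand. 'alpha' controls how much each new frame
// contributes, trading responsiveness against noise rejection.
float smoothedVX = 0, smoothedVY = 0;
float alpha = 0.2;

void updateVelocity(float x, float y, float prevX, float prevY) {
  float vx = x - prevX;  // raw per-frame velocity
  float vy = y - prevY;
  smoothedVX = alpha * vx + (1 - alpha) * smoothedVX;
  smoothedVY = alpha * vy + (1 - alpha) * smoothedVY;
  // heading falls out of the smoothed components
  float heading = atan2(smoothedVY, smoothedVX);
}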

Long-Exposure Motion Trails

The size and location of each hand are used to extend a motion trail from the user's hand. The trail is stored as an array of points, each with an X and Y position and a size. To render the motion trail, overlapping, alpha-blended point sprites are drawn along its entire length, and a Catmull-Rom spline is used to interpolate between the points and create a smooth path. Though it might seem best to append a point to the trail every frame, that tends to amplify noise. In the version below, a point is added every three frames; this increases the distance between the points in the trail and allows for more smoothing from the Catmull-Rom interpolation.
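Conveniently, Processing's curveVertex() uses Catmull-Rom interpolation, so the trail idea can be sketched in a few lines (the real app draws alpha-blended point sprites in OpenGL rather than a stroked curve):

// Minimal Processing sketch of a smoothed trail: record a point every
// third frame, and let curveVertex() (Catmull-Rom) smooth between them.
ArrayList<PVector> trail = new ArrayList<PVector>();

void setup() {
  size(640, 480);
  noFill();
  stroke(255, 160);  // alpha-blended, like the point sprites
}

void draw() {
  background(0);
  if (frameCount % 3 == 0) {  // sample every three frames
    trail.add(new PVector(mouseX, mouseY));
    if (trail.size() > 40) trail.remove(0);  // cap the trail length
  }
  beginShape();
  for (PVector p : trail) curveVertex(p.x, p.y);
  endShape();
}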

Hand Centers

One of the early problems with the hand-tracking code was that the centers of the blob bounding boxes were used as the input to the motion trails. When the user held up their forearm perpendicular to the camera, the entire length of the arm was recognized as a hand, and the trail attached to the middle of the forearm. To better determine where the center of the hand was, I wrote a midpoint finder based on iterative erosion of the blobs. This provided much more accurate hand centers for the motion trails.
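A sketch of the erosion-based midpoint idea (illustrative Java, not the project's C++ code):

// Peel away the blob's boundary pixels layer by layer; the last surviving
// pixel is a good estimate of the palm center, even when the blob
// includes the whole forearm.
int erodedCenter(boolean[] blob, int w, int h) {
  boolean[] cur = blob.clone();
  int last = -1;
  boolean changed = true;
  while (changed) {
    changed = false;
    boolean[] next = cur.clone();
    for (int y = 1; y < h - 1; y++) {
      for (int x = 1; x < w - 1; x++) {
        int p = y * w + x;
        if (!cur[p]) continue;
        last = p;
        // erode any pixel with an empty 4-neighbor
        if (!cur[p - 1] || !cur[p + 1] || !cur[p - w] || !cur[p + w]) {
          next[p] = false;
          changed = true;
        }
      }
    }
    cur = next;
  }
  return last;  // index of one of the innermost pixels
}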

Particle Effects

After the long-exposure motion trails were working properly, I decided the visualization needed more engaging visuals. Particles seemed like a good solution because they could augment the feeling of motion created by the user's gestures. Particles are created when the hand blobs are in motion, with more particles created at higher hand velocities. The particles stream off the motion trail in the direction of motion, curve slightly as they move away from the hand, and fade and disappear after a set number of frames.
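A sketch of that spawning rule (all names and constants here are illustrative):

// Illustrative particle spawning: faster hands emit more particles.
class TrailParticle {
  float x, y, vx, vy, curvature;
  int life;
}
ArrayList<TrailParticle> particles = new ArrayList<TrailParticle>();

void emitParticles(float handX, float handY, float vx, float vy) {
  float speed = sqrt(vx * vx + vy * vy);
  int count = int(speed * 0.5);  // spawn rate scales with hand velocity
  for (int i = 0; i < count; i++) {
    TrailParticle p = new TrailParticle();
    p.x = handX;
    p.y = handY;
    p.vx = vx + random(-1, 1);  // stream off in the direction of motion
    p.vy = vy + random(-1, 1);
    p.curvature = random(-0.05, 0.05);  // slight curve as it travels
    p.life = 60;                        // fades and dies after a set frame count
    particles.add(p);
  }
}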

Challenges and Obstacles

This was my first use of OpenFrameworks and the open-source ofxKinect addon. It was also my first attempt at blob detection and blob midpoint finding, so I'm happy those worked out nicely. I investigated Processing and OpenNI but chose not to use them because of performance and iteration-time problems, respectively.

Live Demo

The video below shows the final visualization. It was generated in real time from improvised hand gestures I performed while listening to “Dare You to Move” by the Vitamin String Quartet.

SMS Visualization

Using SMS Logs to Explore your Social Network

The Question

What can you infer about someone’s social network from their text messaging activity?

A few months ago, I started working on an app that syncs text messages from an Android phone to a desktop client for the Mac. The idea was to decouple text messaging from the phone, enabling the user to have a conversation anywhere and seamlessly transition between messaging on the phone and messaging on a laptop or desktop.

While developing the application, I noticed that the thousands of messages synced by the app revealed interesting trends about my conversational patterns, and it seemed like a perfect data set for a visualization. But what? I drew up a bunch of ideas on paper, and a few stuck with me.

Idea 1: A wavestream visualization of the number of messages sent to and received from certain contacts, over time.

Idea 2: A spring graph network with bubbles representing the different contacts you message, with size and distance from the center representing relative messaging frequency…

The Data

The application downloads all the user’s messages from their phone and stores them in an SQLite database. A text dump from this database formed the data used in the visualization. The format of the text dump is shown below:

6157141096:::Allison:::1:::119:::87

The text is ‘:::’ delimited. The columns are as follows:

  1. The phone number messaged
  2. The display name of the user messaged
  3. The origin of the message (0 = your phone, 1 = theirs)
  4. The frame number on which the message should appear in the animation. This is calculated by taking the timestamp of when the message was sent or received, subtracting the timestamp of the first message in the animation, and dividing by an acceleration factor (see the sketch after this list).
  5. The length of the text content in the message.
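For illustration, the frame-number calculation in column 4 looks roughly like this (both constants are made-up values, not the ones used to generate the real data):

// Hypothetical illustration of the column-4 calculation.
long firstTimestamp = 1262304000000L;  // timestamp of the first message (example)
long accelerationFactor = 3600000L;    // e.g. one hour of real time per frame

int frameForMessage(long messageTimestamp) {
  return (int)((messageTimestamp - firstTimestamp) / accelerationFactor);
}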

The Visualization

The Discoveries

The biggest discovery along the way was that Processing is pretty cool and very easy to use. I have a lot of experience working with OpenGL and Mac OS X’s Quartz 2D APIs, and Processing was a nice surprise: I was able to go from concept to an early working version of the visualization in one afternoon. My one big complaint is that there’s no built-in debugger whatsoever, which feels pretty primitive coming from a programming background. I’ve heard you can use Eclipse somehow, so I’ll try that next time.

I was unsure of how to create a graph of interconnected nodes in Processing. I wanted to create one dynamically, without advance knowledge of the number of nodes needed, and I didn’t want to write layout code myself. I thought that a spring physics model would allow the graph to be self-organizing. Some searching turned up the Traer Physics library, which I dropped into my Processing libraries folder and linked into my sketch by binding each contact object to a particle in the physics simulation. That was it. There was much rejoicing.

Each node was added to the physics model as a solid body, and negative attractions were added between the nodes to make them spread out evenly. Springs were added between each node and the center ring. This turned out to be a great solution because the resting length of each spring could be adjusted to move a node toward or away from the center. I’d wanted that behavior the whole time, and the springs made it possible to animate that part of the visualization smoothly, too.

My original idea was to represent each message sent or received as an arc between nodes. However, I wasn’t sure whether Processing could draw the required number of curves at a decent framerate. With a data set of over 2,300 messages, I was pretty sure it would become unworkably slow. Big surprise: it’s Java, and it did. I had to add a shortcut to disable the lines so I could rapidly test the visualization.

The Critique

Overall, I’m pretty happy with the visualization. I was able to animate it, and it achieved the initial goal of revealing the social network inferred by your messaging habits. There are a few things I’d like to explore in the data that the visualization doesn’t reveal, though. There’s a lot of data in the actual text contents of the messages that would be fun to look at. How often do people use emoticons? Do you use emoticons more frequently when the person you’re talking to also does? Is there a bimodal distribution in message length that implies that some messages are part of complex multi-message conversations while others are simple “pings?” Answering those questions would require other visualizations, I think, but I’m really curious.

The Code

Dependencies: The Processing applet requires the Traer Physics library.

Example Code Used: The code below draws on a large amount of sample code from Processing.org and from the documentation of the Traer Physics library. The Processing “Load File 2” example was particularly useful. The code for the wavestream was written from scratch (in a rather hacky way); I’m still looking for a good library that creates them!

The source code is available here.

import traer.physics.*;
import java.util.Date;
import java.text.SimpleDateFormat;
import java.lang.Long;

static int RECENT_FRAME_COUNT = 50;

boolean drawLines;
boolean simulationComplete;
boolean useCharacterCounts;

String[] lines;
int index = 0;
ArrayList messages;
int firstUnusedMessageIndex;
ArrayList contacts;
Contact me;
float mostMessagesInContact;

long timestamp;
int timestampsPerFrame;
int frameNumber;
int framesInSimulation;

int timeBeforeDecay = 150;

ParticleSystem system;

void setup() {
  size(900, 900);
  background(0);
  stroke(255);
  smooth();

  lines = loadStrings("/sms.txt");
  firstUnusedMessageIndex = 0;
  messages = new ArrayList();
  contacts = new ArrayList();

  frameNumber = 0;
  framesInSimulation = 0;

  drawLines = true;
  simulationComplete = false;
  useCharacterCounts = true;
  frameRate(60);

  system = new ParticleSystem();
  system.setIntegrator(ParticleSystem.RUNGE_KUTTA);
  system.setDrag(0.5);

  // The first line of the dump is the timestamp of the first message.
  timestamp = Long.parseLong(lines[0]);
  timestampsPerFrame = 10;
  index++;

  // Parse the remaining ':::'-delimited lines into SMS objects.
  while (index < lines.length) {
    String[] pieces = split(lines[index], ":::");
    if (pieces.length > 1) {
      messages.add(new SMS(pieces));
      println(lines[index]);
    }
    index = index + 1;
  }
  println("loaded " + messages.size() + " messages");

  // Count messages per phone number to find the most-messaged contact.
  HashMap hm = new HashMap();
  for (int ii = 0; ii < messages.size(); ii++) {
    SMS m = (SMS)messages.get(ii);
    if (hm.containsKey(m.phoneNumber)) {
      int count = int((Integer)(hm.get(m.phoneNumber)));
      hm.put(m.phoneNumber, new Integer(count + 1));
    }
    else
      hm.put(m.phoneNumber, new Integer(1));
  }

  Iterator i = hm.entrySet().iterator();
  mostMessagesInContact = 0;
  while (i.hasNext()) {
    Map.Entry entry = (Map.Entry)i.next();
    int v = int((Integer)entry.getValue());
    if (v > mostMessagesInContact)
      mostMessagesInContact = v;
  }

  framesInSimulation = ((SMS)messages.get(messages.size() - 1)).timestamp;

  me = new Contact("ME", "ME", width / 2, height / 2 + 50);
  me.root.makeFixed();
  me.radius = 20;
  contacts.add(me);
}

void mouseClicked() {
  drawLines = !drawLines;
}

void keyPressed() {
  useCharacterCounts = !useCharacterCounts;
}

void draw() {
  // Clear the canvas regions that are redrawn every frame.
  fill(0, 0, 0);
  noStroke();
  rect(0, 100, width, height);
  rect(0, 0, width / 3 + 1, height / 3);
  rect(width - 100, 0, 100, 41);

  // Legend.
  fill(255, 60, 60);
  rect(20, 20, 30, 20);
  fill(60, 255, 60);
  rect(20, 50, 30, 20);
  fill(255, 255, 255);
  if (useCharacterCounts) {
    text("Volume of Text Sent To", 55, 65);
    text("Volume of Text Received From", 55, 35);
  }
  else {
    text("Messages Sent To", 55, 65);
    text("Messages Received From", 55, 35);
  }
  text(firstUnusedMessageIndex + " / " + messages.size(), 20, 900 - 30);

  Date d = new Date(timestamp);
  SimpleDateFormat dateformatMMDDYYYY = new SimpleDateFormat("MM/dd/yyyy");
  text(dateformatMMDDYYYY.format(d), width - 100, 35);

  float wavestreamStepPerFrame = float(width * 2 / 3) / float(framesInSimulation);
  println(wavestreamStepPerFrame);
  int wavestreamX = int(frameNumber * wavestreamStepPerFrame) + width / 3;
  float wavestreamCenterY = 60;
  float wavestreamSmoothing = 0.95;

  float accumulatedSent = 0;
  float accumulatedReceived = 0;

  for (int ii = 0; ii < contacts.size(); ii++) {
    Contact c = (Contact)contacts.get(ii);

    // determine the height of this bar of the wavestream
    float r = 0;
    for (int x = 0; x < RECENT_FRAME_COUNT; x++)
      r += c.messagesSentInRecentFrames[x] * 0.7f;

    c.sentVelocity = (float)c.sentVelocity * wavestreamSmoothing + (r - c.messagesSentAvg) * (float)(1 - wavestreamSmoothing);
    c.messagesSentAvg = c.messagesSentAvg * wavestreamSmoothing + c.sentVelocity * (1 - wavestreamSmoothing);

    // green for sent; the commented-out line picks a per-contact color instead
    stroke(80, 255, 80);
    //stroke((212 * ii) % 255, (190 * ii) % 255, (123 * ii) % 255);
    line(wavestreamX, wavestreamCenterY + accumulatedSent, wavestreamX, wavestreamCenterY + accumulatedSent + c.messagesSentAvg);
    accumulatedSent += c.messagesSentAvg;

    r = 0;
    for (int x = 0; x < RECENT_FRAME_COUNT; x++)
      r += c.messagesReceivedInRecentFrames[x] * 0.7f;

    stroke(255, 80, 80);
    c.receivedVelocity = (float)c.receivedVelocity * wavestreamSmoothing + (r - c.messagesReceivedAvg) * (float)(1 - wavestreamSmoothing);
    c.messagesReceivedAvg = c.messagesReceivedAvg * wavestreamSmoothing + c.receivedVelocity * (1 - wavestreamSmoothing);

    line(wavestreamX, wavestreamCenterY - accumulatedReceived, wavestreamX, wavestreamCenterY - accumulatedReceived - c.messagesReceivedAvg);
    accumulatedReceived += c.messagesReceivedAvg;
  }

  for (int ii = 0; ii < firstUnusedMessageIndex; ii++) {
    SMS s = (SMS)messages.get(ii);
    s.draw();
    s.update();
  }

  for (int ii = 0; ii < contacts.size(); ii++) {
    Contact c = (Contact)contacts.get(ii);
    c.draw();
    c.update();
  }

  for (int ii = 0; ii < contacts.size(); ii++) {
    Contact c = (Contact)contacts.get(ii);
    if ((mouseX > c.x - c.radius) && (mouseX < c.x + c.radius) && (mouseY > c.y - c.radius) && (mouseY < c.y + c.radius)) {
      c.drawHoverLabel();
    }
  }

  system.tick();
  frameNumber += timestampsPerFrame;

  if (!simulationComplete)
    timestamp = timestamp + (1000 * 1000) * 10;

  // Spawn any messages whose frame has arrived.
  boolean found = true;
  while (found == true) {
    if (firstUnusedMessageIndex >= messages.size()) {
      simulationComplete = true;
      break;
    }

    SMS m = (SMS)messages.get(firstUnusedMessageIndex);
    if (m.timestamp < frameNumber) {
      // the message has been sent this frame! figure out if we need to
      // create the contact object that represents it.
      Contact c = contactForPhoneNumber(m.phoneNumber);

      if (c == null) {
        c = new Contact(m.phoneNumber, m.name, width / 2 + random(-300, 300), height / 2 + random(-300, 300));
        contacts.add(c);
      }

      // are we starting at the contact and going to ME or the other way around?
      if (m.origin == 0) {
        m.source = me;
        m.notme = c;
        m.destination = c;
      }
      else {
        m.source = c;
        m.notme = c;
        m.destination = me;
      }

      // make it so the other contact is pulled toward us just a bit more. They're more
      // important in my life.
      c.relationshipSpring.setRestLength(max(c.radius + 25, 0.99 * c.relationshipSpring.restLength()));

      // let the message initialize its state
      m.prepareForAnimation();
      firstUnusedMessageIndex++;
    }
    else {
      found = false;
      break;
    }
  }
  //saveFrame();
}

Contact contactForPhoneNumber(String phoneNumber) {
  for (int ii = 0; ii < contacts.size(); ii++)
    if (((Contact)contacts.get(ii)).phoneNumber.equals(phoneNumber))
      return (Contact)contacts.get(ii);
  println("Couldn't find contact with pn = " + phoneNumber);
  return null;
}

// A single text message, animated as a dot traveling between contacts.
class SMS {
  int origin;
  String phoneNumber;
  String name;
  int timestamp;
  int charcount;

  // only set once the SMS has been spawned
  Contact source;
  Contact destination;
  Contact notme;
  float x, y, destRadOffset, destROffset;
  float cx, cy;
  float fraction;
  float radius = 4;
  int age;

  public SMS(String[] pieces) {
    phoneNumber = pieces[0];
    name = pieces[1];
    origin = int(pieces[2]);
    timestamp = int(pieces[3]);
    charcount = int(pieces[4]);
  }

  public void prepareForAnimation() {
    x = source.x;
    y = source.y;
    destRadOffset = random(0, TWO_PI);  // angle in radians
    destROffset = random(0, 1);
    cx = random(-190, 190);
    cy = random(-190, 190);
  }

  public void draw() {
    noFill();
    if (drawLines) {
      int b = max(30, 255 - age);
      if (origin == 0)
        stroke(b / 3, b, b / 3);  // green = sent
      else
        stroke(b, b / 3, b / 3);  // red = received

      float destOffsetX = cos(destRadOffset) * destROffset * notme.radius;
      float destOffsetY = sin(destRadOffset) * destROffset * notme.radius;
      curve(me.x, me.y, me.x, me.y, notme.x + destOffsetX, notme.y + destOffsetY, notme.x + cx, notme.y + cy);
    }
    if (fraction < 1) {
      fill(255, 255, 255);
      noStroke();
      ellipse(x, y, radius * 2, radius * 2);
    }
  }

  public void update() {
    if (!simulationComplete)
      age++;

    if (fraction < 1) {
      fraction += 0.1;
      if (fraction >= 1) {
        notme.wasMessaged(origin, charcount);
      }
      // interpolate the dot from the source toward the destination
      x = source.x * (1 - fraction) + destination.x * fraction;
      y = source.y * (1 - fraction) + destination.y * fraction;
    }
  }
}

// A person in the graph, bound to a particle in the physics simulation.
class Contact {
  String phoneNumber;
  String name;
  float x, y;
  float radius;
  Particle root;
  Spring relationshipSpring;
  int framesSinceMessage;

  int messagesSent;
  int messagesReceived;
  int[] messagesSentInRecentFrames;
  int[] messagesReceivedInRecentFrames;
  float messagesSentAvg;
  float messagesReceivedAvg;
  float sentVelocity;
  float receivedVelocity;

  int charsSent;
  int charsReceived;

  public Contact(String pn, String n, float x, float y) {
    radius = 6;
    phoneNumber = pn;
    name = n;
    framesSinceMessage = 0;
    root = system.makeParticle(1, x, y, 0);
    messagesSentInRecentFrames = new int[RECENT_FRAME_COUNT];
    messagesReceivedInRecentFrames = new int[RECENT_FRAME_COUNT];

    // negative attractions push the nodes apart so they spread out evenly
    for (int ii = 0; ii < contacts.size(); ii++)
      system.makeAttraction(root, ((Contact)contacts.get(ii)).root, -3, 5);

    if (pn.equals("ME") == false)
      relationshipSpring = system.makeSpring(root, me.root, 0.05, 0.8, 350);
  }

  public void draw() {
    noStroke();
    if (this != me) {
      int b = max(50, 255 - framesSinceMessage);

      fill(b, b, b);
      ellipse(x, y, radius * 2, radius * 2);

      // inner pie chart: sent vs. received ratio
      if (messagesReceived + messagesSent > 0) {
        float receivedAngle;
        float sentAngle;
        if (useCharacterCounts) {
          receivedAngle = (charsReceived / (float)(charsReceived + charsSent)) * (2 * PI);
          sentAngle = (charsSent / (float)(charsReceived + charsSent)) * (2 * PI);
        }
        else {
          receivedAngle = (messagesReceived / (float)(messagesReceived + messagesSent)) * (2 * PI);
          sentAngle = (messagesSent / (float)(messagesReceived + messagesSent)) * (2 * PI);
        }

        float innerRadius = max(radius * 1.5, radius * 2 - 5);
        fill(b, b / 3, b / 3);
        arc(x, y, innerRadius, innerRadius, -PI / 2, -PI / 2 + receivedAngle);
        fill(b / 3, b, b / 3);
        arc(x, y, innerRadius, innerRadius, -PI / 2 + receivedAngle, -PI / 2 + receivedAngle + sentAngle);
      }
    }
    else {
      fill(255, 255, 255);
      ellipse(x, y, radius * 2, radius * 2);
      fill(0, 0, 0);
      ellipse(x, y, radius * 1.5, radius * 1.5);
    }
  }

  public void drawHoverLabel() {
    // draw the label four times in black for an outline, then once in white
    String s = name + " (" + phoneNumber + ")";
    fill(0);
    text(s, x + 9, y + 11);
    fill(0);
    text(s, x + 11, y + 11);
    fill(0);
    text(s, x + 9, y + 9);
    fill(0);
    text(s, x + 11, y + 9);
    fill(255, 255, 255);
    text(s, x + 10, y + 10);
  }

  public void update() {
    if (!simulationComplete)
      framesSinceMessage++;
    x = root.position().x();
    y = root.position().y();

    // contacts that haven't messaged recently drift back toward the edge
    if ((framesSinceMessage > timeBeforeDecay) && (relationshipSpring != null))
      relationshipSpring.setRestLength(min(350, 1.005 * relationshipSpring.restLength()));

    for (int ii = RECENT_FRAME_COUNT - 2; ii >= 0; ii--) {
      messagesReceivedInRecentFrames[ii + 1] = messagesReceivedInRecentFrames[ii];
      messagesSentInRecentFrames[ii + 1] = messagesSentInRecentFrames[ii];
    }
    messagesReceivedInRecentFrames[0] = 0;
    messagesSentInRecentFrames[0] = 0;
  }

  public void wasMessaged(int origin, int charcount) {
    radius = radius + 0.25 * (300.0 / mostMessagesInContact);
    root.setMass(root.mass() + 0.25);
    framesSinceMessage = 0;

    if (origin == 0) {
      messagesSent++;
      charsSent += charcount;
      messagesSentInRecentFrames[0]++;
    }
    else {
      messagesReceived++;
      charsReceived += charcount;
      messagesReceivedInRecentFrames[0]++;
    }
  }
}

A note about source data: Unfortunately, the source data for this experiment contains sensitive information, including people’s phone numbers and names. I’ll be releasing the SMS synchronizing app for Mac and Android soon, and it will allow you to gather and format your own messaging history for visualization.

HexDefense for Android

HexDefense

Intense, arcade-style tower defense for Android

The Story

HexDefense started as a class project for a mobile prototyping lab I took while at Carnegie Mellon. The lab required that apps be written in Java on the Android platform, and I figured it’d be a good opportunity to try writing a game. I’m a big fan of the tower defense genre and I’ve been heavily influenced by iPhone games like Fieldrunners and geoDefense Swarm. From the outset, I wanted the game to have arcade-style graphics reminiscent of Geometry Wars. That way, I figured, I wouldn’t have to find an artist to create the sprites, and I could focus on explosive OpenGL particle effects and blend-based bloom.

During the fall semester, I collaborated with Paul Caravelli and Tony Zhang on the first iteration of the game. I had the strongest graphics and animation background, so I focused on the gameplay and wrote all of the OpenGL code behind the game. I also created most of the game model, implementing the towers and creeps and creating actions with game logic for tower targeting, attacks, projectile motion, explosions, implosions and other effects. Paul contributed pathfinding code for the creeps based on breadth-first search and created interfaces for implementing in-game actions based on the command pattern. He also contributed the original implementation of the grid model and worked on abstract base classes in the game model. Tony created the app’s settings screen and linked together activities for the different screens of the application.

At the end of the fall semester, the game was functional but unrefined. There were no sounds, no levels, and I’d only created one type of enemy. After the class ended, I talked with Paul and decided to finish it over my Christmas break. Paul was too busy to continue working on the app, so I continued development independently. I worked full-time for four weeks to deliver the level of polish I was accustomed to on the iPhone. I refined the graphics, tested the app across a variety of phones and added fifteen levels. I also added 3D directional sound, boss creeps and wrapped everything in a completely new look and feel. People say the last 10% is 90% of the work, and I think that’s particularly true on Android – there are minor differences across devices that make writing a solid game a lot more work than I expected.

The game was released at the end of January and has been well received so far. I created a lot of promotional art and set up a website with gameplay footage and press resources, and the game has garnered quite a bit of attention. It’s been featured on the front page of the Android Market, holds a 4.5-star rating, is rising in the paid-app rankings, and is currently the #16 most popular game on the Android platform!

Lessons Learned:

I’ve learned a lot about the Android platform developing HexDefense. A few tips and takeaways:

  1. Let the OpenGL view run in CONTINUOUS mode. Nothing else (timers, threads that trigger redraws) will give performance close to this.
  2. Write all of the game logic so that it can advance the model by an arbitrary number of milliseconds. Because multitasking can cause hiccups in the game framerate, this is _really_ important for a smooth game (see the sketch after this list).
  3. OpenGL textures are not numbered sequentially on all devices. The original DROID will return seemingly random integer values from each call to glGenTextures.
  4. There are numerous drawbacks to using the Java OpenGL API. If your game needs to modify vertex or texcoord buffers every frame, you’ll have to accept a performance hit. The deformation of the grid in HexDefense is achieved by modifying the texcoords on a sub-segmented plane, and pushing that data to OpenGL through a ByteBuffer every frame is painfully slow.
  5. The iPhone’s OpenGL implementation is at least 2.5x faster, even on devices with half the processor speed. An iOS port of HexDefense is in progress, and the game runs twice as fast on an original iPod Touch as it does on a Nexus One. There are a lot of reasons for this, but drawing large textured quads seems disproportionately expensive on Android devices.
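Lessons 1 and 2 can be sketched together. This is a minimal, illustrative Android renderer, with hypothetical GameModel and GameRenderer classes standing in for the actual HexDefense code:

import android.opengl.GLSurfaceView;
import android.os.SystemClock;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

// GameModel stands in for the real game state; advance() moves creeps,
// projectiles, etc. forward by the given number of milliseconds.
class GameModel {
  void advance(long elapsedMillis) { /* update towers, creeps, projectiles */ }
}

class GameRenderer implements GLSurfaceView.Renderer {
  private final GameModel model = new GameModel();
  private long lastFrame = -1;

  public void onDrawFrame(GL10 gl) {
    long now = SystemClock.uptimeMillis();
    if (lastFrame < 0) lastFrame = now;
    // Lesson 2: advance by measured elapsed time, so a multitasking hiccup
    // skips time forward instead of slowing the game down.
    model.advance(now - lastFrame);
    lastFrame = now;
    // ... issue GL draw calls here ...
  }

  public void onSurfaceChanged(GL10 gl, int width, int height) { }
  public void onSurfaceCreated(GL10 gl, EGLConfig config) { }
}

// In the Activity:
//   GLSurfaceView view = new GLSurfaceView(this);
//   view.setRenderer(new GameRenderer());
//   view.setRenderMode(GLSurfaceView.RENDERMODE_CONTINUOUSLY);  // lesson 1
//   setContentView(view);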

Drill Down WebView Navigation

The next version of NetSketch will include a community browser, allowing you to view uploaded drawings, watch replays, and leave comments without leaving the app. When I started working on the community interface, I looked to other apps for inspiration. Almost every app I’ve used on the iPhone uses a sliding navigation scheme, giving you the feeling that you’re drilling down into content as you use the application. This interface is intuitive in a lot of contexts and dates back to the original iPod. The Facebook app uses a drill-down navigation bar to let you browse other people’s Facebook pages. This works well in the social-network space because you can drill down to look at information and then return to the first page quickly.

I decided to use a UINavigationBar and implement a similar drill-down interface for NetSketch. However, I didn’t want to create custom controllers for each page in the community. I wanted to be able to improve the community without updating the app, and didn’t want to write a communication layer to download and parse images and custom XML from the server.

Using a UIWebView seemed like the obvious choice. It could make retrieving content more efficient, and pages could be changed on the fly. With WebKit’s support for custom CSS, I could make the interface look realistic and comparable to a pile of custom-written views.

I quickly realized that it wasn’t all that easy to implement “drill down” behavior with a UIWebView. Early on, I ruled out the possibility of creating a mock navigation bar in HTML: since Safari on the iPhone doesn’t support the “position:fixed” CSS property, there was no good way to make the bar sit at the top of the screen. I decided that a native UINavigationBar would be more practical and provide a better user experience. However, UINavigationController was built to use separate controllers for each layer, and it doesn’t worry about freeing up memory when the stack of controllers gets big. I thought it was important that a maximum of eight UIWebViews be in memory at once, since Mobile Safari obeys that limitation and since pages could potentially be very large.

I tried several solutions, and finally created a custom DrillDownWebController class with a manually managed UINavigationBar to handle the interface. The class maintains a “stack” of DrillDownPages, with each page representing a single layer in the drill-down hierarchy. It can be a root-level controller, or it can be loaded into an existing UINavigationController; when it appears, it silently swaps its parent’s navigation bar with its own.

The DrillDownPage class is a wrapper for a UIWebView that acts as the view’s delegate and provides higher-level access to important properties of the page, such as its title. When the user taps a link, a new DrillDownPage object is created, and it begins loading the requested page in an invisible UIWebView. The controller displays an activity indicator in the top right corner of the navigation bar and slides in the new page when it finishes loading. All the other pages in the page “stack” are then notified that their position in the stack has changed.

The notification step is important because it allows the Page objects to release their web views once they sit deep enough in the stack, keeping the number of live UIWebViews under the eight-view cap, and to reload them lazily if the user navigates back.
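In a platform-neutral sketch (the real classes are Objective-C; everything below is illustrative, with Object standing in for UIWebView), the stack management looks roughly like this:

import java.util.ArrayList;
import java.util.List;

// Illustrative stand-ins for DrillDownWebController's page stack.
class DrillDownStack {
  static final int MAX_LIVE_VIEWS = 8;  // Mobile Safari's limit, per the text
  private final List<Page> pages = new ArrayList<Page>();

  void push(Page page) { pages.add(page); notifyPositions(); }

  void pop() {
    if (!pages.isEmpty()) pages.remove(pages.size() - 1);
    notifyPositions();
  }

  // Tell every page how deep it now sits; deep pages free their views.
  private void notifyPositions() {
    for (int i = 0; i < pages.size(); i++)
      pages.get(i).positionChanged(pages.size() - 1 - i);
  }
}

class Page {
  private Object webView = new Object();  // stand-in for the page's UIWebView

  void positionChanged(int depthFromTop) {
    if (depthFromTop >= DrillDownStack.MAX_LIVE_VIEWS) {
      webView = null;  // release the heavy view; keep the URL and title
    }
    else if (webView == null) {
      webView = new Object();  // user came back down the stack: reload
    }
  }
}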


Layers for iPad

I started working on Layers for iPad the morning Apple released the iPad SDK. I’d been looking forward to the release of the iPad for months, and it made sense to make a dedicated version of the app. I’d always had a difficult time drawing on the iPhone’s small 3.5″ screen. The iPad seemed like a perfect tool for professional artists: a digital sketchbook you could carry anywhere.

The iPad’s larger form factor also raised new interaction questions. It was too big to shake, so the “wrist flick”-triggered menus in the original Layers app were out. It had more room for contextual information: toolbars wouldn’t significantly limit the size of the canvas viewport. The iPad also created new opportunities for community. Its large screen was so good for showing off art that it seemed natural to let people post paintings and browse and comment on others’ work within the app. From the earliest phase of design, the focus was on building a community around the art people created on the iPad.

I spent about three weeks designing an interface for the iPad app, prototyping and fleshing out different screens of the app in Fireworks. Since I knew I would be doing the entire app myself, I didn’t bother creating storyboards using the individual interface mockups.

I like to write code during the design phase when I work on personal projects: a few hours of design, a few hours of code, repeat. It helps to step away from the designs every few hours. I started moving code from the iPhone version of Layers over to the iPad on day one, and a couple of big issues became obvious in the first few weeks:

1. The display/memory ratio on the iPad makes it hard to keep display-quality images in memory. The iPhone version of Layers kept six 512×512 OpenGL textures in memory, but the iPad version isn’t able to keep six 1024×1024 images in memory. Fail!

2. Saving the image for each layer took more time (roughly 4x more, since there were 4x as many pixels) and caused the app to be terminated early when sent to the background.

3. Some procedures in Layers, like undoing a “Shift Layer” operation, took too much memory when the images were scaled up.

Unfortunately, I didn’t have early access to iPad hardware, so I didn’t know that #1 and #3 would be issues until the app was actually live on the store. I got a ton of emails the week the iPad went on sale, and I skipped an entire week of class to fix the problems. The solution was a much smarter method of memory management that allows layers and parts of the undo stack to be paged to disk when the app receives memory warnings. It brought some major speed improvements to the iPad version, and I’ve since rolled it back into the iPhone version.
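The paging strategy can be sketched in a platform-neutral way (illustrative names and plain byte arrays; the real implementation manages OpenGL textures and an undo stack in Objective-C):

import java.io.DataInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Layer bitmaps stay in memory until a warning arrives; the least recently
// used ones are then written to disk and reloaded on demand.
class LayerCache {
  private final LinkedHashMap<String, byte[]> inMemory =
      new LinkedHashMap<String, byte[]>(16, 0.75f, true);  // access order = LRU

  void onMemoryWarning(File pageDir) throws IOException {
    int toEvict = inMemory.size() / 2;  // page out the older half
    Iterator<Map.Entry<String, byte[]>> it = inMemory.entrySet().iterator();
    while (toEvict-- > 0 && it.hasNext()) {
      Map.Entry<String, byte[]> e = it.next();
      FileOutputStream out = new FileOutputStream(new File(pageDir, e.getKey()));
      out.write(e.getValue());
      out.close();
      it.remove();
    }
  }

  byte[] load(String key, File pageDir) throws IOException {
    byte[] pixels = inMemory.get(key);
    if (pixels != null) return pixels;
    File f = new File(pageDir, key);  // page the layer back in from disk
    byte[] data = new byte[(int) f.length()];
    DataInputStream in = new DataInputStream(new FileInputStream(f));
    in.readFully(data);
    in.close();
    inMemory.put(key, data);
    return data;
  }
}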

Layers for iPad consists of 22,000 lines of Objective-C code, about 10,000 of which are shared with the iPhone version. The community component of the app is built in PHP and MySQL, and it uses HTML5 and Safari’s CSS 3 extensions to create a convincing, native-feeling experience within the app. My experience building the community website was very positive: CSS 3 support on the iPad is great, and I think web views are the best way to deliver native-looking, richly customized interfaces. You can even specify your own fonts. It’s beautifully fast. Had I chosen to deliver content via XML and render it all in custom UIViews, there’s no way I could have completed the app in three months.

In the first six months after its release, Layers for iPad was downloaded about 10,500 times. The Layers Gallery, the community built around content generated in the app, hosts thousands of paintings. The app was featured on the front page of the App Store during the week of April 28th and was reviewed by TUAW and MacWorld twice! It was featured in Incase’s 2010 advertising campaign and billed as the favorite painting app of Ron Radziner. The Layers for iPad website has also received recognition for its clean, refined design on OneExtraPixel, Designer Daily, and Web Design Tuts+.

After Layers was released for iPad, I worked with MEDLMobile in San Francisco to develop the app “Drawing Step by Step with Walter Foster,” using the Layers painting engine to emulate realistic pens and pencils. During its first week on the store, that app was ranked #1 in Productivity.

The Best WordPress Site Ever?

So I accidentally clicked an ad this afternoon and stumbled across Ecoki.com, an online community for eco-friendly folks. I hadn’t even scrolled halfway down their home page when I found myself thinking: “What was this built in?” Ecoki is quite possibly the best-designed WordPress site I’ve ever seen. I had to look at the page source to figure it out.

http://ecoki.com/

It looks like a completely custom template. Must have cost a fortune… Great look, though!

PNG compression in Android… (You have got to be kidding)

Over the last few weeks, I’ve been learning the Android SDK in an effort to bring Layers to Android devices. It’s going pretty well, but every once in a while I run into a truly WTF moment.

Tonight I was importing some images from the iPhone version of Layers when I noticed that Android visibly reduces the quality of PNG files at compile time. In images with fine gradients, smooth color transitions, or very light shadows, you tend to get banding. It almost looks like the images were being converted to GIFs.

I figured it’d be easy to fix. Go into Eclipse, right-click on everything, look in menus… repeat… profit! Unfortunately, it seems there’s no way to turn off compression for a specific file or choose a lossless conversion. However, I found this gem of a solution in the Android Widget Design Guidelines:

“To reduce banding when exporting a widget, apply the following Photoshop Add Noise setting to your graphic.”

Um… what now?

It turns out you can get around the compression algorithm by adding a small amount of pixel noise to your images. It’s mostly invisible, and it prevents the compression from producing obvious bands.
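If you’d rather not round-trip through Photoshop, the same noise can be baked in with a few lines of Java (file names here are placeholders):

import java.awt.image.BufferedImage;
import java.io.File;
import java.util.Random;
import javax.imageio.ImageIO;

// Adds a small per-pixel offset to every channel so the quantizer can't
// produce visible bands.
public class AddNoise {
  public static void main(String[] args) throws Exception {
    BufferedImage img = ImageIO.read(new File("widget.png"));
    Random rng = new Random();
    for (int y = 0; y < img.getHeight(); y++) {
      for (int x = 0; x < img.getWidth(); x++) {
        int argb = img.getRGB(x, y);
        int n = rng.nextInt(5) - 2;  // offset in [-2, 2], mostly invisible
        int r = clamp(((argb >> 16) & 0xFF) + n);
        int g = clamp(((argb >> 8) & 0xFF) + n);
        int b = clamp((argb & 0xFF) + n);
        img.setRGB(x, y, (argb & 0xFF000000) | (r << 16) | (g << 8) | b);
      }
    }
    ImageIO.write(img, "png", new File("widget-noisy.png"));
  }

  static int clamp(int v) { return Math.max(0, Math.min(255, v)); }
}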

It’s an instant fix, but I almost laughed out loud. Seriously? This is the documented solution? *sigh*.

iPhone Development Tutorial