openFrameworks

Jul 20 11:02

The Wombats "Techno Fan"

I worked with the Found Collective on this Wombats music video. I designed and developed software (using C++ / openFrameworks) to process live footage of the band. All images seen below were generated by this software.

Background 

In 2010 the label had originally commissioned someone else for the video (I'm not sure who), who filmed and edited a live performance of the band. The label (or band, or commissioner) then got in touch with Barney Steel from the Found Collective ( www.thefoundcollective.com ) to "spice up the footage", having seen the Depeche Mode "Fragile Tension" video which we had worked on together ( http://www.msavisuals.com/depeche_mode_fragile_tension ). Barney in turn got in touch with me to create an app / system / workflow which could "spice up the footage". In short, we received a locked-down edit of band footage and were tasked with "applying a process and making it pretty".

 

Workflow 

We received a locked edit of the band performing the song live. This was broken down shot by shot, and various layers were rotoscoped, separated (e.g. foreground, background, singer, drummer etc.) and rendered out as quicktime files. (This was all done the traditional way with AfterEffects; no custom software yet.) Each of these shots & layers was then fed individually into my custom software, which analyzes the video file and, based on dozens of parameters, outputs a new sequence (as a sequence of png's). The analysis runs at almost realtime (depending on input video size), and the user can play with the dozens of parameters in realtime, while the app is running and even while it is rendering the processed images to disk. So all the animations you see in the video were 'performed' in realtime - no keyframes used. Lots of different 'looks' (presets) were created and applied to the different shots & layers. Each processed sequence was rendered to disk, then re-composited and edited back together with Final Cut and AfterEffects to produce the final video.

 

Processing

This isn't meant as a tutorial, just a quick, high-level overview of the techniques used. There are a few main phases in the processing of the footage:

  1. analyze the footage and find some interesting points 
  2. create triangles from those interesting points 
  3. display those triangles
  4. save image sequence to disk
  5. profit

Phase #1 is where all the computer vision (opencv) stuff happens. I used a variety of techniques. As you can see from the GUI screenshots, the first step is a bit of pre-processing: blur (cvSmooth), bottom threshold (clamp anything under a certain brightness to black - cvThreshold), top threshold (clamp anything above a certain brightness to white - cvThreshold), adaptive threshold (apply a localized binary threshold, clamping to white or black depending on neighbours only - cvAdaptiveThreshold), erode (shrink or 'thin' bright pixels - cvErode), dilate (expand or 'thicken' bright pixels - cvDilate). Not all of these are always used; different shots and looks require different pre-processing.
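For reference, a minimal sketch of that pre-processing chain using the legacy OpenCV C API (every constant below was a GUI slider in the real app; the values here are just placeholders):

#include <opencv/cv.h>
 
// src and dst are single-channel (grayscale) IplImages of the same size.
void preprocess(IplImage* src, IplImage* dst) {
    IplImage* mask = cvCreateImage(cvGetSize(src), IPL_DEPTH_8U, 1);
 
    cvSmooth(src, dst, CV_GAUSSIAN, 9);                     // blur
    cvThreshold(dst, dst, 40, 255, CV_THRESH_TOZERO);       // bottom threshold: below 40 -> black
    cvThreshold(dst, mask, 200, 255, CV_THRESH_BINARY);     // top threshold: find pixels above 200...
    cvSet(dst, cvScalar(255), mask);                        // ...and clamp them to white
    cvAdaptiveThreshold(dst, dst, 255, CV_ADAPTIVE_THRESH_MEAN_C,
                        CV_THRESH_BINARY, 11, 5);           // localized binary threshold
    cvErode(dst, dst, NULL, 1);                             // shrink / 'thin' bright pixels
    cvDilate(dst, dst, NULL, 1);                            // expand / 'thicken' bright pixels
 
    cvReleaseImage(&mask);
}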

Next, the first method of finding interesting points was 'finding contours' (cvFindContours) - or 'finding blobs' as it's sometimes known. This procedure basically allows you to find the 'edges' in the image and return them as a sequence of points - as opposed to applying, say, just a canny or laplacian edge detector, which will also find the edges but return a B&W image with a black background and white edges. The latter (canny, laplacian etc.) find the edges *visually*, while cvFindContours goes one step further and returns the edge *data* in a computer-readable way, i.e. an array of points, so you can parse through this array in your code and see where the edges are. (cvFindContours also returns other information about the 'blobs', like area, centroid etc., but that is irrelevant for this application.) Now that we have the edge data, can we triangulate it? Not yet, because it's way too dense - a coordinate for every pixel - so some simplification is in order. Again I used a number of techniques for this. A very crude method is to just omit every n'th point. Another method is to omit a point if the normalized vector leading up to it from the previous point, and the normalized vector leading away from it to the next point, have a dot product greater than a certain threshold (that threshold is the cosine of the minimum angle you desire). In English: omit a point if it lies on a relatively straight line. Or: if we have points A, B and C, omit point B if: normalize(B-A) . normalize(C-B) > cos(angle threshold). Another method is to resample along the edges at fixed distance intervals. For this I use my own MSA::Interpolator class ( http://msavisuals.com/msainterpolator ). (I think there may have been a few more techniques, but I cannot remember as it's been a while since I wrote this app!)
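As a concrete example, here's a minimal sketch of the omit-points-on-straight-lines idea (framework-free C++; the real app combined this with nth-point skipping and MSA::Interpolator resampling):

#include <cmath>
#include <vector>
 
struct Pt { float x, y; };
 
static Pt normalized(Pt v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y);
    if (len == 0) return Pt{0, 0};
    return Pt{v.x / len, v.y / len};
}
 
// Keep a point only if the contour actually turns there: drop B when
// normalize(B-A) . normalize(C-B) > cos(minAngle), i.e. when A, B, C are
// nearly collinear.
std::vector<Pt> simplifyByAngle(const std::vector<Pt>& in, float minAngleRad) {
    if (in.size() < 3) return in;
    const float cosThresh = std::cos(minAngleRad);
    std::vector<Pt> out;
    out.push_back(in.front());
    for (size_t i = 1; i + 1 < in.size(); i++) {
        Pt a = out.back(), b = in[i], c = in[i + 1];
        Pt ab = normalized(Pt{b.x - a.x, b.y - a.y});   // direction into b
        Pt bc = normalized(Pt{c.x - b.x, c.y - b.y});   // direction out of b
        if (ab.x * bc.x + ab.y * bc.y <= cosThresh)     // enough of a turn?
            out.push_back(b);                           // keep the corner
    }
    out.push_back(in.back());
    return out;
}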

Independently of the cvFindContours point-finding method, I also looked at using 'corner detection' (feature detection / feature extraction). For this I looked into three algorithms: Shi-Tomasi and Harris (both of which are implemented in opencv's cvGoodFeaturesToTrack function) and SURF (using the OpenSURF library). Of these three, Shi-Tomasi gave the best visual results. I wanted a relatively large set of points that would not flicker too much (given a relatively low 'tracking quality'). Harris was painfully slow, whereas SURF would just return too few features; adjusting the parameters to return more features just made the feature tracking too unstable. Once I had a set of points returned by Shi-Tomasi (cvGoodFeaturesToTrack), I tracked these with sparse Lucas-Kanade optical flow (cvCalcOpticalFlowPyrLK) and omitted any stray points. Again, a few parameters to simplify, set thresholds etc.
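The app used opencv's C API as named above; the same find-and-track flow in the newer C++ API looks roughly like this (the corner count, quality and distance values are placeholders you'd tune for point density vs. flicker):

#include <opencv2/imgproc.hpp>
#include <opencv2/video/tracking.hpp>
#include <vector>
 
// Find Shi-Tomasi corners in the previous frame, then track them into the
// current frame with sparse pyramidal Lucas-Kanade, dropping lost points.
void trackFeatures(const cv::Mat& prevGray, const cv::Mat& currGray,
                   std::vector<cv::Point2f>& points) {
    cv::goodFeaturesToTrack(prevGray, points,
                            400,       // max corners: higher = denser triangulation
                            0.005,     // quality level: lower = more (flickerier) points
                            8);        // min distance between corners, in pixels
 
    std::vector<cv::Point2f> tracked;
    std::vector<unsigned char> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(prevGray, currGray, points, tracked, status, err);
 
    std::vector<cv::Point2f> good;
    for (size_t i = 0; i < tracked.size(); i++)
        if (status[i]) good.push_back(tracked[i]);   // omit stray / lost points
    points.swap(good);
}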

Phase #2 is quite straightforward. I used "delaunay triangulation" (as many people have pointed out on twitter, flickr, vimeo). This is a process for creating triangles from a set of arbitrary points on a plane ( see http://en.wikipedia.org/wiki/Delaunay_triangulation for more info ). For this I used the 'Triangle' library by Jonathan Shewchuk; I just feed it the set of points obtained in Phase #1, and it outputs a set of triangle data.
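Feeding points to Triangle looks roughly like this (a sketch: 'z' requests zero-based indices, 'Q' suppresses console output; the REAL/VOID defines follow Triangle's README, and freeing the buffers Triangle allocates in 'out' is omitted for brevity):

#include <cstdlib>
#include <cstring>
#include <vector>
 
#define REAL double        // per Triangle's README: define REAL before including
#define VOID int
extern "C" {
#include "triangle.h"      // Jonathan Shewchuk's Triangle library
}
 
struct Pt2 { float x, y; };
 
// Delaunay-triangulate a 2D point set; returns point indices, 3 per triangle.
std::vector<int> triangulatePoints(const std::vector<Pt2>& pts) {
    struct triangulateio in, out;
    std::memset(&in, 0, sizeof(in));
    std::memset(&out, 0, sizeof(out));
 
    in.numberofpoints = (int)pts.size();
    in.pointlist = (REAL*)std::malloc(pts.size() * 2 * sizeof(REAL));
    for (size_t i = 0; i < pts.size(); i++) {
        in.pointlist[2 * i]     = pts[i].x;
        in.pointlist[2 * i + 1] = pts[i].y;
    }
 
    triangulate((char*)"zQ", &in, &out, NULL);   // plain Delaunay of the point set
 
    std::vector<int> tris(out.trianglelist,
                          out.trianglelist + out.numberoftriangles * 3);
    std::free(in.pointlist);
    return tris;
}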

Phase #3 is also quite straightforward. As you can see from the GUI shots below, a bunch of options for triangle outline (wireframe) thickness and transparency, triangle fill transparency, original footage transparency etc. allowed customization of the final look. Colors for the triangles were picked as the average color of the original footage underneath each triangle. There were also a few more display options for how to join the triangulation together, pin it to the corners etc.
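Picking that average color can be as simple as scanning the triangle's bounding box and testing each pixel against the triangle's edges - a sketch of the idea (the real app may well have sampled more cheaply):

#include <algorithm>
 
// Signed area test: which side of edge pq is point r on?
static float edgeSide(float px, float py, float qx, float qy, float rx, float ry) {
    return (qx - px) * (ry - py) - (qy - py) * (rx - px);
}
 
// Average the RGB pixels of 'img' (tightly packed, 3 bytes per pixel)
// covered by triangle abc.
void averageColorUnderTriangle(const unsigned char* img, int w, int h,
                               float ax, float ay, float bx, float by,
                               float cx, float cy, unsigned char outRgb[3]) {
    int x0 = std::max(0,     (int)std::min(ax, std::min(bx, cx)));
    int x1 = std::min(w - 1, (int)std::max(ax, std::max(bx, cx)));
    int y0 = std::max(0,     (int)std::min(ay, std::min(by, cy)));
    int y1 = std::min(h - 1, (int)std::max(ay, std::max(by, cy)));
    long sum[3] = {0, 0, 0}, count = 0;
    for (int y = y0; y <= y1; y++) {
        for (int x = x0; x <= x1; x++) {
            float s0 = edgeSide(ax, ay, bx, by, (float)x, (float)y);
            float s1 = edgeSide(bx, by, cx, cy, (float)x, (float)y);
            float s2 = edgeSide(cx, cy, ax, ay, (float)x, (float)y);
            bool inside = (s0 >= 0 && s1 >= 0 && s2 >= 0) ||
                          (s0 <= 0 && s1 <= 0 && s2 <= 0);   // either winding
            if (!inside) continue;
            const unsigned char* p = img + (y * w + x) * 3;
            sum[0] += p[0]; sum[1] += p[1]; sum[2] += p[2]; count++;
        }
    }
    for (int i = 0; i < 3; i++)
        outRgb[i] = count ? (unsigned char)(sum[i] / count) : 0;
}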

Phase #4: the app allowed scrubbing, pausing and playback of the video while processing in (almost) realtime (it could have been realtime if optimizations were pushed, but it didn't need to be, so I didn't bother). The processed images were always output to the screen (so you can see what you're doing), but also optionally written to disk as the video was playing and new frames were processed. This allowed us to play with the parameters and adjust them while the video was playing and being saved to disk - i.e. animate the parameters in realtime and play it like a visual instrument.
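Conceptually the record-while-playing part boils down to something like this in openFrameworks (a sketch: processedFbo, bRecording and frameCounter are hypothetical names, not the app's actual code):

// In draw(): always preview on screen; when recording, also write the
// frame that was just processed to disk as the next numbered PNG.
void testApp::draw() {
    processedFbo.draw(0, 0);                 // show the processed frame
 
    if (bRecording) {
        ofImage frame;
        frame.grabScreen(0, 0, ofGetWidth(), ofGetHeight());
        char filename[64];
        sprintf(filename, "frames/frame_%05d.png", frameCounter++);
        frame.saveImage(filename);           // disk I/O is what costs the fps
    }
}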

The software was written in C++ with openFrameworks http://www.openframeworks.cc

libraries used (all mentioned above):

  • OpenCV (pre-processing, contour finding, Shi-Tomasi corners, Lucas-Kanade tracking)
  • OpenSURF (SURF feature detection, evaluated)
  • Triangle by Jonathan Shewchuk (delaunay triangulation)
  • MSA::Interpolator (resampling along contours)


[Image gallery: processed video stills and GUI screenshots]

Jul 15 19:11

iSteveJobs

In case you've been living under a rock for the past week, this happened recently:
http://mashable.com/2011/07/07/secret-service-apple-store-art-2/
http://www.bbc.co.uk/news/technology-14080438
http://fffff.at/people-staring-at-computers/
http://eyeteeth.blogspot.com/2011/07/feds-visit-artist-behind-people-sta...
http://en.wikipedia.org/wiki/People_Staring_at_Computers
http://www.google.com/search?q=%22people+staring+at+computers%22

(Cease & Desist letters may have affected the content on these sites since posting).

Inspired by the events and the FAT Lab censor, I knocked up this project. It slaps a Steve Jobs mask on any face it finds in a live webcam feed.

Feel free to install it on Apple Stores around the world. It should be legal (though don't quote me on that).

Download the source and mac binary at https://github.com/memo/iSteveJobs


Mar 17 01:14

Tweak, tweak, tweak. 41 pages of GUI, or "How I learned to stop worrying and love the control freak within"

I often tell people that I spend 10% of my time designing + coding, and the rest of my time number tweaking. The actual ratio may not be totally accurate, but I do spend an awful lot of time playing with sliders. Usually getting the exact behaviour that I want is simply a balancing act between lots (and lots (and lots (and lots))) of parameters. Getting that detail right is absolutely crucial to me, the smallest change in a few numbers can really make or break the look, feel and experience. If you don't believe me, try 'Just Six Numbers' by Sir Martin Rees, Astronomer Royal.

So as an example I thought I'd post the GUI shots for one of my recent projects - interactive building projections for Google Chrome, a collaboration between my company (MSA Visuals), Flourish, Seeper and Bluman Associates. MSA Visuals provided the interactive content, software and hardware.

In this particular case, the projections were run by a dual-head Mac Pro (and a second for backup). One DVI output went to the video processors/projectors, the other DVI output to a monitor where I could preview the final output content, input camera feeds, see individual content layers and tweak a few thousand parameters - through 41 pages of GUI! (A rough sketch of what declaring pages like these looks like in code follows below.) To quickly summarize some of the duties carried out by the modules seen in the GUI:

  • configure layout for mapping onto building architecture and background anim parameters
  • setup lighting animation parameters
  • BW camera input options, warping, tracking, optical flow, contours etc.
  • color camera input options
  • contour processing, tip finding, tip tracking etc.
  • screen saver / timeout options
  • fluid sim settings
  • physics and collision settings
  • post processing effects settings (per layer)
  • tons of other display, animation and behaviour settings

(This installation uses a BW IR camera and a color camera. When taking these screenshots the color camera wasn't connected, hence a lot of black screens on some pages.)
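To give a flavour of what declaring pages like these looks like, here's a hypothetical snippet in the style of ofxSimpleGuiToo (method names from memory and variables made up for illustration - treat it as approximate, not the exact API):

gui.addPage("fluid sim");
gui.addSlider("viscosity", fluidViscosity, 0.0f, 0.01f);
gui.addSlider("fade speed", fadeSpeed, 0.0f, 0.1f);
gui.addToggle("enable fluid", bFluidEnabled);
 
gui.addPage("camera input (BW)");
gui.addSlider("optical flow scale", flowScale, 0.0f, 10.0f);
gui.addToggle("show contours", bShowContours);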

Check out the GUI screen grabs below, or click here to see them fullscreen (where you can read all the text)

Feb 19 00:16

Speed Project: RESDELET 2011

Back in the late 80s/early 90s I was very much into computer viruses - the harmless, fun kind. To a young boy, no doubt the concept of an invisible, mischievous, self-replicating little program was very inviting - and a great technical + creative challenge.

The very first virus I wrote was for an 8088, and it was called RESDELET.EXE. This was back in the age of DOS, before Windows. In those days, to 'multitask' - i.e. keep your own program running in the background while the user interacted with another application in the foreground - was a dodgy task. It involved hooking into interrupt vectors and keeping your program in memory using the good old TSR (Terminate and Stay Resident) interrupt call 27h.

So RESDELET.EXE would hang about harmlessly in memory while you worked on other things - e.g. typing up a spreadsheet in Lotus 123 - then when you pressed the DELETE key on the keyboard, the characters on the screen would start falling down - there and then inside Lotus 123 or whatever application you were running.

RESDELET 2011 is an adaptation of the original. It hangs about in the background, and when you press the DELETE or BACKSPACE key, whatever letters you have on your screen start pouring down - with a bit of added mouse interactivity. This version does *not* self-replicate - it is *not* a virus, just a bit of harmless fun.

Source code coming real soon (as soon as I figure out how to add a git repo inside another repo)

This is a speed project developed in just over half a day, so use at your own risk!

Sorry for the flickering; there was a conflict with the screen recording application I couldn't resolve. Normally there is no flicker - it's as smooth as silk.

Nov 14 19:46

First tests with Kinect - gestural drawing in 3D

Yes I'm playing with hacking Kinect :)

The XBox Kinect is connected to my Macbook Pro, and I wrote a little demo to analyse the depth map for gestural 3D interaction. One hand to draw in 3D, two hands to rotate the view. Very rough, early prototype.

You can download the source for the above demo (GPL v2) at
https://github.com/memo/ofxKinect-demos

Within a few hours of receiving his Kinect, Hector Martin released source code to read the RGB image and depth map from the device on Linux.
http://git.marcansoft.com/?p=libfreenect.git

Within a few hours of that, Theo Watson ported it to Mac OSX and released his source, which - with the help of others - became an openFrameworks addon pretty quickly.
https://github.com/ofTheo/ofxKinect

Now demos are popping up all over the world as people are trying to understand the capabilities of this device and how it will change Human Computer Interaction on a consumer / mass level.

Nov 05 15:38

OpenCL Particles at OKGo's Design Miami 2009 gig

For last year's Design Miami (2009) I created realtime visuals for an OKGo performance where they were using guitars modded by Moritz Waldemeyer, shooting lasers out of the headstock. I created software to track the laser beams and project visuals onto the wall where they hit.

This video is an opensource demo - written with openframeworks - of one of the visualizations from that show, using an OpenCL particle system and the macbook multitouch pad to simulate the laser hit points. The demo is audio reactive and is controlled by my fingers (more than one) on the macbook multitouch pad (each 'attractor' is a finger on the multitouch pad). It runs at a solid 60fps on a Macbook Pro, but unfortunately the screen capture killed the fps - and of course half the particles aren't even visible because of the video compression.

The app is written to use the MacbookPro multitouch pad, so will not compile for platforms other than OSX, but by simply removing the multitouch pad sections (and hooking something else in), the rest should compile and run fine (assuming you have an OpenCL compatible card and implementation on your system).

Uses ofxMultiTouchPad by Jens Alexander Ewald with code from Hans-Christoph Steiner and Steike.
ofxMSAfft uses FFT code from Dominic Mazzoni and Don Cross.

Source code (for OF 0062) is included and includes all necessary non-OFcore addons (MSACore, MSAOpenCL, MSAPingPong, ofxMSAFFT, ofxMSAInteractiveObject, ofxSimpleGuiToo, ofxFBOTexture, ofxMultiTouchPad, ofxShader) - but bear in mind some of these addons may not be latest version (ofxFBOTexture, ofxMultiTouchPad, ofxShader), and are included for compatibility with this demo which was written last year.

More information on the project at
http://msavisuals.com/okgo_fendi_design_miami_show

Most of the magic is happening in the opencl kernel, so here it is (or download the full zip with xcode project at the bottom of this page)

typedef struct {
    float2 vel;
    float mass;
    float life;
} Particle;
 
 
typedef struct {
    float2 pos;
    float spread;
    float attractForce;
    float waveAmp;
    float waveFreq;
} Node;
 
#define kMaxParticles       512*512
 
#define kArg_particles          0
#define kArg_posBuffer          1
#define kArg_colBuffer          2
#define kArg_nodes              3
#define kArg_numNodes           4
#define kArg_color              5
#define kArg_colorTaper         6
#define kArg_momentum           7
#define kArg_dieSpeed           8
#define kArg_time               9
#define kArg_wavePosMult        10
#define kArg_waveVelMult        11
#define kArg_massMin            12
 
 
float rand(float2 co) {
    float i;
    return fabs(fract(sin(dot(co.xy ,make_float2(12.9898f, 78.233f))) * 43758.5453f, &i));
}
 
 
__kernel void update(__global Particle* particles,      //0
                     __global float2* posBuffer,        //1
                     __global float4 *colBuffer,        //2
                     __global Node *nodes,              //3
                     const int numNodes,                //4
                     const float4 color,                //5
                     const float colorTaper,            //6
                     const float momentum,              //7
                     const float dieSpeed,              //8
                     const float time,                  //9
                     const float wavePosMult,           //10
                     const float waveVelMult,           //11
                     const float massMin                //12
                     ) {                
 
    int     id                  = get_global_id(0);
    __global Particle   *p      = &particles[id];
    float2  pos                 = posBuffer[id];
 
    int     birthNodeId         = id % numNodes;
    float2  vecFromBirthNode    = pos - nodes[birthNodeId].pos;                         // vector from birth node to particle
    float   distToBirthNode     = fast_length(vecFromBirthNode);                            // distance from birth node to particle
 
    int     targetNodeId        = (id % 2 == 0) ? (id+1) % numNodes : (id + numNodes-1) % numNodes;
    float2  vecFromTargetNode   = pos - nodes[targetNodeId].pos;                        // vector from target node to particle
    float   distToTargetNode    = fast_length(vecFromTargetNode);                       // distance from target node to particle
 
    float2  diffBetweenNodes    = nodes[targetNodeId].pos - nodes[birthNodeId].pos;     // vector between nodes (from birth to target)
    float2  normBetweenNodes    = fast_normalize(diffBetweenNodes);                     // normalized vector between nodes (from birth to target)
    float   distBetweenNodes    = fast_length(diffBetweenNodes);                        // distance between nodes (from birth to target)
 
    float   dotTargetNode       = fmax(0.0f, dot(vecFromTargetNode, -normBetweenNodes));
    float   dotBirthNode        = fmax(0.0f, dot(vecFromBirthNode, normBetweenNodes));
    float   distRatio           = fmin(1.0f, fmin(dotTargetNode, dotBirthNode) / (distBetweenNodes * 0.5f));
 
    // add attraction to the target node
    p->vel                      -= vecFromTargetNode * nodes[targetNodeId].attractForce / (distToTargetNode + 1.0f) * p->mass;
 
    // add wave
    float2 waveVel              = make_float2(-normBetweenNodes.y, normBetweenNodes.x) * sin(time + 10.0f * 3.1415926f * distRatio * nodes[birthNodeId].waveFreq);
    float2 sideways             = nodes[birthNodeId].waveAmp * waveVel * distRatio * p->mass;
    posBuffer[id]               += sideways * wavePosMult;
    p->vel                      += sideways * waveVelMult * dotTargetNode / (distBetweenNodes + 1);
 
    // set color
    float invLife = 1.0f - p->life;
    colBuffer[id] = color * (1.0f - invLife * invLife * invLife);// * sqrt(p->life);    // fade with life
 
    // update life; respawn particles that died or arrived at the target
    p->life -= dieSpeed;
    if(p->life < 0.0f || distToTargetNode < 1.0f) {
        posBuffer[id] = posBuffer[id + kMaxParticles] = nodes[birthNodeId].pos;
        float a = rand(p->vel) * 3.1415926f * 30.0f;
        float r = rand(pos);
        p->vel = make_float2(cos(a), sin(a)) * (nodes[birthNodeId].spread * r * r * r);
        p->life = 1.0f;
//      p->mass = mix(massMin, 1.0f, r);
    } else {
        posBuffer[id+kMaxParticles] = pos;
        colBuffer[id+kMaxParticles] = colBuffer[id] * (1.0f - colorTaper);  
 
        posBuffer[id] += p->vel;
        p->vel *= momentum;
    }
}

Oct 30 19:57

ofxQuartzComposition and ofxCocoa for openFrameworks

Two new addons for openFrameworks. Actually one is an update, and a major refactor - so much so that I've changed its name: ofxCocoa (was ofxMacOSX) is a glut-replacement addon for openframeworks allowing native integration with the opengl and cocoa windowing system, removing the dependency on glut. It has a bunch of features to control window and opengl view creation, either programmatically or via InterfaceBuilder. http://github.com/memo/msalibs/tree/master/ofxCocoa/

ofxQuartzComposition is an addon for openFrameworks to manage Quartz Compositions (.qtz files).
http://github.com/memo/msalibs/tree/master/ofxQuartzComposition/

Currently there is support for:

  • loading multiple QTZ files inside an openframeworks application.
  • rendering to screen (use FBO to render offscreen)
  • passing input parameters (float, int, string, bool etc) to the QTZ input ports
  • reading ports (input and output) from the QTZ (float, int, string, bool etc)

Todo:

  • passing Images as ofTextures to and from the composition (you currently can pass images as QC Images, but you would have to manually convert that to ofTexture to interface with openFrameworks)

 

How is this different to Vade's ofxQCPlugin (http://code.google.com/p/ofxqcplugin/)?
ofxQuartzComposition is the opposite of ofxQCPlugin. ofxQCPlugin allows you to build your openframeworks application as a QCPlugin to run inside QC. ofxQuartzComposition allows you to run and control your Quartz Composition (.qtz) inside an openframeworks application.


Here, two Quartz Compositions are loaded and mixed with openframeworks graphics in an openframeworks app. The slider at the bottom adjusts the width of the rectangle drawn by openframeworks (ofRect); the 6 sliders on the floating panel send their values directly to the composition while it's running inside openframeworks.

Sep 22 18:13

Impromptu, improvised performance with Body Paint at le Cube, Paris.

My Body Paint installation is currently being exhibited at le Cube festival in Paris. On the opening night, two complete strangers - members of the public - broke into an impromptu, improvised performance with the installation. Mind-blowing and truly humbling. Thank you. My work here is done.

Sep 15 18:07

"Who am I?" @ Science Museum

MSA Visuals' Memo Akten was commissioned by All of Us to provide consultancy on the design and development of new exhibits for the Science Museum's relaunch of their "Who Am I?" gallery, as well as to design and develop one of the installations. MSAV created the 'Threshold' installation, situated at the entrance of the gallery: a playful, interactive environment inviting visitors to engage with the piece whilst learning about the gallery and its key messages.

http://www.wired.co.uk/news/archive/2010-06/25/science-museum-revamps-who-am-i-gallery

Sep 08 16:06

"Waves" UK School Games 2010 opening ceremony

MSA Visuals' Memo Akten was commissioned by Modular to create interactive visuals for the UK School Games 2010 opening ceremony at Gateshead stadium in Newcastle. The project used an array of cameras to convert the entire runway into an interactive space for the opening parade of 1600+ participants walking down the track, as well as a visual performance to accompany a breakdance show by the Bad Taste Cru. All of the motion tracking and visuals were created using custom software written in C++ with openFrameworks, combining visual elements created in Quartz Composer. Using custom mapping software, the visuals were mapped onto and displayed across a 30m LED wall alongside the track. The event was curated and produced by Modular Projects, for commissioners Newcastle Gateshead Initiative.

Aug 06 18:14

Announcing Webcam Piano 2.0

Jul 13 12:14

MSALibs for openFrameworks and Cinder

I am retiring my google code repo for openframeworks addons in favor of github. You can now find my addons at http://github.com/memo/msalibs . Actually I've taken a leaf out of Karsten Schmidt's book and registered http://msalibs.org too. For now it just forwards to the github repo, but maybe soon it will be its own site. (Note you can download the entire thing as a single zip if you don't want to get your hands dirty with git - thank you github!)

There are some pretty big changes in all of these versions. Some of you might have seen that the Cinder guys ported MSAFluid to Cinder and they got a 100% speed boost! Well it's true, they've made some hardcore mods to the FluidSolver allowing it to run exactly 2x faster. Now I've ported it back to OF, so now we have the 100% speed boost in OF too. In fact carrying on their optimization concepts I managed to squeeze another 20% out of it, so now it's 120% faster! (And these mods also lend themselves to further SSE or GPU optimizations too).

To prevent this porting back and forth between Cinder and OF I created a system introducing an MSACore addon which simply maps some basic types and functions and forms a tiny bridge (with no or negligible overheads) between my addons and OF or Cinder (or potentially other C/C++ frameworks or hosts). MSACore is really tiny and not intended to allow full OF code to run in Cinder or vice versa, but just the bare essentials to get my classes which mainly do data processing (such as Physics, Fluids, Spline, Shape3D etc. - hopefully OpenCL soon) to run on both without modifying anything.

So now any improvement made to the addon by one community will benefit the other. Feeling the love :) ?

Some boring tech notes: Everything is now inside the MSA:: namespace instead of having an MSA prefix. I.e. MSA::FluidSolver instead of MSAFluidSolver. So just by adding using namespace MSA; at the top of your source file you can just use FluidSolver, Physics, Shape3D, Spline etc. without the MSA prefix (or just carry on using MSA:: if you want). I think it aids readability a lot while still preventing name clashes.
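For example (the header name below is illustrative, not necessarily the actual file):

#include "MSAFluid.h"
 
// Old naming (pre-refactor):
// MSAFluidSolver solver;
 
// New naming, fully qualified:
MSA::FluidSolver solver;
 
// Or pull the namespace in once, then drop the prefix everywhere:
using namespace MSA;
FluidSolver anotherSolver;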

There are more changes in each addon so check the changelog in each for more info. e.g. MSA::Physics now has a MSA::Physics::World which is where you add your particles and springs (instead of directly to physics), and the MSA::Fluid has an improved API which is more consistent with itself. So backwards compatibility will be broken a bit, but a very quick search and replace should be able to fix it. Look at the examples.

P.S. this is the first version of this MSACore system (more like 0.001) so it may change or there may be issues. If you are nearing a deadline and using one of these addons, I'd suggest you make a backup of all of your source (including your copy of MSAxxxx addon) before updating!

Any suggestions, feedback, tips, forks welcome.

Jul 09 00:29

ofxWebSimpleGuiToo for openFrameworks (call for JQuery gurus!)

ofxWebSimpleGuiToo is an amazingly useful new addon for openFrameworks from Marek Bereza. With one line of code it allows you to create a webserver from within your OF app and serve your ofxSimpleGuiToo gui as an html/javascript page, allowing remote clients to control your OF app from a regular web browser. These can be another PC or Mac, an Android device, an iPod Touch, iPhone, iPad etc. - you name it. No specific app is needed on the client, just a simple web browser. In the photo below you can see the OF app running on the laptop sending the gui structure to an iPad and an iPhone - both running Safari - which in turn can control the OF app.

There is still more work to be done - any JavaScript / jQuery gurus out there willing to improve the client end are encouraged to come on board and finish it off!

If you're interested please get in touch

More information on ofxWebSimpleGuiToo and download can be found on Marek's google code
http://code.google.com/p/ofxmarek/wiki/ofxWebSimpleGuiTooWebService
(you will also need his ofxWebServer).

and you will need the latest ofxSimpleGuiToo from my github
http://github.com/memo/msalibs
(from here you will also need ofxMSAInteractiveObject)

Jun 23 11:16

BLAZE visuals

Earlier this year MSA Visuals directed and produced the visuals for the West End dance show BLAZE. Below is a short snippet of some of the visuals. You can see more information at www.msavisuals.com/blaze

Feb 13 17:42

Vertex Arrays, VBO's and Point Sprites with C/C++ in openFrameworks 006

A while ago I posted an example and source code for using Vertex Arrays, Vertex Buffer Objects and Point Sprites in openFrameworks. That was for openFrameworks 005 and needed some mods to the core and other hacks to get it to do what we needed. In the current version of openframeworks (006+) a lot of the required functionality has been moved to the core, so we no longer need the extra classes MSAImage and MSATexture, or the core hacks. The updated example is attached and can be downloaded below.

P.S. An example of a particle system using OpenCL for even more performance (updating the particles on the GPU) can be found here.

 

 

Feb 07 17:42

Midi Time Code to SMPTE conversion (C++ / openframeworks)

I've recently needed to work with Midi Time Code (MTC) and could not find any code to parse the midi messages and construct an SMPTE timecode. The closest I got was finding this documentation (which is pretty good) on how the data is encoded in the bits of 8 bytes sent over 2 SMPTE frames, each byte sent at quarter-frame intervals. From that I wrote the code below (I've only really tested the 25 fps case). The code is from an openframeworks application but should work with any C/C++ code.

P.S. Some info on bits, bytes and nibbles here.

class ofxMidiEventArgs: public ofEventArgs{
public:
    int     port;
    int     channel;
    int     status;
    int     byteOne;
    int     byteTwo;
    double  timestamp;
};
 
#define kMTCFrames      0
#define kMTCSeconds     1
#define kMTCMinutes     2
#define kMTCHours       3
 
// callback for when a midi message is received
void newMidiMessage(ofxMidiEventArgs& eventArgs){
 
    if(eventArgs.status == 240) {                       // if this is a MTC message...
        // these static variables could be globals, or class properties etc.
        static int times[4]     = {0, 0, 0, 0};                 // this static buffer will hold our 4 time components (frames, seconds, minutes, hours)
        static const char *szType = "";                         // SMPTE type as string (24fps, 25fps, 30fps drop-frame, 30fps)
        static int numFrames    = 100;                          // number of frames per second (start off with arbitrary high number until we receive it)
 
        int messageIndex        = eventArgs.byteOne >> 4;       // the high nibble: which quarter message is this (0...7).
        int value               = eventArgs.byteOne & 0x0F;     // the low nibble: value
        int timeIndex           = messageIndex>>1;              // which time component (frames, seconds, minutes or hours) is this
        bool bNewFrame          = messageIndex % 4 == 0;
 
 
        // the time encoded in the MTC is 1 frame behind by the time we have received a new frame, so adjust accordingly
        if(bNewFrame) {
            times[kMTCFrames]++;
            if(times[kMTCFrames] >= numFrames) {
                times[kMTCFrames] %= numFrames;
                times[kMTCSeconds]++;
                if(times[kMTCSeconds] >= 60) {
                    times[kMTCSeconds] %= 60;
                    times[kMTCMinutes]++;
                    if(times[kMTCMinutes] >= 60) {
                        times[kMTCMinutes] %= 60;
                        times[kMTCHours]++;
                    }
                }
            }           
            printf("%i:%i:%i:%i | %s\n", times[3], times[2], times[1], times[0], szType);
        }           
 
 
        if(messageIndex % 2 == 0) {                             // if this is lower nibble of time component
            times[timeIndex]    = value;
        } else {                                                // ... or higher nibble
            times[timeIndex]    |=  value<<4;
        }
 
 
        if(messageIndex == 7) {
            times[kMTCHours] &= 0x1F;                               // only use lower 5 bits for hours (higher bits indicate SMPTE type)
            int smpteType = value >> 1;
            switch(smpteType) {
                case 0: numFrames = 24; szType = "24 fps"; break;
                case 1: numFrames = 25; szType = "25 fps"; break;
                case 2: numFrames = 30; szType = "30 fps (drop-frame)"; break;
                case 3: numFrames = 30; szType = "30 fps"; break;
                default: numFrames = 100; szType = " **** unknown SMPTE type ****";
            }
        }
    }
}

Feb 07 00:00

Imogen Heap "Twitdress" for Grammys 2010

For the Grammy awards 2010, musician & artist Imogen Heap wanted a dress that would display the tweets and photos coming from her fans in realtime, so she could take her fans with her onto the red carpet. Moritz Waldemeyer designed a flexible LED ribbon for her to wear, and I developed the controlling software: an iPod Touch application she could carry in her bag, collecting tweets and photos from the net and sending the information to the custom LED ribbon.

http://entertainment.timesonline.co.uk/tol/arts_and_entertainment/music/...

http://mashable.com/2010/01/31/grammys-imogen-heap-twitdress/

http://www.trendhunter.com/trends/imogen-heap-grammy-awards

 

 

http://www.msavisuals.com/imogen_heap_twitdress_for_grammys_2010

Jan 21 23:31

Laser tracking visuals for OKGo & Fendi @ Design Miami 2009

I designed and programmed visuals (projections) for the OKGo performance and Fendi installation at Design Miami 2009. OKGo were playing modified Les Paul guitars with laser beams mounted in the headstock, designed by Fendi in collaboration with Moritz Waldemeyer. Using a PC equipped with a high-speed firewire camera, I developed custom software to track the laser beams and generate visuals around the spots where they hit the wall. These visuals were also audio-reactive, responding to the live audio feed coming from the sound desk.

More images & video coverage coming soon.

P.S. For additional laser tracking goodness, check out the Graffiti Research Lab's Laser Tag.

Nov 03 08:39

Zoetrope for iPhone

Finally, after 4 weeks of waiting, my new iPhone app "Zoetrope for iPhone" has hit the App Store. iTunes link and more information can be found here.

 

(video coming soon).

Oct 30 00:24

OpenCL in openFrameworks example - 1 million particles @ 100-200fps

Recently I've been playing a lot with OpenCL, the new API / framework designed to handle cross-platform parallel computing (i.e. a simple way of running code simultaneously on all cores of your CPU, GPU or other processors). Implementations have been cropping up this year in NVidia and ATI drivers, but most famously it's included with Mac OSX 10.6 Snow Leopard.

To cut a long story short, I've been working on a simple-to-use C++ wrapper for some of the most common functions, imaginatively called ofxOpenCL, and here is a little demo of 1 million particles running at 100-200fps.

NOTE: The Vimeo compression destroys most of the particles, so I suggest downloading the quicktime directly from the vimeo page at http://www.vimeo.com/7332496


This is 1,000,000 particles being interacted on by the mouse, updated on the GPU (with springy behaviours) via an OpenCL kernel, with data written straight to a VBO and rendered - without ever coming back to the host (i.e. main memory + cpu etc.)

Frame-rate is around 100-200fps running on a macbook pro with a GF 9600GT. That's 100-200fps on a laptop! (albeit a pretty decent one), but I'm dying to try this on a GTX 285 - which has 7.5x the number of cores, 2.5x the fillrate and 3.5x the memory bandwidth - for only £250!!

The kernel for this is surprisingly simple:

__kernel void updateParticleWithoutCollision(__global Particle* pIn, __global float2* pOut, const float2 mousePos, const float2 dimensions){
	int id = get_global_id(0);
	__global Particle *p = &pIn[id];
 
	float2 diff = mousePos - pOut[id];
	float invDistSQ = 1.0f / dot(diff, diff);
	diff *= 300.0f * invDistSQ;
 
	p->vel += (dimensions*0.5 - pOut[id]) * CENTER_FORCE2 - diff* p->mass;
	pOut[id] += p->vel;
	p->vel *= DAMP2;
 
	float speed2 = dot(p->vel, p->vel);
	if(speed2<MIN_SPEED2) pOut[id] = mousePos + diff * (1 + p->mass);
}

This example is based on Rui's opencl example at http://vimeo.com/7298380.

Discussion on the matter at http://www.openframeworks.cc/forum/viewtopic.php?f=10&t=2728&p=15107#p15...

source code for ofxOpenCL and the above example at
http://code.google.com/p/ofxmsaof/downloads/list
(the SVN is likely to be more recent).

Aug 11 00:35

looping via NSThread vs NSTimer

There are many posts out there on the internet discussing the pros and cons of running an update loop on iPhone (or on desktop for that matter) via an NSTimer vs an NSThread, and many suggest they can squeeze an extra 3-5fps out of using an NSThread. So after this discussion on the openFrameworks forum, and with help from Robert Carlsen, I added this feature as an option to ofxiPhone. By default the update loop is still triggered from an NSTimer for backwards compatibility (and safety), but if you call iPhoneEnableLoopInThread() in your testApp::setup(), the loop will be initialized to run in a separate NSThread, throttled by mach_absolute_time(). It is very much in its infancy (uploading to SVN as I type) but seems to do the job (obviously you'll need to write threadsafe code if you use it).
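The rough shape of a thread-driven loop throttled to a target frame rate with mach_absolute_time() is something like this (a sketch of the idea only, not ofxiPhone's actual implementation; update() stands in for the app's update + draw):

#include <mach/mach_time.h>
#include <unistd.h>
#include <cstdint>
 
extern void update();   // hypothetical: whatever does the app's update + draw
 
void runUpdateLoop(float targetFPS) {
    mach_timebase_info_data_t timebase;
    mach_timebase_info(&timebase);                  // ratio for converting ticks to nanoseconds
    uint64_t frameNs = (uint64_t)(1000000000.0 / targetFPS);
 
    while (true) {                                  // the real thing needs an exit flag
        uint64_t t0 = mach_absolute_time();
        update();
        uint64_t elapsedNs = (mach_absolute_time() - t0) * timebase.numer / timebase.denom;
        if (elapsedNs < frameNs)                    // sleep off the remainder of the frame
            usleep((useconds_t)((frameNs - elapsedNs) / 1000));
    }
}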

Aug 07 00:05

Cross platform, open source, C++ UDP TCP bridge (for OSC, TUIO etc.)

A cross platform, C++ UDP-TCP Bridge.

Originally created to forward UDP TUIO (OSC) messages straight to TCP to be read from within Flash.

This application forwards all incoming UDP messages straight to TCP without touching the data - just a straight forward. (Since version 0.2.1 there is an option to prefix each packet with its size before sending the data, to comply with the OSC-over-TCP specification.) This enables applications that don't support UDP (e.g. Flash) to receive the data. Since OSC / TUIO messages are generally sent via UDP, this enables Flash to receive them in their raw binary form.
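For reference, OSC 1.0 over a stream transport frames each packet with a 4-byte big-endian length, which is all the size-prefix option does - something like this (a sketch; the real bridge obviously writes the result to the TCP socket):

#include <cstdint>
#include <cstring>
#include <vector>
 
// Wrap a raw UDP payload for sending over TCP: int32 big-endian size, then the data.
std::vector<unsigned char> frameForTcp(const unsigned char* packet, uint32_t len) {
    std::vector<unsigned char> out(4 + len);
    out[0] = (len >> 24) & 0xFF;        // network byte order (big-endian) length
    out[1] = (len >> 16) & 0xFF;
    out[2] = (len >> 8) & 0xFF;
    out[3] = len & 0xFF;
    std::memcpy(&out[4], packet, len);  // payload is forwarded untouched
    return out;
}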

Settings can be edited from data/settings.xml.

Source and binaries at http://code.google.com/p/udp-tcp-bridge/

Jun 10 18:30

Body Paint performance at Clicks or Mortar, March 2009

I finally got round to editing the footage from the Body Paint performances at Clicks or Mortar, March 2009.

designed & created by Mehmet Akten, http://www.memo.tv
choreography & performance by Miss Martini, http://www.myspace.com/maleficentmartini
music "Kill me" by Dave Focker, http://www.myspace.com/davefocker

Excerpts from performance at
“Clicks or Mortar”, Tyneside Cinema, March 2009
curated by Ed Carter / The Pixel Palace, http://www.thepixelpalace.org/

http://www.memo.tv/body_paint

Jun 03 15:17

XCode templates for openFrameworks on Desktop and iPhone

UPDATE:

The templates attached below were for openFrameworks & ofxiPhone pre-006. For the current version of openFrameworks new templates are required; for now they can be found at http://github.com/memo/openFrameworks/tree/master/xcode%20templates/


Inspired by Roxlu's brilliant openFrameworks wizard for code::blocks, I thought I'd have a go at creating similar XCode templates - it turned out to be super easy, and you can download them below (templates for both desktop and iPhone applications). Instructions are included in the zip, but I'm attaching them below too.

Note: the iPhone template is for the latest version of ofxiPhone from the svn because there are additional files in the current version. (Thanks to everybody for pointing this out).

 

May 21 13:44

openFrameworks London Workshop

I'm going to be giving an openframeworks workshop in London along with Marek Bereza and Joel Gethin Lewis, organized by InteractiveArchitecture.org and hosted by University College London’s MSc Adaptive Architecture & Computation Programme.

More information here.