Bilue recently turned 5 years old, and we held a party at the Museum of Contemporary Art in Sydney to celebrate. With some space left over to decorate, we decided to build something a little different for our staff, customers and family to interact with. The brief we landed on was to create something unique, exciting and modern that captured the skill and flair the Bilue team brings to each and every day of its work.

Our developers worked tirelessly against a tight deadline, piecing together what became the final interactive visualisation seen below. It was an absolute hit on the night, projected centre stage across the wall, and it succeeded in demonstrating what we at Bilue are capable of achieving.


The gist of the visualisation is that the projection renders a field of dots, each with its own mass and velocity. Initially, these dots are all at rest.
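A dot like this can be modelled as a tiny particle that accumulates forces and integrates its own motion. The sketch below is illustrative only; the class, field and method names are assumptions, not the project's actual source.

```java
// A minimal model of one dot: its own mass and velocity, starting at rest.
class Dot {
    float x, y;           // current position
    float vx = 0, vy = 0; // velocity; every dot begins at rest
    final float mass;

    Dot(float x, float y, float mass) {
        this.x = x;
        this.y = y;
        this.mass = mass;
    }

    // Apply a force for this frame (a = F / m) and integrate one step.
    void applyForce(float fx, float fy, float dt) {
        vx += fx / mass * dt;
        vy += fy / mass * dt;
        x += vx * dt;
        y += vy * dt;
    }
}
```

Because each dot carries its own mass, the same force moves light dots further than heavy ones, which is what gives the field its varied, organic motion.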

The app reads input from a Microsoft Kinect, determines where geometric blobs are, and inserts objects into the scene corresponding to those blobs, which repel the dots with a variable force. As the blobs move around, the dots try to return to their home positions.
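The per-frame force on a dot can be thought of as two competing terms: a repulsion away from the nearest blob, and a spring pulling the dot back toward its home position. The falloff model and constants below are assumptions for illustration, not the project's actual tuning.

```java
// Sketch of the force on one dot: a blob at (bx, by) repels it with an
// inverse-square falloff scaled by a variable strength, while a spring
// pulls it back toward its home position.
class DotForces {
    // Returns {fx, fy}: repulsion away from the blob plus pull toward home.
    static float[] force(float x, float y, float homeX, float homeY,
                         float bx, float by, float strength) {
        float dx = x - bx, dy = y - by;
        float distSq = Math.max(dx * dx + dy * dy, 1e-4f); // avoid divide-by-zero
        float dist = (float) Math.sqrt(distSq);
        float repel = strength / distSq;   // variable repulsive force
        final float SPRING = 0.5f;         // pull back toward home
        float fx = (dx / dist) * repel + (homeX - x) * SPRING;
        float fy = (dy / dist) * repel + (homeY - y) * SPRING;
        return new float[]{fx, fy};
    }
}
```

Near a blob the repulsion dominates and scatters the dots; once the blob moves on, the spring term takes over and the field settles back to rest.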


The code relies on Processing 3; however, it wouldn't take much to back-port it to Processing 2 if required.

The following Processing libraries are used:

All three of the above can be installed via the Processing library manager.


Thanks to Processing's brilliant cross-platform abilities, it should work out of the box on a Mac. However, after running for 30 minutes or so, the Kinect sometimes simply stops sending images. It turns out there is a bug deep in the bowels of libfreenect2 that seems to cause this. We attempted to build the Open Kinect for Processing library from source (with that patch included), but the Processing runtime kept rejecting it as an invalid .dylib.

This meant we ended up deploying on Windows 10 (using Boot Camp on our Mac). To get this working, we needed to install the libusbK driver; once it was installed, the Processing code worked with no changes.


Since the event, we've held internal development discussions and breakdowns of our process. Part of that discussion involved considering possible day-to-day applications of the project, such as showcasing it as a permanent feature around the Bilue office.

We’re also considering iterating on the project to add support for multiple Kinect devices, multiple output screens, more accurate blob detection and more intricate colouring.

If you’d like to run it or even contribute, we’ve open-sourced the project on GitHub.