My research assistant, Sam Ferguson, and I are working on an aesthetic sonification toolkit. The idea emerged while we were repeatedly building sonification projects with different interfaces and datasets: it is still useful to have a real-time-capable toolkit for semi-automated calibration, scaling and chunking of data for sonification, but one customisable in the sense that both the user and the listener can modify the listening experience. There are a few variations of a sonification toolkit out there, usually a patch or program developed in Max/MSP, SuperCollider or Csound, that treat various data in the same way, analysing it and generating an auditory graph. Most dwell on frequency as the auditory dimension for time-based data, with magnitude on the y-axis. We are very interested in controlling dimensions like timbre and spatialisation, and also in considering non-linear representation to clarify, even exaggerate, the interesting facets of the data's peaks and trends.
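To give a rough idea of what we mean by non-linear representation (this is just a hypothetical sketch in Python, not part of the toolkit; the function names, exponent and frequency range are invented for illustration), a simple power-law scaling compresses the ordinary range of a variable while exaggerating its peaks before mapping to frequency:

    # Hypothetical sketch: non-linear (power-law) scaling of a data series
    # before mapping it to pitch, so that peaks are exaggerated relative
    # to the ordinary range of the data.

    def scale_nonlinear(values, exponent=2.0):
        """Normalise values to 0..1, then apply a power-law curve."""
        lo, hi = min(values), max(values)
        span = (hi - lo) or 1.0
        return [((v - lo) / span) ** exponent for v in values]

    def to_frequency(scaled, f_min=220.0, f_max=880.0):
        """Map the scaled 0..1 values onto a frequency range in Hz."""
        return [f_min + s * (f_max - f_min) for s in scaled]

    data = [3.1, 3.2, 3.0, 3.3, 9.8, 3.2]   # one outlying peak
    print(to_frequency(scale_nonlinear(data)))

With the exponent above, the peak lands near the top of the pitch range while the remaining values stay clustered near the bottom, which is the kind of exaggeration we have in mind.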
If you are interested in trying our toolkit, it is now available open source and we welcome feedback for improvements. You need Max/MSP and the IRCAM FTM objects. http://www.kirstybeilharz.com.au/aeson.html or http://code.google.com/p/aesontoolkit/downloads/list
We (Kirsty Beilharz and Sam Ferguson) are investigating the aesthetics of sonification (the representation of data through sound), since many of the toolkits currently available are not designed with aesthetics or musical sound in mind. The eventual aim is to have sonifications that are as engaging as information visualizations can be.
The aim is to produce a toolkit that maintains the massive flexibility that Max/MSP gives us, while simplifying the process of organising and mapping data to sound attributes. Currently, the toolkit consists of a large number of abstractions.
Included are:
** a set of dataset-management abstractions that load a data file and split it into variables (passed as dict structures), plus abstractions for transforming these variables and methods for stepping through the data;
** mapping abstractions that accept the dict structures at their rightmost inlet, pass them out of their rightmost outlet, and then autoconfigure themselves;
** synthesis and sampling abstractions built to accept the outputs from the above;
** 'manipulation' objects that map data to a DSP process (e.g. a lowpass filter) and 'manipulate' audio streams.
The idea is that, by building a chain from this handful of objects to transform data into sound, the design can be iterated quickly, and better, more musical outcomes can result.
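To make that chain concrete, here is a rough conceptual analogue in Python (not the toolkit itself, which is a set of Max/MSP abstractions; the class names, the data file and the 'temperature' variable are invented for illustration): a dataset object loads a file and splits it into variables, a mapper configures its input range from the data and maps each row to a frequency, and a loop stands in for stepping through the data towards a synthesis object.

    # Conceptual analogue of the data-to-sound chain (not the Max/MSP
    # toolkit itself; names here are invented for illustration).

    class Dataset:
        """Loads a data file and splits it into named variables."""
        def __init__(self, path):
            self.rows = []
            with open(path) as f:
                header = f.readline().split()
                for line in f:
                    values = [float(x) for x in line.split()]
                    self.rows.append(dict(zip(header, values)))

    class Mapper:
        """Autoconfigures its input range from the data, then rescales."""
        def __init__(self, variable, out_min, out_max):
            self.variable, self.out_min, self.out_max = variable, out_min, out_max
            self.in_min = self.in_max = None
        def configure(self, rows):
            vals = [r[self.variable] for r in rows]
            self.in_min, self.in_max = min(vals), max(vals)
        def map(self, row):
            span = (self.in_max - self.in_min) or 1.0
            norm = (row[self.variable] - self.in_min) / span
            return self.out_min + norm * (self.out_max - self.out_min)

    # Chain: dataset -> mapper -> (stand-in for a synthesis abstraction)
    dataset = Dataset("example.txt")               # hypothetical data file
    pitch = Mapper("temperature", 220.0, 880.0)    # hypothetical variable
    pitch.configure(dataset.rows)
    for row in dataset.rows:                       # stepping through the data
        print("play sine at", pitch.map(row), "Hz")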
I haven't made a huge number of help patches yet, but many of the crucial ones are there. There are also a few sample datasets, and some examples of those datasets being sonified. Currently I'm using small time-series and multivariate datasets, but I would like to scale up to larger 3D datasets in the future. Generally, to use the system you connect the various objects and then load the data file, which propagates the data to each of the objects. Then you press start on the Dataset.Control object, and you're sonifying.
I'm hoping that if you download the toolkit and like it (or don't), you might provide some feedback on how to improve it. It's still built in Max 4.6, but should theoretically work OK in Max 5.