Time of the Flies

22.07.2022

Karaoke Bar Version 2

Time of the Flies is a further development of Drosophila Karaoke Bar.

It is shown at the Science Gallery Melbourne as part of the SWARM exhibition.

During the pandemic I experienced our biological condition in a new way: pandemics teach us about spheres, proximities, and our distancing behavior. We ask ourselves whether we have paid enough attention to the safety of our sphere of life so far, and whom we let come close. I called this period the “time of the flies”, mainly because I was living alone in my one-room apartment in Berlin with fruit flies – cultures of fruit flies in my home.

As soon as you work with a biological organism (animals or cells), something fundamental changes: your day is determined by the needs of your culture, and the critters you let into your habitat dominate your daily routines. You have to ensure that they are comfortable and reproduce, even if your preferred living conditions would be different.

Making flies my housemates was an idea that came to me because I was looking for a living being to serve as an object of observation. It should exhibit interesting collective behavior and have significance both in science and in our civilized way of life.

That’s how I ended up with flies. Neuroscientists have chosen flies because their brains are simpler than ours. Yet if one looks at the behavior of the flies, one can see that their activities, e.g. during their ‘party hour’, are already very complex.

One sees fruit flies flying, sitting, chasing and sniffing each other, visiting friends, courting, mating and fighting (as reported by Ralph Greenspan[1]).

Second video screen showing group behavior of fruit flies

Fruit flies sing. Beyond the well-known buzzing of their flight, they also sing with their wings to communicate with each other. Of these songs, the sine song, which resembles the buzzing of flying flies, and the pulse song are scientifically described, the latter being semantically encoded (Birgit Brüggemeier[2]). But I noticed further sonic patterns besides these: patterns that sound rather aggressive, or like strange signals made to be heard.

To engage in an exchange with the flies, I draw on these behavioral patterns. With this work I offer you a possibility to talk directly to fruit flies through singing. When we humans sing, our songs often develop along already known melodies. But here, we don’t know which song to sing. We first have to listen.

Schematic set-up of the installation
Installation in my flat in Berlin, experiments with a different type of petri dish and inverse microphones.

To facilitate the contact, we used software that adapts the sound of human song to the song of the flies. The software is based on audio mosaicing provided by the Fraunhofer Institute IIS[3]. Audio mosaicing is more advanced than the vocoder software we used for Drosophila Karaoke Bar. A vocoder adapts only the envelope, i.e. the characteristic amplitude curve that shapes the timbre of a sound, to a target sound. Audio mosaicing also adapts the frequency spectrum and its activation patterns to a pre-selected template.
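
To give an idea of how such a transformation can work, here is a minimal sketch of NMF-based audio mosaicing in Python. It is not the Fraunhofer NMF Toolbox implementation, just an illustration of the principle: a fixed dictionary of fly-song spectra is activated to approximate the human voice, so the voice is rebuilt from fly sounds. The function name and parameter values are my own assumptions.

```python
# Sketch of NMF-based audio mosaicing (illustrative, not the NMF Toolbox):
# a dictionary W of fly-song spectra stays fixed while activations H are
# learned for the human voice, so the voice is rebuilt from fly timbres.
import numpy as np
from scipy.signal import stft, istft

def mosaic(voice, fly, sr=44100, n_templates=64, n_iter=100, eps=1e-10):
    _, _, V = stft(voice, sr)              # voice spectrogram (complex)
    _, _, F = stft(fly, sr)                # fly-song spectrogram (complex)
    v_mag, v_phase = np.abs(V), np.angle(V)

    # Fixed dictionary: random columns of the fly spectrogram as templates
    # (assumes the fly recording has at least n_templates STFT frames).
    idx = np.random.choice(F.shape[1], n_templates, replace=False)
    W = np.abs(F[:, idx]) + eps

    # Learn activations H with multiplicative updates (Euclidean NMF),
    # keeping W fixed so only fly timbres can be activated.
    H = np.random.rand(n_templates, v_mag.shape[1])
    for _ in range(n_iter):
        H *= (W.T @ v_mag) / (W.T @ W @ H + eps)

    # Resynthesize the mosaic, borrowing the voice's phase as a shortcut.
    _, out = istft((W @ H) * np.exp(1j * v_phase), sr)
    return out
```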

How did audio mosaicing change the interaction with the flies? While with the vocoder I still had the feeling of speaking through an instrument, audio mosaicing immediately induced me to sing with the usual pitches and timbres of fly song. This fundamentally changed the experience with the flies. The vocoder made it possible to stimulate the flies immediately, and it soon became apparent that it is a good tool to arouse them. After the initial enthusiasm of being accepted into the world of flies, however, the question quickly arose: what happens next? In the end, you can only disappoint the flies once they realize that you are not a suitable partner. And this is where audio mosaicing became interesting: it allowed me to enter the sound sphere of the flies themselves.

Again and again I tried to develop an understanding of the flies by singing with the audio mosaicing. I had to accept that the flies are lazy and quirky. If they don’t want something, they simply don’t care. When they are having fun, they don’t bother with us humans. They don’t need us – except maybe our food. That was it. When I was in the sound spectrum of the fly, I fell into complaining, moaning, groaning, and every utterance was physical, immediate, not very sophisticated.

I often wished that my voice were trained, a singer’s voice, so that I could demand more artistry from it. But – and this seems to me the bottom line – in these times the obvious is the essential.

But what does all this work mean: sharing one’s habitat with a so-called nuisance? And singing laments – songs of suffering, of perseverance, of waiting – with this nuisance? Sartre (“Les mouches”) would give an unambiguous answer: overcome the guilt. Step out of the shadows of the past…

I, however, simply wanted to create an open situation in which we observe and interact without bias, before ‘knowing’ or making decisions about our counterpart. Any artfulness would be out of place.

And I focused on fruit flies because they are part of what we eliminate from daily life, what we eradicate with pest control, what does not belong to a culture of modernity. The confrontation with our own shortcomings seems to me essential for overcoming them.

Interaction Screen of the installation
set-up instruction

The work has been produced with the support of a grant from the Fraunhofer Artists in Lab Program.

But it would not have been possible without the technological design, programming, support and endurance of Felix Bonowski.


[1] Hans Dierick, Ralph Greenspan; Molecular analysis of flies selected for aggressive behavior. Nat Genet. 2006 Sep;38(9):1023-31. https://doi.org/10.1038/ng1864

[2] Birgit Brüggemeier, Mason A. Porter, Jim O. Vigoreaux, Stephen F. Goodwin; Female Drosophila melanogaster respond to song-amplitude modulations. Biol Open 15 June 2018; 7 (6): bio032003. doi: https://doi.org/10.1242/bio.032003

[3] Patricio López-Serrano, Christian Dittmar, Yiğitcan Özer, Meinard Müller; NMF Toolbox: Music Processing Applications of Nonnegative Matrix Factorization. In Proceedings of the International Conference on Digital Audio Effects (DAFx), 2019.

 


Kontinuum

02.01.2020

Art installation by Ursula Damm and Felix Bonowski, 2021

Curator: Yvonne Volkart

Commissioned for the Flux building by Eawag: Swiss Federal Institute of Aquatic Science and Technology

Technique: Two-channel projection; one channel based on live camera footage and neural network learning rules, the other a simulation based on Perlin noise, a Navier-Stokes solver, and reaction-diffusion kinetics parameterized with live measurements of oxygen content, temperature, and turbidity.

Kontinuum on the EAWAG website

Felix Bonowski controlling the “bachcam” screen, EAWAG Flux Building, Zürich
View of the projection – simulation and generative video

Kontinuum is a generative two-channel projection based on live data from the Chriesbach, a rivulet flowing alongside the Institute of Water Research. The two projections each represent a certain mode of “reality” of the Chriesbach and its flux throughout the year. Both translate data on seasonal variations, color patterns and physical principles of the stream into sensual images that evoke Impressionist and Japanese painting. By bringing the outside into the inside, the water into the Flux building, the object of observation to the site of its investigation, the installation reflects the continuum of the stream and the function of the house in an aesthetic way.

The colored projection collects images of the Chriesbach stream and its inhabitants. Real-time images from three cameras are passed through a graphics shader based on classical neural network learning rules that “remember” colors in areas of high activity. The resulting video is a collage combining aspects of the stream’s visual appearance from different times and viewing angles. With their daily changes, the images serve as a kind of aesthetic weather report.
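
As a rough illustration of such a learning rule – not the installation’s actual shader – the sketch below keeps a per-pixel color memory whose learning rate is gated by local motion, so colors are “remembered” where activity is high. The function name and the rate constant are assumptions.

```python
# Per-pixel leaky color memory gated by activity (frame difference):
# moving regions overwrite the memory quickly, still regions preserve it,
# so traces of movement accumulate in the image over time.
import numpy as np

def update_memory(memory, frame, prev_frame, rate=0.05):
    # Activity in [0, 1]: how much each pixel changed since the last frame.
    activity = np.abs(frame - prev_frame).mean(axis=-1, keepdims=True)
    # Blend toward the new color proportionally to local activity.
    alpha = rate * activity
    return (1.0 - alpha) * memory + alpha * frame
```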

The black-and-white projection is a live simulation of a fluid meandering through a rock-strewn valley. Based on an ecosystem of nutrients, primary producers, and grazing microorganisms of the stream, it shows digitally “how the world would look if nature followed these rules”. The formulas that govern the shape of the landscape, the dynamics of flow, and the evolution of life in the simulation are parameterized with values derived from real-time measurements of physical water properties. The measurements are performed by a station operated by the research institute just a few meters from where the cameras look onto the stream. Correspondences between measurements and model parameters are chosen so that seasonal changes (in temperature), daily rhythms (in oxygen saturation from photosynthesis) and occasional events (turbidity caused by thunderstorms and construction work) leave their traces in the graphics. Transforming from a valley with a few large boulders to a (virtual) riverbed with many small pebbles, from one emergent biological pattern into another, from a slowly meandering flow into a violent gusher, the simulation reveals itself as a being in permanent flux.
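
To make the idea of parameterization concrete, here is a minimal reaction-diffusion sketch (a Gray-Scott model) whose feed and kill rates are driven by sensor values. The specific mappings from temperature and oxygen to model parameters are invented for illustration; the installation’s actual correspondences are its own.

```python
# Gray-Scott reaction-diffusion stepped with sensor-driven parameters.
# The temperature/oxygen mappings below are hypothetical placeholders.
import numpy as np

def laplacian(Z):
    # Discrete 5-point Laplacian on a toroidal grid.
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4.0 * Z)

def step(U, V, temperature, oxygen, dt=1.0, Du=0.16, Dv=0.08):
    feed = 0.030 + 0.002 * (temperature - 10.0)  # hypothetical mapping
    kill = 0.060 + 0.001 * (oxygen - 8.0)        # hypothetical mapping
    UVV = U * V * V
    U += dt * (Du * laplacian(U) - UVV + feed * (1.0 - U))
    V += dt * (Dv * laplacian(V) + UVV - (feed + kill) * V)
    return U, V
```

As the measured values drift with the seasons and the weather, the pattern regime of such a simulation drifts with them.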

At the right border of each projection, the image logic of the other projection intervenes, so that the color data of the live stream and the patterns of the black-and-white simulation intersect: high-contrast movements in the colored projection (e.g. reflections of light or floating leaves) become lines, scratches and holes in the black-and-white one. They appear as forces which wipe out organic life and destroy the image. Thus it becomes clear that no image and no “reality” stands for itself; rather, each can be questioned, disrupted, or interpreted in manifold ways.

Video documentation of the installation
Real time play-out of the so-called ‘bachcam’
Cameras observing the Chriesbach stream
Positioning of the cameras
Real time data from Chriesbach for the simulation screen
Felix Bonowski installing the cameras
Video on the Eawag website
Schematic drawing of data flow
This is the concept video with samples of different interesting moments

Membrane

28.10.2019

Membrane at Entangled Realities – Leben mit künstlicher Intelligenz at HeK (Haus der elektronischen Künste Basel), 08.05.–11.08.2019. Photo: Sabine Himmelsbach
Exhibition Entangled Realities. Photo: Franz Wamhof
Exhibition Entangled Realities. Photo: Franz Wamhof
View into the restaurant area of the museum
Membrane with interface at Kunstverein Tiergarten Galerie Nord. Operator: Sandra Anhalt

Membrane is an art installation produced as the main work of an exhibition of the same name at the Kunstverein Tiergarten in Berlin in early 2019. It builds on a series of generative video installations with real-time video input. Membrane allows the viewer to interact directly with the generation of the image by a neural network, here the so-called TGAN algorithm. An interface lets the visitor experience the ‘imagination’ of the computer, guided by curiosity and personal preferences.

The images of Membrane are derived from a static video camera observing a street scene in Berlin. A second camera is positioned in the exhibition space and can be moved around at will. Two screens show both scenes in real time.

In my earlier artistic experiments in this context, we considered each pixel of a video data stream as an operational unit. A pixel learns from color fragments during the running time of the program and delivers a color which can be considered the sum of all colors seen during the running time of the camera. This simple method of memory creates something fundamentally new: a recording of patterns of movement at a certain location.
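
Such a pixel-as-unit memory can be sketched in a few lines: each pixel keeps an incremental average of every color it has seen, so persistent movement patterns leave visible traces. A minimal sketch, assuming frames arrive as RGB arrays; the class name is mine.

```python
# Each pixel accumulates the running mean of all colors it has observed,
# i.e. the "sum of all colours during the running time of the camera".
import numpy as np

class PixelMemory:
    def __init__(self, height, width):
        self.mean = np.zeros((height, width, 3))
        self.count = 0

    def update(self, frame):
        self.count += 1
        # Incremental mean: mean += (x - mean) / n, per pixel and channel.
        self.mean += (frame - self.mean) / self.count
        return self.mean
```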

Diagrammatic drawing of data flow

On a technical level, Membrane controls not only pixels or clear-cut details of an image, but image ‘features’ which are learnt, remembered and reassembled. With regard to the example of color: we choose the features, but their characteristics are delegated to an algorithm. TGANs (Temporal Generative Adversarial Nets) implement ‘unsupervised learning’ through the opposing feedback of two subnetworks: a generator produces short sequences of images, and a discriminator evaluates the artificially produced footage. The algorithm has been specifically designed to learn representations of uncategorised video data and, with their help, to produce new image sequences.

We extend the TGAN algorithm by adding a wavelet analysis, which allows us to interact with image features rather than only pixels from the start. Thus, our algorithm allows us to ‘invent’ images in a more radical manner than classical machine learning would allow.
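
One plausible reading of this wavelet step – the exact transform used in Membrane is not specified here – is a multi-scale decomposition of each frame into a coarse approximation plus directional detail bands, which can be edited before inversion. A sketch with PyWavelets; the wavelet choice and level count are assumptions.

```python
# Decompose a grayscale frame into multi-scale wavelet features and
# reconstruct it after the feature bands have been manipulated.
import pywt

def frame_features(gray_frame, wavelet="haar", levels=3):
    # coeffs[0] is the coarse approximation; coeffs[1:] hold horizontal,
    # vertical and diagonal detail bands for each scale.
    return pywt.wavedec2(gray_frame, wavelet, level=levels)

def reconstruct(coeffs, wavelet="haar"):
    # Scaling, swapping or zeroing bands in coeffs before this call is
    # where 'inventing' images at the feature level comes in.
    return pywt.waverec2(coeffs, wavelet)
```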

In practical terms, the algorithm speculates on the basis of its learning and develops its own, self-organised temporality. However, this does not happen without an element of control: feature classes from a selected dataset of videos are chosen as target values. In our case, the dataset consists of street views of other cities around the world, filmed while travelling.

The concept behind this strategy is not to adapt our visual experience of Berlin to a global urban aesthetic, but rather to fathom its specificity and to invent by association. These associations can be localised, varied and manipulated within the reference dataset. Furthermore, our modified TGAN algorithm generates numerous possibilities to perform dynamic learning on both short and long timescales, ultimately controlled by the user/visitor. The installation itself allows the manipulation of video footage from an unchanged street view to purely abstract images based on the found features of the footage. The artwork wants to answer the question of how we want to alter realistic depictions. What are the distortions of ‘reality’ we are drawn to? Which fictions lie behind these ‘aberrations’? Which aspects of the seen do we neglect? Where do we go with such shifts in image content, and what will be the perceived experience at the centre of artistic expression?

From an artistic point of view, the question arises: how can something original and new be created with algorithms? This is the question behind the software design of Membrane. Unlike other AI artworks, we don’t want to identify something specific within the video footage; rather, we are interested in how people perceive the scenes. That is why our machines look at smaller, formal image elements and features whose intrinsic values we want to reveal and strengthen. We want to expose the visitors to intentionally vague features: edges, lines, colours, geometric primitives, movement. Here, instead of imitating a human way of seeing and understanding, we reveal the machine’s way of capturing, interpreting and manipulating visual input. Interestingly, the resulting images resemble the pictorial developments of classical modernism (progressive abstraction on the basis of formal aspects) and repeat artistic styles like Pointillism, Cubism and Tachisme in a uniquely unintentional way. These styles fragmented the perceived, as part of the pictorial transformation, into individual sensory impressions. Motifs become features of previously processed items and successively lose their relation to reality. At the same time, we question whether these fragmentations of cognition proceed in an arbitrary way, or whether other concepts of abstraction and imagery lie ahead of us.

From a cultural perspective, there are two questions remaining:
– How can one take decisions within those aesthetic areas of action (parameter spaces)?
– Can the shift of perspective from analysis to fiction help us assess our analytical procedures in a different way – understanding them as normative examples of our societal fictions, serving predominantly as a self-reinforcement of present structures?

Thus, unbiased artistic navigation within the excess/surplus of normative options of action might become a guarantor of novelty and the unseen.

Programming: Peter Serocka, Leipzig

Sound: Teresa Carrasco, Bern

The work has been produced with the kind support of Kreativfonds Bauhaus Universität Weimar and HEK Basel
https://vimeo.com/349417754
my talk at the conference Re-Imagining AI
https://vimeo.com/showcase/6375722/video/366513412

Article on Membrane

Art and Science Talk at LABoral, Gijón; our talk starts at 1:15:00.

MEMBRANE at Base, Milano Digital Week
 
Teresa and Peter working on the final set-up in Berlin

Fly song

03.10.2019

Fly songs from Ursula Damm and Johann Nigel

We use Max/MSP to identify the pitch of natural fly songs and modulate them with their own tones. Naturally occurring harmonics are shown in a real-time visualization. First shown at the MO Museum in Vilnius, May 2019.

Programming: Johann Nigel
Concept: Ursula Damm

Drosophila Karaoke Bar

01.12.2018

Installation setting at ars electronica 2019
Documentation of the inauguration at the MO Museum Vilnius 2019
A perfect feedback when ER plays on a Bach B50 bass trombone!

Project Description in German
With our Drosophila Karaoke Bar, we want to look at one of the most widely used model organisms in medicine and brain research: the fruit fly, Drosophila. While humans in their everyday life keep away from flies, science uses these creatures for experiments. Drosophila are cheap, they reproduce quickly, they have enough genetic resemblance to humans to study genetic diseases, and their brain is small enough for us to study.

One striking and little-known behaviour of flies is their mating song. Fly males sing to females by vibrating their wings in rhythmical patterns. With our karaoke bar we want to offer a possibility to sing with flies, to experience their nature and culture in a shared sensual experience.

Look into the sound isolation box

Can our karaoke bar bring items from our high-tech culture back to our environment? Does it allow the audience to immerse themselves into science? Our attempt to invite people to sing with flies offers a performance to experience a holistic approach to scientific investigations. The setup discusses ecological questions: to which degree do we need to separate the habitats of humans and flies to feel comfortable? Which measures are necessary to make their faint songs audible to humans? How does a laboratory environment affect the behaviour of flies? Under which conditions are we able to enjoy their presence?

The installation invites visitors to establish a direct exchange with fruit flies through a technical interface. Software transforms human speech into signals that can be perceived by flies, allowing auditory feedback between people and animals. For blending human and fly songs we use a special signal-processing vocoder provided by Bernd Edler of the Fraunhofer Institute for Integrated Circuits.

Visitors are requested to talk and sing with the flies. Birgit Brüggemeier, neuroscientist and fly researcher, explains in a video the meaning of the separate constituents of fly song. She informs listeners about the syntax and semantics of Drosophila songs, giving visitors a better understanding of fly communication. The video encourages the visitors to sing and speak to flies.

the interior of the sound isolation box

A 2D sound visualization enhances the auditory perception of the sphere of the flies with visual monitoring of fly songs on a screen: the location, amplitude and pattern of the sound sources help the performer identify their influence on the flies’ behaviour.

A large pile of sand covers the habitat of the flies, its weight isolating their faint buzzing from the noise of the humans. The massive sand pile represents the sensual and semantic gap between a fly and a human.

Inside the box and under the sand pile
Karaoke Bar @ MO Museum Vilnius

In a future version, another set of headphones will offer an ‘anthropocentric’ viewpoint on the flies: we track the frequencies occurring in our fly community, consisting of courtship songs and flight sounds (differing by about one octave). Specially designed software enhances the real-time sounds by modulating them with previously found chords. This software raises the question: are there more hidden patterns of communication within fly songs than science has yet described?

Fly song – a rendering of a generative software

With the Karaoke Bar we learn to become silent and careful, so as to hear the voice of Drosophila. Our setting offers a possibility to communicate with Drosophila at eye (ear) level. By concentrating on Drosophila’s own way of expression (what kind of signals are they sending to their surroundings? How are they communicating? How does it sound when they approach their comrades? What do they want to negotiate? What are our common windows of perception?), we want to circumvent an anthropocentric world view. The installation not only translates the signals of Drosophila (sonifying and visualizing them, as has been done before), but allows a shared practice in a direct feedback situation, offering a novel sensual experience.

Ursula Damm (Artist, Project Lead)

Birgit Brüggemeier (Neuroscientist, former Fly Researcher)

Felix Bonowski (Programming, Interface)

Insect songs

19.02.2018

Insect Songs on Vimeo.

An excerpt from a performance with Christina Meissner and Teresa Carrasco

When I left the countryside and moved to a city, I began to miss the sound of the fields and the forest. And when I later returned to the small village in the middle of vineyards, called Diedesfeld, something was gone. It took me a while to figure out that I missed the sounds of insects – and that this sound was like a confirmation of a robust ecological balance. Only years later did science prove that insecticides had diminished insect populations by up to 80% of their former presence.

Christina Meissner, Teresa Carrasco and I gathered in Weimar to experience the singing of chironomid midges (Chironomus riparius, commonly used in ecotoxicology) and their ability to react to our music. In a direct feedback situation between humans and animals, technology should be used only to adapt our senses and make it easier to understand the message of the other. In a first performance we noticed that Christina Meissner, with her cello, was able to stimulate lazy midges (stimulation – sound example from our first session) to start swarming intensively (swarming in dialogue). We were thrilled to notice how easily and obviously humans and midges interact. In our second concert, it was no longer necessary to force the midges into swarming; instead we developed a kind of question and answer, listening and responding to the phrases of the midges. Our first performance ready for publishing (in full length) is here.

Our concerts can be seen as a call for the subtle atmosphere that allows insects to stay in our neighbourhood – and for our readiness to listen to them.

Frequency spectrum of our first performance, showing an initial stage where Christina Meissner stimulates the midges with special sounds, and a later phase, where the midges are swarming intensively
inside the “box” for culturing Chironomus riparius

If you appreciate midges, you might also look at the following video, showing synchronized swarms at Taubensuhl, Palatinate, Germany. They can be observed only a few days per year – I was very happy to record them. When I came back with my sound equipment, they were gone.

chironomidae@taubensuhl from resoutionable on Vimeo.

U-Bahnhof Schadowstrasse

29.01.2016

‘Turnstile’ – an interactive installation

Turnstile, generative installation by Ursula Damm (Photo: Thomas Mayer)

On the front wall of the Schadowstrasse underground station, an LED wall displays a generative video. In front of the wall, a light shaft extends to the surface of the plaza, where a video camera is set up. The camera continuously films passing pedestrians on the plaza and streams the feed to a specially developed generative software application (coded by Felix Bonowski), which derives proposed geometries for structures from the movement patterns of the pedestrians. These interpretations of the real-time video generate new geometries for the location and propose axes and parcels.

Turnstile, Ursula Damm, Schadowstrasse Düsseldorf 2016 (Photo by Thomas Mayer)

Two elevators, to the left and right of the large video image, lead from the plaza to the rail platform.

Pattern drawings on aerial photos of Düsseldorf, Schadowstrasse

Turnstile (Drehkreuz) from resoutionable on Vimeo.
On the platform, the geometric structures can be heard as a sound interpretation (by Yunchul Kim). At the centre of the artistic intervention is the video image and its artistic concept. The concept is also reflected in the design of the entrance areas: at 21 locations, plates inserted into the blue glass of the underground station display geometries over districts of Düsseldorf.

Photo: Ursula Damm and Felix Bonowski, Credits: netzwerkarchitekten, Darmstadt, © 2015 by Jörg Hempel; www.joerg-hempel.com
From the stairs to the screen. Photo: Achim Kukulies
Pattern drawing on aerial photo of Düsseldorf, photo by Achim Kukulies

In the east concourse is the aerial image of the city of Düsseldorf that was analysed according to the geometric concept.

Geometric pattern generation on the spatial structure of Düsseldorf

As excerpts from this aerial picture, 16 locations in Düsseldorf were interpreted at the level of a local aerial image. These urban areas were described with regular polygons as energy centres which fitted themselves together through the development of the city architecture (see the text on the concept of the generated patterns).

Schadowstrasse, pedestrian area, location of the camera. Photo: Achim Kukulies

The fine structure of the patterns juxtaposes the sensibility of nature and human formative gestures against the massive edifice, calling to mind a mode of formation that creates sweeping interconnections through the symbiotic organisation of a multitude of individual elements. In doing so, this formative process completes the social principle through which individuals experience their effect on the whole.

The pattern drawings are generated in slow steps: First a line drawing is created over the image of the city. As this progresses, important motion axes of traffic and pedestrians are emphasised. The areas these axes enclose become polygons. At this point, the angles of the lines and axes are examined in the search for whole-number fractions of regular polygons.

The smallest polygon integrating all of the symmetries at the location (for instance, five-angled and four-angled fragments would be assembled into a 20-sided polygon) is then used to describe an intersection.
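
In code, the “smallest polygon integrating all of the symmetries” is simply the least common multiple of the detected symmetry counts – five- and four-angled fragments yield a 20-gon, exactly as in the example above. A minimal sketch (the function name is mine):

```python
# Smallest regular polygon that contains all detected symmetries:
# the least common multiple of their rotation counts.
from math import lcm

def integrating_polygon(symmetries):
    # symmetries: rotation counts found in the axis angles of a location,
    # e.g. [5, 4] for pentagonal and square fragments.
    n = 1
    for s in symmetries:
        n = lcm(n, s)
    return n

print(integrating_polygon([5, 4]))  # -> 20
```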

A subsequent step is the search for connections (network) between large neighbouring polygons.

Work with the aerial images revealed that the city centre has very small polygons, while outer areas have a significantly more expansive structure. Thus, density is indicated by the presence of small polygons and complex symmetries. Often, the transition from non-rectangles to rectangles can indicate historical breaks in the urban landscape. In this way, the interpretations represent a study of the settlement and planning history of the city.

Drawing Nr. 24, Düsseldorf Flehe

The sound installation

The generative video installation interprets traces of movement created by geometric “agents”. The activity of these agents is translated into sounds which track the visual artefacts. As such, the sounds form the noise that the virtual artefacts generate in their world, and thus represent an extended artistic “level of reality” of the installation.

PROGRAM

  • Select a location (origin)
  • Determine the movement axes of people and traffic
  • Look to see if these axes are at angles to one another which, when mirrored and rotated, can form a polygon whose sides all extend outward equally
  • Draw this polygon to approximate the natural geometry of the location
  • Look to see if, starting from this, the intrinsic geometries of the location can form a surface structure (tessellation) that periodically repeats the original geometries
  • Determine whether and how, in the aerial image of the location, the areas fit together in the revealed geometry of the place
  • Enhance existing structures by developing their geometries
  • Connect existing structures into the logic of the original geometry

Concept: Ursula Damm
Programming: Felix Bonowski
Sound: Yunchul Kim

Article on the production of the artwork in German

Turnstile article english translation

Text by Georg Trogemann about his visit to the opening

Official website of the City of Düsseldorf

English text on the production of the artwork

 

Urban development kit

29.11.2014

The Urban Development Kit is a collection of tools to ameliorate the atmosphere in contemporary cities. Over time, the website aims to become a resource of ideas, concepts and tools for citizen-driven urban design.

Visualization for the workshop in Osaka

One of our kits supports watchful citizens and plants in their competition with pavement, concrete and asphalt. It helps plants interact with modern cities and prevail against soil sealing. A website and an interactive map enable people to collect photos of “asphalt flowers” in Helsinki and other cities and to monitor the progress of the “cultivation”. With respect to urban environmental research, the Urban Development Kit is a statement about the importance of counteracting the sealing of surfaces in the city. Accordingly, the exhibition shows designs for urban surfaces based on the geometry of the plants themselves.

Patterns on Polygonum aviculare
Pattern drawing on Polygonum aviculare

The work was developed for the Art&HENVI project, organized by the Finnish Bioart Society.

counteracting soil sealing

In 2014, a new version of the Urban Development Kit was presented at a creative cloud workshop organized by ars electronica (see the photos from the workshop).

Please look at the presentation of the workshop.

cosmical sperm osaka

The outline of paradise (installation)

06.09.2014

Installation view at the Hybris Exhibition at ACC Weimar
Detail – culture items for breeding chironomid midges

Non-biting midges (chironomids) are bred in an aquarium, where midge eggs and larvae swim in sand and water. They are ventilated and supplied with abundant artificial daylight. The choice of midge (Chironomus riparius, a laboratory strain) allows for captive breeding.

Aquarium, iPad with video of military airshow planes flying in loops
Looping airplanes behind swarming midges

Pink elephants for Hollywood
  • The scientific paper

For a performance I invited Christina Meissner to improvise on the theme of the wingbeat sound. We observed that the flight tracks of the midges were visibly influenced, especially by dark plucking sounds.

  • "the outline of paradise" - view into the soundbox
The work at Hybris Exhibition Halle 14 Leipzig, Photo: Claus Bach
The work at Hybris Exhibition Halle 14 Leipzig, Photo: Claus Bach


This setting allows us to find out how swarms develop and how they can be influenced. The installation follows “sustainable luminosity” and explores the possibilities of training midges and passing this behaviour on to the next generations.

Work at Halle14, Leipzig

Concept: Ursula Damm
Artistic + scientific consultation: Dr. Klaus Fritze
Cello, Sounds: Christina Meissner
Programming: Sebastian Stang

Chromatographic Ballads

16.12.2013

Test situation in my studio with brain device, operator Lisa and Martin Schneider

The installation received an honorary mention at VIDA 15.0.

The Artwork

Martin explaining the Neurovision software

Chromatographic Ballads is an artistic installation which allows a visitor to direct a software framework with an EEG device. In an exhibition environment with semi-transparent video screens, a visitor sits in an armchair and learns to navigate – unconsciously, with his/her brain waves – the parameter space of our software, Neurovision.

Neurovision interacts with live video footage of the exhibition location and its surroundings. By navigating with his/her own brain waves, the visitor can define the degree of abstraction of a generative (machine learning) algorithm performed on the footage of several nearby video cameras.

Visualization of the ideal set-up
Ideal set-up for the installation
Our operator Lisa sitting in front of the screen with an EEG device

The installation refers back to painting techniques of the late 19th and early 20th century, when painting became more an analysis of the perception of a setting than a mere representation of it. Impressionism and Cubism fragmented the objects of observation, while the mode of representation was given by the nature of the human sensory system.

The installation “chromatographic orchestra” does not apply arbitrary algorithms to the live footage: we developed a software framework – Neurovision – which mimics the visual system of the human brain. Thus we question whether our algorithms meet the well-being of the spectator by anticipating the processing steps of our brain.

Artistic Motivation

How much complexity can our senses endure – or rather, how could we make endurable what we see and hear? Many communication tools have been developed to adjust human capabilities to the requirements of the ever more complex city.

Our installation poses the opposite question: How can information emerging from the city be adjusted to the capabilities of the human brain, so processing them is a pleasure to the eye and the mind?

At the core of our installation is the NeuroVision Sandbox, a custom-made framework for generative video processing in the browser, based on WebGL shaders.

Inside this Sandbox we developed several sketches, culminating in the “Chromatographic Neural Network”, where both optical flow and color information of the scene are processed, inspired by information processing in the human visual system.

We critically assess the effect of our installation on the human sensory system:

  • Does it enhance our perception of the city in a meaningful way?
  • Can it – and if so, how – affect the semantic level of visual experience?
  • Will it create a symbiotic feedback loop with the visitor’s personal way to interpret a scene?
  • Will it enable alternate states of consciousness? Could it even allow visitors to experience the site in a sub-conscious state of “computer augmented clairvoyance”?

Installation

In a location close to the site, a single visitor directs a video presentation on a large screen with a setup we like to call “the Neural Chromatographic Orchestra” (NCO).
Our installation uses an EEG device (Emotiv NeuroHeadset) that lets visitors interact with a custom neural network. The setup allows visitors to navigate through various levels of abstraction by altering the parameters of the artificial neural net.

With the NCO device, a visitor can select and explore real-time views provided by three cameras located in public space, with different perspectives on the passers-by (bird’s-eye view and close-ups).

The installation is based on the NeuroVision Sandbox used in the development of “transits”.
Unlike transits, chromatographic ballads uses multi-channel real-time video input and enables a visitor to interact directly with the neural network via biofeedback.

The Neural Chromatographic Orchestra investigates, via an artistic setting, how human perception reacts to the multifaceted visual impressions of public space. Using an EEG device, visitors can interact with a self-organizing neural network and explore real-time views of an adjacent hall from several perspectives and at various levels of abstraction.

Biological Motivation

The Chromatographic Neural Network is a GPU-based video processing tool. It was inspired by parallel information processing in the visual system of the human brain. Visual information processing inside the brain is a complex process involving various stages. The visual pathway includes the retina, the Lateral Geniculate Nucleus (LGN) and the visual cortex.

Scheme of the optical tract with the image being processed (simplified): http://en.wikipedia.org/wiki/File:Lisa_analysis.png

Low-level visual processing is already active in the various layers of the retina. The interconnection of neurons between retina layers, and the ability to retain information using storage or delayed feedback, allow for filtering the visual image in the space and time domains.

Both image filters and motion detection can easily be achieved by accumulating input from neurons in a local neighborhood, in a massively parallel way.

Our Chromatographic Neural Network uses this approach to cluster colors and to compute the visual flow (or retina flow) from a video source. The resulting attraction vectors and flow vectors are used to transform the memory retained in the memory layer.

The visual output of the system directly corresponds to the state of the output layer of the neural network. The neural layers of the Chromatographic Neural Network are connected to form a feedback loop. This gives rise to a kind of homeostatic system that is structurally coupled to the visual input but develops its own dynamics over time.

The set-up

A visitor enters the site – a highly frequented passage, a spacious hall or a public place. Two video cameras, mounted on tripods, can be moved around at will.

Another camera observes the passers-by – their transits and gatherings – from an elevated location. The video footage from this site is streamed into a neighboring room – the orchestra chamber of the Neural Chromatographic Orchestra.

Here one can see, in front of a large video wall, a monitor displaying the videos from the adjacent room, and the “orchestra pit” – an armchair equipped with a touch device and a neuro-headset. The video wall, showing abstract interpretations of the site itself, should ideally be visible both from the orchestra pit and from the large hall.

The Orchestra Chamber

Inside the chamber the visitor is seated in a comfortable armchair and an assistant helps her put on and adjust the neuro-headset.

The orchestra chamber should be isolated from the public area as much as possible. A sense of deprivation from outside stimuli allows the visitor to gain control over her own perception and achieve a state of mind similar to meditation or clairvoyance.

The Orchestral Performance

Training Cognitive Control

A performance with the Neural Chromatographic Orchestra starts with a training of up to six mental actions, corresponding to the “push/pull”, “left/right“ and “up/down” mental motions provided by the Emotiv Cognitiv suite. The training typically lasts 10 to 30 minutes.

Playing the Sandbox

After successful training the visitor is asked to sit in front of the NeuroVision Sandbox:

The visitor in the orchestra chamber has three modes of conducting the neural network:

  • A menu lets her choose any of the three cameras as a video source: either the bird’s-eye view or one of the cameras that take a pedestrian’s perspective
  • A graphical user interface lets her switch between different neural networks and control their parameters
  • The NeuroHeadset allows her to navigate the parameter space of the selected neural network

Conducting the Orchestra

Once the visitor feels comfortable conducting the NCO on the small screen, she can perform on the large screen, which is also visible from the outside.

On the public screen sliders are not shown, but the conductor may still use a tablet device to access the graphical user interface.

The current position in parameter space is represented by a 3D cursor or wire-frame box, which is very helpful for making the transition from voluntary conducting moves to a style of conducting that is more directly informed by immersion and interaction with the output of the Chromatographic Neural Network.

The Chromatographic Neural Network

The flow of information is arranged into several processing layers. To realize memory, each processing layer is in turn implemented as a stack of one or more memory layers. This allows us to access the state of a neuron at a previous point in time.

Example

The video layer is made up of two layers, so the system can access the state of any input neuron at the current point in time, and its state in the previous cycle.
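
A two-deep memory layer of this kind can be sketched as a small ring buffer of frames: pushing a new frame shifts the old one back, so the state of any input neuron is available at t and t-1. A sketch only – the installation realizes this with texture buffers on the GPU, as described under Implementation below.

```python
# Ring buffer of frames: at(0) is the current state of the layer,
# at(1) the state from the previous cycle, and so on for deeper stacks.
import numpy as np

class MemoryLayer:
    def __init__(self, height, width, depth=2):
        self.frames = np.zeros((depth, height, width, 3))

    def push(self, frame):
        self.frames = np.roll(self.frames, 1, axis=0)
        self.frames[0] = frame

    def at(self, t_minus):
        return self.frames[t_minus]
```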

Processing Layers

The Video layer

The Video layer contains the input neurons. Each neuron corresponds to a pixel of the video source. The Video layer provides the input for the Flow layer.

The Ghost Layer

The Ghost layer represents a haunting image from the past. It implements the long-term memory that interferes and interacts with the current visual input. It does not change over time and is provided as additional input to the Flow layer.

The Flow layer

The Flow layer accumulates the input from the Video layer and the Ghost layer. Each neuron aggregates input from its neighborhood in the Video layer at times (t) and (t-1). The computed 2D vector is directly encoded into the state of the neuron, creating a flow map.

The Blur layers

The Blur layers are used to blur the flow map. While the computation of visual flow is restricted to a very small neighborhood, the blur layer is needed to spread the flow information to a larger region, since flow can only be detected on the edge of motion.

For efficiency reasons the blur function is split into two layers, performing a vertical and a horizontal blur respectively.
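
The reason for the split is cost: a full two-dimensional blur over an N×N neighborhood needs N² samples per pixel, while a horizontal pass followed by a vertical pass needs only 2N for the same result. Here is a sketch of such a separable blur on a flow map, using scipy; the shader version does the same per texel.

```python
# Separable box blur: one horizontal and one vertical pass give the same
# result as a full 2D box blur at a fraction of the sampling cost.
import numpy as np
from scipy.ndimage import convolve1d

def separable_blur(flow_map, radius=4):
    kernel = np.ones(2 * radius + 1)
    kernel /= kernel.sum()
    out = convolve1d(flow_map, kernel, axis=1)  # horizontal pass
    return convolve1d(out, kernel, axis=0)      # vertical pass
```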

Neuron Processing

The state of each neuron corresponds to an RGB color triplet. Every neuron of the Flow layer gets input from corresponding neurons inside a local neighborhood of the input layers. Each of those input samples corresponds to a single synapse. The vector from the center of the neuron towards the input neuron is referred to as the synapse vector.

Color Attraction

To achieve some kind of color dynamics, colors that are close in color space are supposed to attract each other.

The distance between the synapse input and the neuron state in RGB color space serves as a weight, which is used to scale the synapse vector. The sum of the scaled synapse vectors results in a single color attraction vector.

Color Flow

While color attraction is the result of color similarities or differences in space, color flow is the result of color changes over time. Rather than calculating the distance of the neuron state to a single synapse input, its temporal derivative is calculated using input from a neuron and its corresponding memory neuron. This time, the sum of the scaled synapse vectors results in a flow vector.

Encoding

Both color flow and color attraction vectors are added up and their components are encoded in the flow layer.
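
Put together, the update of a single Flow-layer neuron might look like the sketch below: spatial color similarity (attraction) and temporal color change (flow) each scale the synapse vectors, and the summed result is the 2D vector encoded into the neuron's state. An illustrative numpy version only, for interior pixels, with colors assumed to lie in [0, 1].

```python
# One Flow-layer neuron: sum attraction- and flow-weighted synapse vectors
# over the local neighborhood of input neurons (interior pixels only).
import numpy as np

def neuron_vector(video, prev_video, y, x, radius=2):
    center = video[y, x]
    total = np.zeros(2)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            synapse = np.array([dy, dx], dtype=float)  # synapse vector
            sample = video[y + dy, x + dx]
            # Attraction: similar nearby colors pull the neuron toward them
            # (max RGB distance between colors in [0,1] is sqrt(3)).
            attraction = 1.0 - np.linalg.norm(sample - center) / np.sqrt(3)
            # Flow: temporal change at the input neuron adds to the pull.
            change = np.linalg.norm(sample - prev_video[y + dy, x + dx])
            total += (attraction + change) * synapse
    return total  # encoded into the neuron's state as a 2D vector
```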

Parameters

There are various parameters in each layer controlling the amount and direction of color attraction and color flow, the metrics used for calculating color distances, the neuron neighborhood, etc.

Implementation

All neural computation is performed on the GPU using OpenGL and GLSL shaders. This is the mapping from neural metaphors to OpenGL implementation:

Memory layers → Texture-Buffers
Processing Layers → GLSL shaders
Parameters → GLSL uniforms

Outlook

In our implementation, both color flow and attraction are integrated into a single-level flow map. While this generates interesting local interactions, there is little organization on a global level. The work on Multilevel Turing Patterns popularized by Jonathan McCabe shows that it is possible to obtain complex and visually interesting self-organizing patterns without any kind of video input.

Our future research will combine several layers of flow maps, each operating on a different level of detail. Additional directions include alternate color spaces and distance metrics.
In the current model input values are mixed and blurred, resulting in a loss of information over time. We have also been experimenting with entropy-conserving models and are planning to further investigate this direction.

This project is based on two recent artworks, “transits” and “IseethereforeIam”.

Concept: Ursula Damm
Programming: Martin Schneider