Chromatographic Ballads

16.12.2013

Test situation in my studio with brain device, operator Lisa and Martin Schneider

The installation received an honorary mention at VIDA 15.0

The Artwork

Martin explaining the Neurovision software

Chromatographic Orchestra is an artistic installation that allows a visitor to direct a software framework with an EEG device. In an exhibition environment with semi-transparent video screens, the visitor sits in an armchair and learns to navigate – unconsciously, with his/her brain waves – the parameter space of our software, Neurovision.

Neurovision interacts with live video footage of the exhibition site and its surroundings. By navigating with his/her own brain waves, the visitor can define the degree of abstraction of a generative (machine learning) algorithm performed on the footage of several nearby video cameras.

Visualisation of the ideal set-up
ideal set-up for the installation
Our operator Lisa sitting in front of the screen with an EEG device

The installation refers back to painting techniques of the late 19th and early 20th century, when painting became more an analysis of the perception of a setting than a mere representation of it. Impressionism and Cubism fragmented the objects of observation, while the mode of representation was determined by the nature of the human sensory system.

The installation “chromatographic orchestra” does not apply arbitrary algorithms to the live footage: we developed software – the Neurovision framework – which mimics the visual system of the human brain. We thus ask whether our algorithms, by anticipating processing steps of our brain, suit the well-being of the spectator.

Artistic Motivation

How much complexity can our senses endure – or rather, how could we make what we see and hear endurable? Many communication tools have been developed to adjust human capabilities to the requirements of the ever more complex city.

Our installation poses the opposite question: how can information emerging from the city be adjusted to the capabilities of the human brain, so that processing it is a pleasure to the eye and the mind?

At the core of our installation is the NeuroVision Sandbox, a custom-made framework for generative video processing in the browser, based on WebGL shaders.

Inside this Sandbox we developed several sketches, culminating in the “Chromatographic Neural Network”, where both optical flow and color information of the scene are processed, inspired by information processing in the human visual system.

We critically assess the effect of our installation on the human sensory system:

  • Does it enhance our perception of the city in a meaningful way?
  • Can it affect the semantic level of visual experience – and if so, how?
  • Will it create a symbiotic feedback loop with the visitor’s personal way of interpreting a scene?
  • Will it enable alternate states of consciousness? Could it even allow visitors to experience the site in a sub-conscious state of “computer augmented clairvoyance”?

Installation

In a location close to the site a single visitor directs a video-presentation on a large screen with a setup we like to call “the Neural Chromatographic Orchestra” (NCO).
Our installation uses an EEG-Device (Emotiv NeuroHeadset) that lets visitors interact with a custom neural network. The setup allows visitors to navigate through various levels of abstraction by altering the parameters of the artificial neural net.

With the NCO device, a visitor can select and explore real-time views provided by three cameras located in public space, with different perspectives on the passers-by (bird's-eye view and close-ups).

The installation is based on the NeuroVision Sandbox used in the development of “transits”.
Unlike transits, chromatographic ballads uses multi-channel real-time video input and enables a visitor to interact directly with the neural network via biofeedback.

The Neural Chromatographic Orchestra investigates how human perception reacts to the multifaceted visual impressions of public space via an artistic setting. Using an EEG-Device visitors can interact with a self-organizing neural network and explore real-time views of an adjacent hall from several perspectives and at various levels of abstraction.

Biological Motivation

The Chromatographic Neural Network is a GPU-based video processing tool. It was inspired by parallel information processing in the visual system of the human brain. Visual information processing inside the brain is a complex process involving various processing stages. The visual pathway includes the retina, the Lateral Geniculate Nucleus (LGN) and the visual cortex.

Scheme of the optical tract with the image being processed (simplified): http://en.wikipedia.org/wiki/File:Lisa_analysis.png

Low-level visual processing is already active in the various layers of the retina. The interconnection of neurons between retina layers, and the ability to retain information using storage or delayed feedback, allow for filtering the visual image in the space and time domains.

Both image filters and motion detection can easily be achieved by accumulating input from neurons in a local neighborhood, in a massively parallel way.

Our Chromatographic Neural Network uses this approach to cluster colors and to compute the visual flow (or retina flow) from a video source. The resulting attraction vectors and flow vectors are used to transform the memory retained in the memory layer.

The visual output of the system directly corresponds to the state of the output layer of the neural network. The neural layers of the Chromatographic Neural Network are connected to form a feedback loop. This gives rise to a kind of homeostatic system that is structurally coupled to the visual input but develops its own dynamics over time.

The set-up

A visitor enters the site – a highly frequented passage, a spacious hall or a public place. Two video cameras, each mounted on a tripod, can be moved around at will.

Another camera observes the passers-by – their transits and gatherings – from an elevated location. The video footage from this site is streamed into a neighboring room – the orchestra chamber of the Neural Chromatographic Orchestra.

Here one can see – in front of a large video wall – a monitor displaying the videos from the adjacent room, and the “orchestra pit”: an armchair equipped with a touch device and a neuro-headset. The video wall, showing abstract interpretations of the site itself, should ideally be visible both from the orchestra pit and from the large hall.

The Orchestra Chamber

Inside the chamber the visitor is seated in a comfortable armchair and an assistant helps her put on and adjust the neuro-headset.

The orchestra chamber should be isolated from the public area as much as possible. A sense of deprivation from outside stimuli allows the visitor to gain control over her own perception and achieve a state of mind similar to meditation or clairvoyance.

The Orchestral Performance

Training Cognitive Control

A performance with the Neural Chromatographic Orchestra starts with a training of up to six mental actions, corresponding to the “push/pull”, “left/right“ and “up/down” mental motions provided by the Emotiv Cognitiv suite. The training typically lasts 10 to 30 minutes.

Playing the Sandbox

After successful training the visitor is asked to sit in front of the NeuroVision Sandbox:

The visitor in the orchestra chamber has three modes of conducting the neural network:

  • A menu lets her choose any of the three cameras as a video source – either the bird's-eye view or one of the cameras that take a pedestrian's perspective.
  • A graphical user interface lets her switch between different neural networks and control their parameters.
  • The NeuroHeadset allows her to navigate the parameter space of the selected neural network.

Conducting the Orchestra

Once the visitor feels comfortable conducting the NCO on the small screen, she can perform on the large screen, which is also visible from the outside.

On the public screen, sliders are not shown, but the conductor may still use a tablet device to access the graphical user interface.

The current position in parameter space is represented by a 3D cursor or wire-frame box, which is very helpful for making the transition from voluntary conducting moves to a style of conducting that is more directly informed by immersion and interaction with the output of the Chromatographic Neural Network.

The Chromatographic Neural Network

The flow of information is arranged into several processing layers. To realize memory, each processing layer is in turn implemented as a stack of one or more memory layers. This allows us to access the state of a neuron at a previous point in time.

Example

The Video layer is made up of two memory layers, so the system can access the state of any input neuron at the current point in time, and its state in the previous cycle.
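
As an illustration, here is a minimal sketch of how such a two-deep memory stack can be realized with ping-pong texture buffers. It is not the actual Neurovision source; the uniform names are assumptions:

    // Hypothetical sketch, not the Neurovision source: a processing layer
    // reading a two-deep memory stack. After each cycle the two texture
    // buffers are swapped ("ping-pong" rendering).
    precision mediump float;
    uniform sampler2D u_video_t0;  // input neurons at time t
    uniform sampler2D u_video_t1;  // the same neurons at time t-1
    varying vec2 v_uv;

    void main() {
        vec3 now    = texture2D(u_video_t0, v_uv).rgb;
        vec3 before = texture2D(u_video_t1, v_uv).rgb;
        // Downstream layers can combine present and past states, e.g. a
        // simple temporal difference as a motion cue:
        gl_FragColor = vec4(abs(now - before), 1.0);
    }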

Processing Layers

The Video layer

The Video layer contains the input neurons. Each neuron corresponds to a pixel of the video source. The Video layer provides the input for the Flow layer.

The Ghost layer

The Ghost layer represents a haunting image from the past. It implements the long-term memory that interferes and interacts with the current visual input. It does not change over time, and is provided as additional input to the Flow layer.

The Flow layer

The Flow layer accumulates the input from the Video layer and the Ghost layer. Each neuron aggregates input from its neighborhood in the Video layer at times (t) and (t-1). The computed 2D vector is directly encoded into the state of the neuron, creating a flow map.

The Blur layers

The Blur layers are used to blur the flow map. While the computation of visual flow is restricted to a very small neighborhood, the blur layers are needed to spread the flow information to a larger region, since flow can only be detected at the edges of motion.

For efficiency reasons the blur function is split into two layers, performing a vertical and a horizontal blur respectively.
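
A sketch of the two-pass idea: the same hypothetical shader runs once per axis, with a direction uniform selecting the pass. Tap count, weights and names are illustrative assumptions, not the Neurovision implementation:

    // Hypothetical sketch of one pass of the separable blur: run twice,
    // with u_direction = (1,0) horizontally, then (0,1) vertically.
    precision mediump float;
    uniform sampler2D u_flowMap;  // flow map to be spread out
    uniform vec2 u_direction;     // (1,0) or (0,1)
    uniform vec2 u_texel;         // size of one pixel in texture coordinates
    varying vec2 v_uv;

    void main() {
        // 5-tap binomial weights (1 4 6 4 1)/16 approximate a Gaussian.
        float w[3];
        w[0] = 0.375; w[1] = 0.25; w[2] = 0.0625;
        vec4 sum = texture2D(u_flowMap, v_uv) * w[0];
        for (int i = 1; i <= 2; i++) {
            vec2 off = u_direction * u_texel * float(i);
            sum += (texture2D(u_flowMap, v_uv + off) +
                    texture2D(u_flowMap, v_uv - off)) * w[i];
        }
        gl_FragColor = sum;
    }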

Neuron Processing

The state of each neuron corresponds to an RGB color triplet. Every neuron of the Flow layer gets input from corresponding neurons inside a local neighborhood of the input layers. Each of those input samples corresponds to a single synapse. The vector from the center of the neuron towards the input neuron is referred to as the synapse vector.

Color Attraction

To achieve some kind of color dynamics, colors that are close in color space are supposed to attract each other.

The distance between the synapse input and the neuron state in RGB color space serves as a weight, which is used to scale the synapse vector. The sum of the scaled synapse vectors results in a single color attraction vector.
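
In shader terms this might look roughly as follows. This is a hedged sketch: the neighborhood size, uniform names and visualization are assumptions, not the Neurovision source:

    // Hypothetical sketch of the color attraction step: the RGB distance
    // between a neighbor sample (synapse input) and the neuron's own state
    // scales the synapse vector; the scaled vectors sum to one attraction
    // vector.
    precision mediump float;
    uniform sampler2D u_state;  // neuron states of the input layer
    uniform vec2 u_texel;       // size of one pixel in texture coordinates
    uniform float u_gain;       // parameter: strength of color attraction
    varying vec2 v_uv;

    vec2 colorAttraction(vec3 self) {
        vec2 attraction = vec2(0.0);
        for (int dx = -2; dx <= 2; dx++) {
            for (int dy = -2; dy <= 2; dy++) {
                vec2 synapse  = vec2(float(dx), float(dy));  // synapse vector
                vec3 neighbor = texture2D(u_state, v_uv + synapse * u_texel).rgb;
                attraction   += synapse * distance(self, neighbor);
            }
        }
        return attraction * u_gain;
    }

    void main() {
        vec2 a = colorAttraction(texture2D(u_state, v_uv).rgb);
        // Visualize the attraction vector, encoded around 0.5 = zero.
        gl_FragColor = vec4(a * 0.5 + 0.5, 0.0, 1.0);
    }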

Color Flow

While color attraction is the result of color similarities or differences in space, color flow is the result of color changes over time. Rather than calculating the distance of the neuron state to a single synapse input, its temporal derivative is calculated using input from a neuron and its corresponding memory neuron. This time the sum of scaled synapse vectors results in a flow vector.

Encoding

Both color flow and color attraction vectors are added up and their components are encoded in the flow layer.
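
A companion sketch for the temporal side and the encoding step, under the same assumptions as above (u_state_t0/u_state_t1 as the two-deep memory stack; all names illustrative):

    // Hypothetical sketch of color flow and encoding: the temporal
    // derivative per synapse is the distance between a neighbor neuron
    // and its memory neuron (its state one cycle earlier), again scaling
    // the synapse vector.
    precision mediump float;
    uniform sampler2D u_state_t0;  // input layer at time t
    uniform sampler2D u_state_t1;  // input layer at time t-1 (memory neurons)
    uniform vec2 u_texel;
    uniform float u_gain;          // parameter: strength of color flow
    varying vec2 v_uv;

    vec2 colorFlow() {
        vec2 flow = vec2(0.0);
        for (int dx = -2; dx <= 2; dx++) {
            for (int dy = -2; dy <= 2; dy++) {
                vec2 synapse = vec2(float(dx), float(dy));
                vec2 uv = v_uv + synapse * u_texel;
                // Temporal derivative: how much this neighbor just changed.
                float weight = distance(texture2D(u_state_t0, uv).rgb,
                                        texture2D(u_state_t1, uv).rgb);
                flow += synapse * weight;
            }
        }
        return flow * u_gain;
    }

    void main() {
        // The color attraction vector from the previous sketch would be
        // added here before encoding.
        vec2 v = colorFlow();
        // Encode the 2D vector's components into the neuron state
        // (0.5 = zero flow, so negative components stay representable).
        gl_FragColor = vec4(v * 0.5 + 0.5, 0.0, 1.0);
    }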

Parameters

There are various parameters in each layer controlling the amount and direction of color attraction and color flow, the metrics used for calculating color distances, the neuron neighborhood, etc.

Implementation

All neural computation is performed on the GPU using OpenGL and GLSL shaders. This is the mapping from neural metaphors to OpenGL implementation:

Memory layers → Texture-Buffers
Processing Layers → GLSL shaders
Parameters → GLSL uniforms
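
In a shader skeleton, that mapping might look as follows (an illustrative outline, not the actual Neurovision source):

    // Hypothetical skeleton of one processing layer.
    precision mediump float;
    uniform sampler2D u_layer_t0;  // memory layer -> texture buffer (time t)
    uniform sampler2D u_layer_t1;  // memory layer -> texture buffer (time t-1)
    uniform float u_attraction;    // parameter -> GLSL uniform
    uniform float u_flow;          // parameter -> GLSL uniform
    varying vec2 v_uv;

    void main() {
        // processing layer -> GLSL shader: the per-neuron update goes here.
        gl_FragColor = texture2D(u_layer_t0, v_uv);
    }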

Outlook

In our implementation both color flow and attraction are integrated into a single-level flow map. While this generates interesting local interactions, there is little organization on a global level. The work on Multilevel Turing Patterns, as popularized by Jonathan McCabe, shows that it is possible to obtain complex and visually interesting self-organizing patterns without any kind of video input.

Our future research will combine several layers of flow maps, each operating on a different level of detail. Additional directions include alternate color spaces and distance metrics.
In the current model input values are mixed and blurred, resulting in a loss of information over time. We have also been experimenting with entropy-conserving models and are planning to further investigate this direction.

This project is based on two recent artworks, “transits” and “I see therefore I am”.

Concept: Ursula Damm
Programming: Martin Schneider

I am a Sensor

08.10.2013

From the Ether into the Body

Since the 1990s, many devices and machines have found their place between the artist and his or her audience. Communication happens in a controlled, planned, downsized way – in short: posthuman. In doing so we accept that every interface deflects attention from our senses in favour of technical devices and data.

An experiment shows what happens when the body is given back its place as the ultimate instance of evaluation and the natural senses of humans and other living beings are brought into the center of consideration.

The video was produced for the annual meeting of the German association of media sciences, Leuphana University Lüneburg, 2013.

Website of the event

Interview about the work at the conference
Sensor data measured at the toes

Transits

30.09.2012

Trailer from the one-hour video

For the work Transits, Ursula Damm filmed the Aeschenplatz in Basel during a period of 24 hours. Every day, thousands of cars, pedestrians and trams pass Basel’s most important traffic junction. “Transits” captures and alienates the recorded stream of motion with an intelligent algorithm that was developed by the artist and her team: It evaluates the audiovisual data and categorizes the patterns of movement and the color schemes. Based on the human memory structure and the visual system, the artificial neuronal network integrated in the software – where every pixel corresponds to one neuron – computes the visual flow. Various perceptional image layers overlap to generate an intriguing visual language in which stationary picture elements compete against the color scene. This begins at night and leads via dawn and noon to dusk; at the same time it is pervaded by arbitrary passersby, by cars, trams and people in the streets. The generative video interprets movements as atmospheres and eventually throws the viewer back to an individual perception of the city.

A detailed description of the algorithms, and of a further development of an interface for the installation, can be found here.

Transits was produced for the exhibition “Sensing Place” at the House of Electronic Arts Basel and is part of the museum’s collection.

close-up on the pedestrian level
People standing at the platform
top view on the Aeschenplatz

Screen print

Concept of the installation:
Ursula Damm
Preliminary work: Matthias Weber
Software: Martin Schneider
Sound: Maximilian Netter

The outline of paradise (sustainable luminosity, video)

25.06.2012

This artwork was initially produced as a product provided by super-cell.org, a supermarket offering speculative products related to synthetic biology:

What would our cities look like if advertising messages were produced not by artificial lighting but by swarming midges, glowing like fireflies? Would this natural light production not also be sensuously much more appealing than the techno-aesthetic of conventional advertising? “The Outline of Paradise” explores the promises and capabilities of technoscience and develops an installation out of these narratives. It steers the technology towards a natural, sensual aesthetic that would also be sustainable.

In order to implement this new technology, a light source produced by nature itself should be used and modified: that of animals living in the dark which have formed their own light organs. In the deep sea, such an organ enables orientation – at night it helps a firefly attract a mate. “Sustainable Luminosity” takes up this suggestion of nature and makes use of this light-emitting capability.

Sustainable Luminosity should take as its model the form of a swarm of glow-worms in the act of wooing a partner, and develop new advertising media for cities. For the installation we would train non-biting midges (Chironomidae) to fly in such a way that their swarm takes the shape of advertising messages. The insects are genetically modified to glow in the dark and to alter their genetic make-up according to the training and sound input we provide. This initial training would be inherited over generations and would keep the swarm in shape. In this manner, larvae for breeding can be purchased as an individual product and thus introduced into the market.

With super-cell.org we founded, in 2010, the first online supermarket of synthetic biology. It was created by my students as a speculative design project within the iGEM competition, a student competition at MIT in Boston. The team was a collaboration between students of Prof. Roland Eils, Bioquant / Ruprecht-Karls-University of Heidelberg, and staff and students of my chair at the Bauhaus University Weimar.

As part of super-cell, I developed the product “sustainable luminosity” as a model for the products my students subsequently created.

Today I wish, unlike my students, to experience my work as an artist not only virtually. I have therefore started to breed midges, whose larvae were made available to me by the Senckenberg Institute, and to train them in a sound-feedback installation so that their swarms fly in formations. The installation formally examines the material and installative realities, and the everyday tasks, that would occur when such fantasies become real.

Situation at ACC – a swarm loses its shape

Video edited by Thomas Hawranke

For our experiment, we pretend to train non-biting midges (Chironomidae) to fly in such a way that their swarm takes the shape of advertising messages. The insects are genetically modified to glow in the dark and to alter their genetic make-up according to the training and sound input we provide. This initial training will be inherited over generations and keep the swarm in shape.

How can letters be taught to insects? How can we teach the alphabet to midges?
As chironomidae are sensitive to sound, we use a real-time sound spatialisation system to teach the midges. So far we have only been able to produce clouds of midges forming a simple LED font.

Natural midges (chironomidae) form swarms in the shape of a circulating sphere. The swarms consist of male adults congregating for courtship, organized through the sound of the wingbeats of the male midges. Our system uses the chironomidae's sensitivity to sound and organizes them with synthetic wingbeat sounds.

Larvae, visit to a production facility in Poland
Selection facility
Training station for midges
Code assigned to letters


I see therefore I am

14.01.2011

Exhibition at Haus-am-Horn, Weimar 2011

Two cameras, connected to a computer. The steles carrying the cameras are fitted with casters; they are movable and can be repositioned by visitors. Buttons on the steles allow visitors to activate the image of the respective camera as input for the software.



Inside the “Haus am Horn” – the historic Bauhaus model house – two cameras stand on movable wooden tripods. One camera looks out of the window, the other is directed at the interior of the building. Visitors can move these cameras and select them as input for an image that emerges as a projection in the living room: the first camera sensitizes the system for a view, and the image of the second camera emerges in the active regions of the first image. Something becomes visible only where something has already happened before.

The installation operates with the mechanisms of perception and examines algorithmic procedures for their associative and artistic qualities. Like us humans, points-of-view uses two “eyes” to see. However, these cameras never “see” simultaneously, but offset in space and time. Mounted on two movable steles fitted with casters, they observe the room in which they stand from the perspective of the exhibition's visitors. By pressing a button on a stele, visitors can activate the respective camera, feeding its image into a video projection. The projection computes an image that not only depicts the current perspective of the active camera, but also, via a programmed memory (a simplified neural map), blends the current image with the previous perspectives. The software responds especially in those regions of the image where changes have already occurred before. The result is a blending of living “moments” and perspectives, similar to the way our perception recognizes images – not from prior knowledge, but by deriving the new from what came before. Moving the camera stands around the room and repeatedly activating the other camera lets visitors playfully explore the characteristics of the software. In doing so, the visitor experiences how technology comes ever closer to the human principles of understanding and perception, and how the visual aesthetics of past epochs (such as Impressionism or Cubism) come to life again in the artifacts of programmed image worlds.
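
The original software is not reproduced here, but the gating principle – the second camera becomes visible only where the first camera has recently registered change – can be sketched as a WebGL-style fragment shader. All names, the decay factor and the alpha trick are assumptions for illustration:

    // Hypothetical sketch: camera two appears only where camera one
    // has recently seen change.
    precision mediump float;
    uniform sampler2D u_cam1;      // current frame of the first camera
    uniform sampler2D u_cam1Prev;  // its frame from the previous cycle
    uniform sampler2D u_cam2;      // current frame of the second camera
    uniform sampler2D u_memory;    // accumulated activity map (ping-pong buffer)
    varying vec2 v_uv;

    void main() {
        // Frame difference of camera one marks the "active" image regions.
        vec3 d = abs(texture2D(u_cam1, v_uv).rgb -
                     texture2D(u_cam1Prev, v_uv).rgb);
        float change = (d.r + d.g + d.b) / 3.0;

        // Simplified neural map: old activity fades, new change is added.
        float activity = min(texture2D(u_memory, v_uv).a * 0.95 + change, 1.0);

        // Camera two appears only where something has already happened;
        // the alpha channel carries the activity map into the next cycle.
        gl_FragColor = vec4(texture2D(u_cam2, v_uv).rgb * activity, activity);
    }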

greenhouse converter

18.03.2010

greenhouse converter, installation view

The greenhouse converter is an apparatus for algae, water fleas and people.
Water from a fountain, enriched with atmospheric gases, especially carbon dioxide, is pumped from beneath via an air supply into an aquarium.
This feeds an algal culture which, influenced by light, produces biomass and oxygen from the carbon dioxide.
The light appears in the aquarium as the word “beloved”, in blue, made up of single LEDs which can be individually controlled.
“Beloved” is a reference to the endosymbiosis theories of Lynn Margulis. According to her, cells with a nucleus originate from symbiotic relationships between different types of bacteria.

User interface

Then water from the fountain is also added, so that, with increased carbon dioxide content, plant growth is stimulated.
Over time, the algae grow and grow. They also colonise the word, formed of LEDs, in the water, reducing its legibility.
If algal growth is excessive, a lot of carbon dioxide is broken down, but the ecological balance threatens to tip over unless the water fleas dispose of the algae by vigorously consuming them. To compensate, the light supply is then reduced, since water fleas avoid yellow daylight and read blue light as deep water, which protects them from their enemies.

daphnia attracted by different light colours
internal states of the LEDs, working as sensors and effectors


The status of the display of the word “beloved” then serves as a thermometer of this little ecosystem, or rather of its relationship with the greater biosphere outside it.
If the word can be clearly read, with a lot of points of light displaying as blue, then the system is in a state of balance. Distortions in the ecological system will be manifested in the word becoming increasingly hard to read.

experiments with different algae
pump mechanism
Interface to regulate Oxygen input

To operate the greenhouse converter there is, alongside the aquarium, a box for a pump, with which passers-by can control the exchange of water and gas using a lever. An LCD display shows the oxygen content of the water outside and inside the aquarium. The lever creates the illusion of being in control and is a concession to the desire to be able to use technology to control nature, which derives from a state of balance.

aquarium with sufficient algal growth
Diagrammatic drawing of input and output of the system

Documentation (long version)

documentation

08.09.2009

won competitions

  • Uni Cottbus (1999)
  • metro station Wehrhahnlinie Düsseldorf (2002)

competitions

  • Campus der Fachhochschule Regensburg (2004)
  • Landesvertretung Rheinland-Pfalz, Berlin (2003)
  • Vorraum der Uni-Mensa der Uni-Koblenz (2000)
  • Gestaltung des Ortszentrums Wörgl/Österreich (1999/2000)

download the documentation (PDF, 1 MB)

598

01.09.2009

598 (teaching the ground)

Installation at “Springhornhof” as part of the exhibition “Landscape 2.0”

The video installation “598” comprises a high-definition video projection, digitally manipulated sound and five mats made of unprocessed sheep's wool (a 42-minute video stream generated with custom-made software).

“598” shows a landscape of primitive heathland, filtered through computer software operating without our knowledge or powers of interpretation. As if in our mind’s eye, a kind of landscape of the perceived image builds up. In their movements sheep are oriented not just to the group, but also in relation to the supply of food beneath them. The movements of individual animals, as well as of flocks as a whole, thus tell us about the composition of the landscape. The image is in itself already an interpretive vision of the soil beneath their feet which, over the centuries, has been used as pasture in the same way.

The form of the Lüneburg Heath already shows the effects of grazing sheep. Their consumption of grass and young shoots protects the heath from afforestation and creates clearings and open, green spaces; in other words, a heath. The appearance of freely grazed areas has, over the centuries, allowed a landscape to develop whose form derives from this symbiosis. But it is possible to recognize other forms of co-operation as well, and the sheep, which lack a sense of will, are a rewarding example for illustrating the study of behavioural strategies that go beyond egotism.

It is now not enough for us humans simply to observe and to learn to understand. In the computer we have designed a tool that analyses and categorizes the grazing sheep, and the calm landscape, by means of a structure that is capable of learning. This software learns from the landscape and from the 598 sheep – recorded by a video camera mounted on a crane, as if we humans were looking down omnisciently from the clouds.

A new, artificial landscape develops which allows properties of the things observed to be perceived. These properties become recognisable through the movement of the sheep across the heath, but are not evident in individual images.

Trailer of the video


Screen print of the installation

About the software:
The current software computes difference images between the current frame of a video and a reference image in order to detect motion. It also contains a kind of neural network whose neurons individually adapt to the difference values of each single pixel. Each pixel corresponds to one neuron. The software is to be developed further, so that each neuron no longer learns only the pixel assigned to it in isolation: the neighborhood relations of both the pixels and the neurons are to be taken into account as well. This creates a self-organizing system; neurons that have already learned certain difference values are attracted more strongly to similar values and learn them more readily. Regions will thus form within the neural network that correspond to similar difference values on the one hand, but are still bound to the location of their respective pixels on the other.
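
The original implementation by Matthias Weber and Sebastian Stang is not reproduced here; as a rough sketch of the per-pixel adaptation described above, assuming a GLSL port with illustrative names:

    // Hypothetical sketch of the per-pixel learning: a difference image
    // against a reference frame, with one slowly adapting neuron per pixel.
    precision mediump float;
    uniform sampler2D u_frame;      // current video frame
    uniform sampler2D u_reference;  // reference image
    uniform sampler2D u_neurons;    // neuron states (ping-pong buffer)
    uniform float u_rate;           // adaptation speed of each neuron
    varying vec2 v_uv;

    void main() {
        // Difference image: deviation of the current frame from the reference.
        vec3 diff = abs(texture2D(u_frame, v_uv).rgb -
                        texture2D(u_reference, v_uv).rgb);
        // Each neuron (one per pixel) adapts its state toward the
        // difference value of its own pixel.
        vec3 state = texture2D(u_neurons, v_uv).rgb;
        gl_FragColor = vec4(mix(state, diff, u_rate), 1.0);
    }
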
August 2009
Programming: Matthias Weber, Sebastian Stang
Sound: Maximilian Netter, Sebastian Stang

on the nature of things

01.06.2009

on the nature of things, 2009
comprises five video projections which atmospherically depict and re-interpret our habitat.
In the exhibition space five projections are set up like a landscape. Each of these projections represents one aspect of the manifestation of nature as altered by civilization, so creating a space in which a “third nature” is experienced. The projections are always visible but need to be viewed from some distance, so that an audio environment can evolve in front of each of them. The areas can be experienced just by strolling through them.
Each projection consists of one video loop lasting between 90 seconds and six minutes. In these projections, weather and global energy balance phenomena are set to the sounds of social events such as football, Formula One racing or group laughter. In these, people develop their desires and visions with collective power.
The installations seek out people’s motivations in designing and running their civilization – sporting and event culture, energy management as the heartbeat of modern society. In the end the dream of cold technology sweeps temperature charts and climate change before it.
In his work On the Nature of Things, Lucretius still presupposed a closed cosmos and the presence of natural deities. But in the age of the Internet of Things, the title of his work assumes a new flavour: human artefacts acquire their own nature, moving, once they have been created and located, into a self-explanatory state of existence. Here, then, technology appears to acquire its own justification for existence which, like the creation of the world, is beyond dispute.
In this sense I now depict the first, original nature as a consequence of human behaviour, as it can certainly be understood in this era of climate change.
Social upheaval is exemplified not only climatically, or by way of natural disasters, but also by social phenomena and processes.
Laola

On 27 February 1990 I went to the Netherlands to film the sea. I had selected the period of spring tides in the hope of a moderate sea swell. Unexpectedly a storm of historic proportions developed. We had great difficulty keeping the tripod steady and not being blown away ourselves, camera and all. Nineteen years later climate change is on everyone’s lips. Our society has changed a great deal.
I set the surges of enthusiasm from crowds in football stadiums to the waves. With all the drama of the situation – high tides and a rising sea level – a sense of euphoria persisted, stemming as much from the power of the sea as from the naturally synchronised voices of thousands of spectators in a stadium.
quichote

Wind turbines, nothing but wind turbines in the landscape, rotating at slightly different speeds – yet not with the local sound of the wind but with the sound of Formula One racing.
We are no longer used to believing that an element in motion moves “by itself” or “naturally”; we instantly assume it to be driven mechanically, and appropriate this with enthusiasm. In the video a sound space develops in which a car is allocated to each wind turbine, differentiated by its close or distant location. A vision of a technically codified landscape emerges, in which artefacts of civilization appear to be causing natural phenomena such as the wind.

aerial photos [1993-1997]

27.02.2009

Pattern drawing over an aerial photo (Aachen), 1993

Pattern drawing

Aerial photo of Milan, with geometric drawings interpreting axes of movement and their relation to the districts

Aerial photo of Milan

Installation of pattern drawings made during and for the programming of trace pattern, together with photos of stones and rocks and e-mail correspondence with computer scientists

Pattern drawing

Pattern drawing over rocks in Finnmark (northern Norway), 1997

Pattern drawing over rocks in Finnmark