the outline of paradise (installation) [2014]

  • the sound box with control station
  • look into the box
  • computer control (traces) of sound input
  • video camera, loudspeaker
  • Christina Meissner sitting outside the box with control monitor
  • Christina Meissner playing
  • Christina Meissner with control monitor
  • corresponding vibrations
  • playing with the swarm
  • the midges, loudspeakers
  • swarm
  • traces of midges on monitor

In a sound-insulated box with controlled lighting, non-biting midges (Chironomidae) are bred. Inside the box, midge eggs and larvae live in an aquarium with sand and water. The water is aerated and the aquarium supplied with abundant artificial daylight. The choice of midge (Chironomus riparius, a laboratory strain) allows for captive breeding.

Inside the aquarium a microphone records the sounds of flying midges.

A matrix of loudspeakers exposes the midges to the sound of beating wings, so that they adapt their behavior to the acoustically simulated presence of counterparts – and this behavior can in turn be steered.

A computer program correlates the sounds with video recordings, allowing the swarming behaviour of the midges to be conditioned.

Concept: Ursula Damm
Cello, Sounds: Christina Meissner
Programming: Sebastian Stang

I am a Sensor [2013]

I am a Sensor from resoutionable on Vimeo.

Since the 1990s, a whole array of apparatuses and devices has pushed its way between the artist and the audience. Communication happens planned, controlled, reduced, extended – in short: posthuman. In the process it is accepted that every device already diverts the performer's attention from his or her own body towards technical artefacts.

A series of experiments shows what happens when the body regains its place as the ultimate instance of evaluation, and the natural senses of humans and other living beings move to the centre of attention.

The video premiered at the annual conference of the Gesellschaft für Medienwissenschaften, Leuphana Universität Lüneburg.

Chromatographic Ballads [2013]

chromatographic ballads from resoutionable on Vimeo.

The installation received an honorary mention at VIDA 15.0.

The Artwork

Chromatographic Orchestra is an artistic installation that allows a visitor to direct a software framework with an EEG device. In an exhibition environment with semi-transparent video screens, a visitor sits in an armchair and learns to navigate – unconsciously, with his or her brain waves – the parameter space of our software, Neurovision.

Neurovision interacts with live video footage of the exhibition location and its surroundings. By navigating with his or her own brain waves, the visitor can define the degree of abstraction of a generative (machine-learning) algorithm performed on the footage of several nearby video cameras.

Lisa training the Interface


The installation refers back to painting techniques of the late 19th and early 20th century, when painting became more an analysis of the perception of a setting than a mere representation of it. Impressionism and Cubism fragmented the objects of observation, while the mode of representation was determined by the nature of the human sensory system.

The installation “chromatographic orchestra” does not apply arbitrary algorithms to the live footage: we developed a software framework – Neurovision – which mimics the visual system of the human brain. Thus we ask whether our algorithms, by anticipating processing steps of our brain, suit the well-being of the spectator.

Artistic Motivation

How much complexity can our senses endure – or rather, how could we make endurable what we see and hear? Many communication tools have been developed to adjust human capabilities to the requirements of the ever more complex city.

Our installation poses the opposite question: How can information emerging from the city be adjusted to the capabilities of the human brain, so processing them is a pleasure to the eye and the mind?

At the core of our installation is the NeuroVision Sandbox, a custom-made framework for generative video processing in the browser, based on WebGL shaders.

Martin explaining the Neurovision software


Inside this Sandbox we developed several sketches, culminating in the
“Chromatographic Neural Network”, where both optical flow and color information of the scene are processed, inspired by information processing in the human visual system.

We critically assess the effect of our installation on the human sensory system:

  • Does it enhance our perception of the city in a meaningful way?
  • Can it – and if so, how – affect the semantic level of visual experience?
  • Will it create a symbiotic feedback loop with the visitor’s personal way to interpret a scene?
  • Will it enable alternate states of consciousness? Could it even allow visitors to experience the site in a sub-conscious state of “computer augmented clairvoyance”?

Installation


In a location close to the site a single visitor directs a video-presentation on a large screen with a setup we like to call “the Neural Chromatographic Orchestra” (NCO).
Our installation uses an EEG-Device (Emotiv NeuroHeadset) that lets visitors interact with a custom neural network. The setup allows visitors to navigate through various levels of abstraction by altering the parameters of the artificial neural net.

With the NCO device, a visitor can select and explore real-time views provided by three cameras located in public space, with different perspectives on the passers-by (bird's-eye view and close-ups).

The installation is based on the NeuroVision Sandbox used in the development of “transits”.
Unlike transits, chromatographic ballads uses multi-channel real-time video input and enables a visitor to interact directly with the neural network via biofeedback.

The Neural Chromatographic Orchestra investigates how human perception reacts to the multifaceted visual impressions of public space via an artistic setting. Using an EEG-Device visitors can interact with a self-organizing neural network and explore real-time views of an adjacent hall from several perspectives and at various levels of abstraction.

Biological Motivation

The Chromatographic Neural Network is a GPU-based video processing tool. It was inspired by parallel information processing in the visual system of the human brain. Visual information processing in the brain is a complex process involving various processing stages. The visual pathway includes the retina, the Lateral Geniculate Nucleus (LGN) and the visual cortex.

Scheme of the optical tract with the image being processed (simplified): http://en.wikipedia.org/wiki/File:Lisa_analysis.png

Low-level visual processing is already active at the various layers of the retina. The interconnection of neurons between retina layers, and the ability to retain information through storage or delayed feedback, allow for filtering the visual image in the space and time domains.

Both image filters and motion detection can easily be achieved by accumulating input from neurons in a local neighborhood, in a massively parallel way.

Our Chromatographic Neural Network uses this approach to cluster colors and to compute the visual flow (or retinal flow) from a video source. The resulting attraction vectors and flow vectors are used to transform the memory retained in the memory layer.

The visual output of the system directly corresponds to the state of the output layer of the neural network. The neural layers of the Chromatographic Neural Network are connected to form a feedback loop. This gives rise to a kind of homeostatic system that is structurally coupled to the visual input but develops its own dynamics over time.

The set-up


main hall with passers-by

A visitor enters the site – a highly frequented passage, a spacious hall or a public square. Two video cameras, mounted on tripods, can be moved around at will.

Another camera observes the passers-by – their transits and gatherings – from an elevated location. The video footage from this site is streamed into a neighboring room – the orchestra chamber of the Neural Chromatographic Orchestra.

Here one can see – in front of a large video wall – a monitor displaying the videos from the adjacent room, and the “orchestra pit”: an armchair equipped with a touch device and a neuro-headset. The video wall, showing abstract interpretations of the site itself, should ideally be visible both from the orchestra pit and from the large hall.

The Orchestra Chamber


view from the “orchestra chamber”

Inside the chamber the visitor is seated in a comfortable armchair and an assistant helps her put on and adjust the neuro-headset.

The orchestra chamber should be isolated from the public area as much as possible. A sense of deprivation from outside stimuli allows the visitor to gain control over her own perception and achieve a state of mind similar to meditation or clairvoyance.

The Orchestral Performance

Training Cognitive Control

A performance with the Neural Chromatographic Orchestra starts with a training of up to six mental actions, corresponding to the “push/pull”, “left/right“ and “up/down” mental motions provided by the Emotiv Cognitiv suite. The training typically lasts 10 to 30 minutes.

Playing the Sandbox

After successful training the visitor is asked to sit in front of the NeuroVision Sandbox:

The visitor in the orchestra chamber has three modes of conducting the neural network:

  • A menu lets her choose any of the three cameras as a video source: either the bird's-eye view or one of the cameras that take a pedestrian's perspective
  • A graphical user interface lets her switch between different neural networks and control their parameters
  • The NeuroHeadset allows her to navigate the parameter space of the selected neural network

http://perceptify.com/neurovision/

Conducting the Orchestra

Once the visitor feels comfortable conducting the NCO on the small screen, she can perform on the large screen, which is also visible from the outside.

On the public screen sliders are not shown, but the conductor may still use a tablet device to access the graphical user interface.

The current position in parameter space is represented by a 3D cursor or wire-frame box, which is very helpful for making the transition from voluntary conducting moves to a style of conducting that is more directly informed by immersion and interaction with the output of the Chromatographic Neural Network.

The Chromatographic Neural Network

The flow of information is arranged into several processing layers. To realize memory, each processing layer is in turn implemented as a stack of one or more memory layers. This allows us to access the state of a neuron at a previous point in time.

Example

The video layer is made up of two layers, so the system can access the state of any input neuron at the current point in time, and its state in the previous cycle.
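This two-deep memory can be sketched as a small ring buffer of frames (an illustrative Python sketch; the class and its names are stand-ins, not the project's code):

```python
import numpy as np

class MemoryLayer:
    """Keeps the last `depth` states of a processing layer so that a
    neuron's value at time t and at t-1 can both be read back."""
    def __init__(self, height, width, depth=2):
        self.frames = [np.zeros((height, width, 3)) for _ in range(depth)]

    def push(self, frame):
        # the newest frame goes to index 0, older states shift back
        self.frames = [frame] + self.frames[:-1]

    def at(self, t_minus):
        # t_minus=0 -> current state, t_minus=1 -> previous cycle
        return self.frames[t_minus]

layer = MemoryLayer(4, 4, depth=2)
layer.push(np.ones((4, 4, 3)))        # frame at t-1
layer.push(np.full((4, 4, 3), 2.0))   # frame at t
current = layer.at(0)
previous = layer.at(1)
```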

Processing Layers

 

The Video layer

The Video layer contains the input neurons. Each neuron corresponds to a pixel of the video source. The Video layer provides the input for the Flow layer.

The Ghost Layer

The Ghost layer represents a haunting image from the past. It implements the long-term memory that interferes and interacts with the current visual input. It does not change over time and is provided as additional input to the Flow layer.

 

The Flow layer

The Flow layer accumulates the input from the Video layer and the Ghost layer. Each neuron aggregates input from its neighborhood in the Video layer at times (t) and (t-1). The computed 2D vector is directly encoded into the state of the neuron, creating a flow map.

The Blur layers

The Blur layers are used to blur the flow map. While the computation of visual flow is restricted to a very small neighborhood, the blur layer is needed to spread the flow information to a larger region, since flow can only be detected on the edge of motion.

For efficiency reasons the blur function is split into two layers, performing a vertical and a horizontal blur respectively.
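The two-pass split can be sketched with a small separable-blur helper (a Python/NumPy sketch; the 3-tap kernel and its weights are illustrative assumptions, not the installation's shader code):

```python
import numpy as np

def blur_1d(img, kernel, axis):
    """Convolve each row or column with a small 1-D kernel (edges clamped)."""
    pad = len(kernel) // 2
    padded = np.pad(img, [(pad, pad) if a == axis else (0, 0)
                          for a in range(img.ndim)], mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i, w in enumerate(kernel):
        sl = [slice(None)] * img.ndim
        sl[axis] = slice(i, i + img.shape[axis])
        out += w * padded[tuple(sl)]
    return out

kernel = np.array([0.25, 0.5, 0.25])  # simple 3-tap blur kernel
flow_map = np.random.rand(8, 8)
# two cheap 1-D passes instead of one 3x3 2-D pass
blurred = blur_1d(blur_1d(flow_map, kernel, axis=0), kernel, axis=1)
```

Because the kernel weights sum to one, a constant flow map passes through unchanged, while sharp flow edges are spread into their neighborhood.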

 

Neuron Processing

The state of each neuron corresponds to an RGB color triplet. Every neuron of the Flow layer gets input from corresponding neurons inside a local neighborhood of the input layers. Each of those input samples corresponds to a single synapse. The vector from the center of the neuron towards the input neuron is referred to as the synapse vector.

Color Attraction

To achieve some kind of color dynamics, colors that are close in color space are supposed to attract each other.

The distance between the synapse input and the neuron state in RGB color space serves as a weight, which is used to scale the synapse vector. The sum of the scaled synapse vectors results in a single color attraction vector.
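A minimal sketch of this weighting step for a single neuron (Python; the falloff function `1/(1+distance)` is an assumption – the source only states that the RGB distance enters as a weight):

```python
import numpy as np

def color_attraction(neuron_rgb, neighborhood):
    """Sum of synapse vectors, each scaled by a weight derived from the
    RGB distance between the neuron and that neighbour.
    `neighborhood` is a list of (synapse_vector, neighbour_rgb) pairs."""
    attraction = np.zeros(2)
    for synapse_vec, rgb in neighborhood:
        dist = np.linalg.norm(np.asarray(rgb) - np.asarray(neuron_rgb))
        # similar colours pull strongly; the exact falloff is an assumption
        weight = 1.0 / (1.0 + dist)
        attraction += weight * np.asarray(synapse_vec, dtype=float)
    return attraction

# a neuron pulled by a same-coloured neighbour to the right and a
# very differently coloured neighbour to the left
neuron = (0.5, 0.5, 0.5)
nbrs = [((1.0, 0.0), (0.5, 0.5, 0.5)),   # identical colour, full weight
        ((-1.0, 0.0), (0.0, 0.0, 1.0))]  # distant colour, small weight
vec = color_attraction(neuron, nbrs)
```

The net vector points toward the similarly coloured neighbour, which is what makes close colours cluster over time.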

 

Color Flow

While color attraction is the result of color similarities or differences in space, color flow is the result of color changes over time. Rather than calculating the distance of the neuron state to a single synapse input, its temporal derivative is calculated using input from a neuron and its corresponding memory neuron. This time the sum of scaled synapse vectors results in a flow vector.
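The temporal counterpart can be sketched the same way, with each synapse vector scaled by how much its neighbour's colour changed between t and t-1 (illustrative Python, not the installation's GLSL code):

```python
import numpy as np

def color_flow(neighborhood):
    """Sum of synapse vectors, each scaled by how much the neighbour's
    colour changed since the previous cycle (a temporal derivative).
    `neighborhood` holds (synapse_vector, rgb_now, rgb_before) triples."""
    flow = np.zeros(2)
    for synapse_vec, rgb_now, rgb_before in neighborhood:
        change = np.linalg.norm(np.asarray(rgb_now) - np.asarray(rgb_before))
        flow += change * np.asarray(synapse_vec, dtype=float)
    return flow

# only the neighbour above changed, so the flow vector points upward
nbrs = [((0.0, 1.0), (1.0, 1.0, 1.0), (0.0, 0.0, 0.0)),   # big change above
        ((0.0, -1.0), (0.2, 0.2, 0.2), (0.2, 0.2, 0.2))]  # static below
vec = color_flow(nbrs)
```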

Encoding

Both color flow and color attraction vectors are added up and their components are encoded in the flow layer.

Parameters

There are various parameters in each layer controlling the amount and direction of color attraction and color flow, the metric used for calculating color distances, the neuron neighborhood, and so on.

Implementation

All neural computation is performed on the GPU using OpenGL and GLSL shaders. This is the mapping from neural metaphors to OpenGL implementation:

Memory layers → Texture-Buffers
Processing Layers → GLSL shaders
Parameters → GLSL uniforms
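As a CPU-side stand-in for this mapping, the feedback between memory layers can be mimicked with two arrays that are “ping-ponged” the way texture buffers are swapped between render passes (an illustrative Python/NumPy sketch; `shader_pass` and its `mix` argument stand in for a GLSL shader and one of its uniforms, and are not the project's code):

```python
import numpy as np

h, w = 4, 4
read_buf = np.zeros((h, w, 3))       # "texture" holding the state at t-1
write_buf = np.empty_like(read_buf)  # "texture" the next pass renders into

def shader_pass(prev_state, mix):
    """Stand-in for a fragment shader: blend the previous state toward
    a target image; `mix` plays the role of a uniform parameter."""
    return (1.0 - mix) * prev_state + mix * np.ones_like(prev_state)

for _ in range(3):
    write_buf[...] = shader_pass(read_buf, mix=0.5)  # render pass
    read_buf, write_buf = write_buf, read_buf        # swap the buffers
```

After each pass the buffers swap roles, so the last output becomes the next input – the same feedback loop the texture buffers realize on the GPU.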

Outlook

In our implementation both color flow and color attraction are integrated into a single-level flow map. While this generates interesting local interactions, there is little organization on a global level. The work on Multilevel Turing Patterns popularized by Jonathan McCabe shows that it is possible to obtain complex and visually interesting self-organizing patterns without any kind of video input.

Our future research will combine several layers of flow maps, each operating on a different level of detail. Additional directions include alternate color spaces and distance metrics.
In the current model input values are mixed and blurred, resulting in a loss of information over time. We have also been experimenting with entropy-conserving models and are planning to further investigate this direction.

This project is based on two of our recent artworks, “transits” and “I see therefore I am”.

transits [2012]


For her work “Transits,” German artist Ursula Damm filmed the Aeschenplatz in Basel during a period of 24 hours. Every day, thousands of cars, pedestrians and trams pass Basel’s most important traffic junction. “Transits” captures and alienates the recorded stream of motion with an intelligent algorithm that was developed by the artist and her team: It evaluates the audiovisual data and categorizes the patterns of movement and the color schemes. Based on the human memory structure and the visual system, the artificial neuronal network integrated in the software – where every pixel corresponds to one neuron – computes the visual flow. Various perceptional image layers overlap to generate an intriguing visual language in which stationary picture elements compete against the color scene. This begins at night and leads via dawn and noon to dusk; at the same time it is pervaded by arbitrary passersby, by cars, trams and people in the streets. The generative video interprets movements as atmospheres and eventually throws the viewer back to an individual perception of the city.

Transits is a video installation produced for the exhibition sensing place at the Haus für elektronische Künste Basel.

The installation uses static video recordings of Aeschenplatz in Basel to record the traces of passers-by at an urban traffic junction and to make their characteristics visible. Specially developed software (author: Martin Schneider) treats the entire video image as a neural feature map (Kohonen map). Every pixel of the video image is stored and subsequently “remembered” or processed by special algorithms. On the one hand we wanted long-lingering elements to inscribe themselves into the image; on the other, this image processing also has a dynamic of its own: colours attract one another, and movements push pixels in the detected direction.

The resulting image oscillates between the description of site-specific events and the inherent dynamics of the image processing, as it is attributed to our brain – here anticipated by a piece of software.

Concept of the installation:
Ursula Damm
Preliminary work: Matthias Weber
Software: Martin Schneider
Sound: Maximilian Netter

Produced by Haus für elektronische Künste Basel

the outline of paradise (sustainable luminosity, video) [2012]


 

This artwork was initially produced as a product offered by super-cell.org, a supermarket of speculative products related to Synthetic Biology:

http://super-cell.org/shopping/product-17/

What would our cities look like if advertising messages were produced not by artificial lighting but by swarming midges glowing like fireflies? “the outline of paradise” explores the promises and capabilities of technoscience and develops an installation out of these narratives.

For the installation we train non-biting midges (Chironomidae) to fly in a way that their swarm takes the shape of advertisement messages. The insects are genetically modified to glow in the dark and alter their genetic make-up according to the training and sound input we provide. This initial training will be inherited over generations and keeps the swarm in shape.

How can letters be taught to insects? How can we teach the alphabet to midges?
As Chironomidae are sensitive to sound, we use a real-time sound spatialisation system to teach the midges. So far we are only able to produce clouds of midges forming a simple LED font.

Natural midges (Chironomidae) form swarms in the shape of a circulating sphere. The swarms consist of male adults congregating for courtship. They are organized through the sound of the wing beats of the male midges. Our system uses the sensitivity of Chironomidae to sound and organizes them with synthetic wing-beat sound.

Midges are sensitive to sounds within the range of the wing beat of their species. These sounds normally lie within ± 50 Hz of the species-specific frequency.
To teach midges the alphabet, letters are coded with nine different sounds within this range. Through the spatial placement of the loudspeakers, midges learn to react in a certain manner to polyphonic tones by memorizing sound frequencies and the letter-related collective behaviour of their swarm.
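Such a nine-tone coding can be sketched as follows (hypothetical Python; the 500 Hz base frequency and the letter grids are invented placeholders, not the project's actual values):

```python
# Hypothetical sketch of the nine-tone coding described above: nine
# frequencies spread evenly across +/- 50 Hz around an assumed
# species-specific wing-beat frequency.
BASE_HZ = 500.0  # assumed wing-beat frequency (placeholder value)
TONES = [BASE_HZ - 50.0 + i * 12.5 for i in range(9)]  # nine tones, 12.5 Hz apart

def code_letter(letter):
    """Map a letter onto a 3x3 grid of tone indices (a crude 'LED font');
    the grids below are illustrative, not the installation's font."""
    fonts = {
        "I": [1, 4, 7],        # vertical bar through the grid centre
        "L": [0, 3, 6, 7, 8],  # left column plus bottom row
    }
    return [TONES[i] for i in fonts[letter]]
```

Each tone index would then be assigned to one loudspeaker position in the letter matrix, so that playing a letter's tones spatially traces its shape.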

The Idea
There are animals that live in the dark. Some of them have developed a light organ of their own. In the deep sea it enables orientation – at night it helps a firefly court a mate. I want to take up this suggestion from nature and make use of the midges' courtship for a partner. What would our cities look like if the advertising messages of companies were produced not with fluorescent tubes or LEDs but directly by glowing midges?
Wouldn't this natural production of light be far more sensually appealing than the technoid aesthetics of conventional advertising? Incidentally, this kind of advertising saves energy costs and solves the ecological problem of “light pollution”.
How it began…
With super-cell.org, the first online supermarket of synthetic biology came into being in 2010. It was created by my students as a speculative design project within the iGEM competition, a student contest at MIT in Boston. The team was a cooperation between students of Prof. Roland Eils, Bioquant/Ruprecht-Karls-University Heidelberg, and staff and students of my chair at the Bauhaus-Universität Weimar (http://www.uni-weimar.de/medien/wiki/GMU:Synthetic_Biology).
The Project
Unlike my students, as an artist I do not want to experience my works only virtually – so I have now begun to breed midges in order to train them in a sound-feedback installation so that their swarms fly in formations. In parallel, in collaboration with the University of Heidelberg, I am beginning to explore the possibilities of equipping Chironomus riparius with a luminescence gene. Here we draw on preliminary work carried out by the iGEM team Cambridge 2010 within the E.glowli project (http://2010.igem.org/Team:Cambridge). At the same time we can draw on findings from research projects on the coding of organs in Drosophila (fruit fly).
There are, furthermore, promising studies on the learning abilities of insects. Whether the project will in the end take its intended form, or whether the circumstances (safety regulations, technical feasibility, ecological problems, the limits of nature) will allow its realisation, bring about its failure or make a change of plan necessary – this is the actual process to be made visible. It is part of the project to make the methods and their artefacts sensually comprehensible along the way, by turning them into the subject of a processual artistic installation that reports on the ecological and social implications of biotechnology.

The Concept
“Sustainable Luminosity” is a lighting method for the city that is sustainable and natural. Luciferin, a natural substance, gives fireflies a light that is superior in intensity and efficiency to any technical light source. The product proposes taking the fireflies' courtship as the model for advertising media in cities. Instead of being electronically controlled, illuminated advertising is flown by midges that have been equipped with a luminescence gene and given special flight training. “Breeding” appears here as a form of shaping nature alongside engineering, pointing to epigenetics and its influence on the expression of genetic dispositions. The demonstration of the “breeding procedure” in turn reveals the midges' sensory apparatus and thus shows how deeply the animals' entire rhythm of life is intervened in. Sustainable Luminosity takes up a typical problem constellation: conventional technology is to be replaced by “natural” – because bio-organically produced and therefore “more sustainable” – technologies, touted as the problem-solving potential of synthetic biology. That higher organisms are used here instead of bacteria, the usual manipulation organisms of genetic engineering, is an artistic device intended to enable human empathy. At the same time it raises the question of the extent to which higher organisms will in future be subjected to human purposes by synthetic biology.

The Working Process
Chironomids (non-biting midges) react to sound frequencies. Using a real-time sound spatialisation system, we train these midges to swarm in the shape of simple LED letters. In elaborate training sessions the midges learn to form a previously rehearsed word as a flight formation. Once acquired, these flight characteristics are retained and passed on epigenetically to the offspring.
The midges are delivered as larvae. Released on site, the swarms form after a short time and then retrace the genetically stored information in their flight paths.

The Technique
Natural non-biting midges (Chironomidae) form swarms in the shape of spheres. The swarms consist of males that congregate to mate and to be seen by females. Among themselves they organise via the sounds of their wing beats.
Our system uses this sensitivity to sound and organises the midges with simulated wing-beat sound. Midges also react to simulated tones in the range of their own wing beat. These tones vary roughly ± 50 Hz around the species-specific frequency. But how can non-biting midges be taught letters? How can they be taught the alphabet? We use a real-time sound spatialisation system to instruct the midges. The midge swarms then fly in formations shaped like simple LED letters.
To teach the midges the alphabet, letters are coded with nine different tones within this bandwidth and emitted site-specifically within the letter matrix.
For the Exhibition
a small “paradise” is set up – a shielded place where the midges are bred and trained. The system has to be a closed one, since each generation of midges is carried over into the next round to improve the training results. The “paradise” consists of a tent in the form of three round, interconnected rooms. In them, midge larvae are bred in mud pools. In the largest room the loudspeakers are set up that attract and train the midges. In the other rooms the midges can be observed at the different stages of their metamorphosis. These rooms serve for sorting out and separating the animals and, ultimately, for breeding.
Alongside this breeding station there is to be a projection in the urban area – preferably a shopping street and pedestrian zone. There, a projection is shown in a shop window. In this projection one sees that very shopping street, except that (simulated) midge swarms take the place of the illuminated advertising. To feed these midge swarms, water barrels are set up in the urban greenery.


I see therefore I am [2011]

Two cameras, connected to a computer. The steles carrying the cameras are fitted with castors, are movable and can be repositioned by visitors. On the steles are buttons with which visitors can activate the image of the respective camera as input for the software.
Inside the “Haus am Horn” – the historic Bauhaus model house – stand two cameras on movable wooden tripods. One camera looks out of the window, one is directed at the interior of the building. Visitors can move these cameras and select them as input for an image that emerges as a projection in the living room: the first camera sensitises a gaze; the image of the second camera emerges in the active areas of the first image. Something becomes visible only where something has already happened before.
The installation works with the mechanisms of perception and examines algorithmic procedures for their associative and artistic qualities. Like us humans, points-of-view uses two “eyes” to see. However, these cameras never “see” simultaneously, but offset in space and time. Mounted on two movable steles fitted with castors, they view the room they stand in from the perspective of the visitors to the exhibition. By pressing a button on the steles, visitors can activate the respective camera, feeding its image into a video projection. The projection computes an image that not only depicts the current perspective of the active camera but, via a programmed memory (a simplified neural map), blends the current image with the previous perspectives. The software recognises something particularly in those image regions where changes have already occurred before. What emerges is a superimposition of living “moments” and perspectives, similar to the way our perception recognises images – without prior knowledge, deriving the new from an understanding of what came before. By moving the camera stands around the room and repeatedly activating the other camera, the installation of the two steles makes it possible to explore the properties of the software playfully. In doing so the visitor experiences how technology approaches the principles of human understanding and perception ever more closely, and how the visual aesthetics of past epochs (such as Impressionism or Cubism) come alive again in the artefacts of programmed image worlds.

598 [2009]

Installation at “Springhornhof”

teaching the ground



The video installation “598” comprises a high-definition video projection, digitally manipulated sound and five mats made of unprocessed sheep's wool (a 42-minute video stream generated by custom-made software).

“598” shows a landscape of primitive heathland, filtered through computer software operating without our knowledge or powers of interpretation. As if in our mind’s eye, a kind of landscape of the perceived image builds up. In their movements sheep are oriented not just to the group, but also in relation to the supply of food beneath them. The movements of individual animals, as well as of flocks as a whole, thus tell us about the composition of the landscape. The image is in itself already an interpretive vision of the soil beneath their feet which, over the centuries, has been used as pasture in the same way.

The form of the Lüneburg Heath already shows the effects of grazing sheep. Their consumption of grass and young shoots protects the heath from afforestation and creates clearings and open, green spaces; in other words, a heath. The appearance of freely grazed areas has, over the centuries, allowed a landscape to develop whose form derives from this symbiosis. But it is possible to recognize other forms of co-operation as well, and the sheep, which lack a sense of will, are a rewarding example for illustrating the study of behavioural strategies that go beyond egotism.

It is now not enough for us humans simply to observe and to learn to understand. So in the computer we have designed a tool that analyses and categorises the grazing sheep and the resting landscape by means of a structure capable of learning. This software learns from the landscape and from the 598 sheep – recorded by a video camera mounted on a crane, as if we humans were looking down omnisciently from the clouds.

A new, artificial landscape develops which allows the properties of the things observed to be perceived. Due to the movement of the sheep across the heath these properties become recognisable, though they are not evident in any individual frame.

“598” from resoutionable on Vimeo.

Die Lüneburger Heide zeigt in ihrer Gestalt bereits die Auswirkungen der Schafe. Ihr Fressen von Gras und Schößlingen bewahrt die Heide vor der “Verwaldung”, schafft Lichtungen und Grünflächen, eben eine “Heide”.
Das entstehen von freigefressenen Terrains hat so über die Jahrhunderte eine Landschaft entstehen lassen, die ihre Gestalt aus dieser Symbiose herleitet. Aber nicht nur dieses Zusammenspiel ist erkennbar, die willenlosen Schafe sind ein dankbares Anschauungsbeispiel, um kollektive Handlungstrategien jenseits von Egosimen zu studieren.



About the software:
The current software computes difference images between the current frame of a video and a reference image in order to detect motion. It also contains a kind of neural network whose neurons individually adapt to the difference value of each pixel; each pixel corresponds to one neuron. The software is to be developed further so that each neuron no longer learns only its assigned pixel in isolation: the neighbourhood relations of both the pixels and the neurons will also be taken into account. The result is a self-organising system in which neurons that have already learned certain difference values are more strongly attracted to similar values and learn them more readily. Regions will thus form within the neural network that correspond to similar difference values, yet remain bound to the location of their respective pixels.
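A minimal sketch of the difference-image step and the per-pixel neuron adaptation described above (the array shapes, the learning rate and all names are illustrative assumptions, not the project's actual code):

```python
import numpy as np

def update(frame, reference, neurons, lr=0.1):
    """One learning step: compute a difference image against the
    reference frame, then let each neuron adapt toward the difference
    value of the single pixel assigned to it (one neuron per pixel)."""
    diff = np.abs(frame - reference)   # where is there motion?
    neurons += lr * (diff - neurons)   # per-pixel adaptation
    return diff, neurons

# toy usage: one "moving" pixel on an otherwise static image
reference = np.zeros((4, 4))
frame = reference.copy()
frame[1, 2] = 1.0
neurons = np.zeros((4, 4))
diff, neurons = update(frame, reference, neurons)
```

The planned extension would couple the neurons through their neighbourhood relations, so that a neuron's update also depends on neurons with similar learned values, as in a self-organising map.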
August 2009
Programmierung: Matthias Weber, Sebastian Stang
Sound: Maximilian Netter, Sebastian Stang

double helix swing [2006]

an installation for swarms of midges on the banks of lakes and other bodies of water

Double Helix Swing



idea | Installation | video & images | software | technique | Autoren | doku (PDF, 1MB)


the idea

double helix swing is an installation which investigates the swarms of midges that can be found on the banks of lakes and other bodies of water. Swarms of midges are intriguing entities: without any apparent logic they form at irregular intervals along the bank: towers of midges flying in circles – although it seems that their flight path is in fact angular. It is as though they fly in one direction, then suddenly stop and fly off in another. Each swarm develops its own speed and rhythm. And each swarm forms itself around an axis which is circled by the midges in both directions: a flying double helix.
The swarms are made up of male midges aiming to attract females for mating. Attraction and courtship occur by wing beat, which differs between the male and female insects. Based on these characteristics, a video and sound installation is to be developed.

In order to observe the swarms they will be attracted by sounds of the female wing beat. Various sound sources (loudspeakers) will be distributed at intervals in shallow water. At a suitable distance from the sound source a video camera will be located on the bank to record any swarms which may form. The camera is set up on a platform which can be accessed from the bank. Passers-by can approach the camera and look for the swarms in the viewfinder. If the camera detects a formation it remains still, and a video image is recorded and sent to a central computer.

screen prints of the indoor installation:

double helix screen print


installation view at the Wallraf-Richartz-Museum Cologne, october 2006:

Double Helix Swing


the set up

A series of loudspeakers is placed on a platform over shallow water to emit the sound of a female mosquito's beating wings. A video camera, located at a certain distance from the loudspeakers, records images of the swarms that form. These images are sent to a computer that analyses the movements and visualises them on the screen in the form of traces.

Double Helix Swing


The camera is built to be used by passers-by: they can locate possible swarms by adjusting the camera (turning, zooming, etc.).

Double Helix Swing

The loudspeakers around the camera emit the sounds of various insects’ wing beats. Using the switch, passers-by can try attracting insects using the wing beat sounds of the females of different species.

indoor installation

A specially designed console provides different options for interacting with the virtual world.

Double Helix Swing


installation (indoor), Linz 2006:

installation (video), Linz 2006:


images

Double Helix Swing

the software

The virtual world consists of three layers:

  1. the video of the midges, which runs in the background.
  2. the tracks of the midges, which are signified as nutrition (these are green tracks).
  3. the virtual creatures searching for and eating the tracks of the midges.

The creatures' survival depends primarily on the video coming from the outdoor camera.
The form of the tracks generates a structure which serves as “food” to a number of virtual creatures. These virtual creatures are capable of evolving, so adapting better and better to the tracks generated by the movements of the real mosquitoes. The behaviour of the virtual creatures depends, therefore, on the real environment, but is also conditioned internally by a virtual “genetic code” of programming and behaviour. This code can in turn be manipulated by the user of the installation.

Double Helix Swing

The intake of nutrition is regulated by a buffer, which stores all the food (that is, the tracks) of the midges. The creatures have a variable food-search radius that depends on their velocity: if they move fast, the search radius need not be as large as when they are more or less immobile.
The creatures never eat all the food at once. As they eat only a certain percentage of the food found, they are able to remember where nutrition is located. The amount of food depends on the number of things moving in the video coming from the camera. The sound produces a feedback loop: if there is too much food on the screen, the sound from the loudspeakers is muted or changed, so that the midges are no longer attracted.
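The foraging rules above (a velocity-dependent search radius, and partial eating so that food locations remain worth remembering) might be sketched like this; the class, field names and constants are illustrative assumptions, not the installation's code:

```python
import math

class Creature:
    """Sketch of a virtual creature foraging on the midges' tracks."""
    def __init__(self, x, y, energy=1.0):
        self.x, self.y = x, y
        self.vx, self.vy = 0.0, 0.0   # current velocity
        self.energy = energy

    def search_radius(self, base=10.0):
        # Fast creatures need only a small radius; slow or immobile
        # ones search a wider area.
        speed = math.hypot(self.vx, self.vy)
        return base / (1.0 + speed)

    def eat(self, food, fraction=0.3):
        """Consume only a fraction of each food item in range, so the
        remainder marks where nutrition can be found again."""
        r = self.search_radius()
        for item in food:
            if math.hypot(item['x'] - self.x, item['y'] - self.y) <= r:
                taken = item['amount'] * fraction
                item['amount'] -= taken
                self.energy += taken
        # drop items that are effectively used up
        food[:] = [i for i in food if i['amount'] > 1e-6]

# toy usage: an immobile creature and one track within reach
creature = Creature(0.0, 0.0)
food = [{'x': 3.0, 'y': 4.0, 'amount': 1.0}]
creature.eat(food)
```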

The creatures develop different forms. Every animal has a virtual gene, which defines its movement and its form. The form comprises the number of legs, the density of cells and the density of branches per node.
In the beginning there are three different types of creatures, but later on visitors can define new ones. If a creature is replicated, a random operation decides whether the animal will be changed or not; another random operation then partially changes the animal's code. While the creature is young, it has only a few cells. But as it grows, it follows the form predefined in its gene. It grows until a certain, adjustable energy level is reached; then it gives birth to several new creatures.

If the creature becomes large, the cells may pull in different directions and the creature breaks in two. Both parts survive: one with the original code, the other with the new, smaller form gene.
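The replication rule above (one random draw deciding whether to mutate at all, a second changing part of the gene) could be sketched as follows; the gene fields mirror the traits named above, but everything else is an illustrative assumption:

```python
import random

def replicate(gene, mutation_chance=0.3):
    """Copy a parent gene; one random draw decides whether the
    offspring mutates at all, and if so a second draw changes one
    trait by a small step."""
    child = dict(gene)
    if random.random() < mutation_chance:
        trait = random.choice(list(child))   # which trait mutates
        child[trait] = max(1, child[trait] + random.choice([-1, 1]))
    return child

# toy usage: the traits named in the text, with made-up values
parent = {'legs': 6, 'cell_density': 4, 'branches_per_node': 2}
child = replicate(parent)
```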


technique:

The installation can be shown as an indoor installation with or without the camera. The camera can be set up outdoors during the summer, close to rivers or lakes, but it can also be shown indoors. In that case, a virtual world is shown on the screen in which visitors can search for swarms of midges.

For the installation without the camera you need:

  • 1 Linux PC, 3.2 GHz, 1 GB RAM, 3D-accelerated graphics card, sound card (has to be shipped)
  • 1 stand (has to be shipped)
  • 1 video projector
  • 2 active loudspeakers

For the camera the following material is needed:

  • Camera (special design, has to be shipped)
  • 1 Linux PC, 3.2 GHz, approx. 1 GB RAM, MP2-encoder video card, receiver
  • Switcher for loudspeakers
  • 5 loudspeakers


authors:

Concept: Ursula Damm
Programming: Christian Kessler
Sound: Yunchul Kim

timescape [2005]

Timescape (51° 13.66 north, 6° 46.523 east)
Kunstsammlung NRW K20, Düsseldorf exhibition at the fountain wall
9.9.-9.10. 2005




Installation | concept | interface | data processing | neural network | isosurface
sound | architectural context | video | outlook | staff


installation

The ‘Zeitraum’ installation is the most recent work in the “inoutside” series of video tracking installations for public spaces.

One perceives a virtual sculpture, a space-envelope, coming into being and vanishing. This ever-changing sculpture is controlled by the occurrences on Grabbeplatz. Like a naturally grown architecture, the form is embedded into the contours of the immediate environment of the viewer's current location. The positions of the people on Grabbeplatz, as well as the position of the viewer of the picture, are marked with red crosses. Connecting lines point from the marked people to approximately the place on the space-envelope where they influence the generated form.
The installation is made up of nightly projections onto the fountain wall in the passageway of K20. On the roof of the art collection two infrared emitters and an infrared camera are installed. The emitters make up for the missing daylight, so that the camera may pick up the motions on the site; even lighting is an important prerequisite for deriving the motions of the people. This is done by a video tracking program: it determines where on the site movement takes place by comparing the current video image with a previously taken image of the empty site. The results are then sorted into “blobs”. “Blob” stands for “Binary Large Object” and denotes a field of non-structured coordinates which moves uniformly. These animated data fields are passed on to a graphics and sound program that calculates the virtual sculpture from the abstracted traces; in this, the motion traces are interpreted. As in earlier installations such as “memory space” (2002), “trace pattern II” (1998), “inoutside I” (1998) and “inoutside II” (1999), the supplied video image is visible as a video texture, so that the people who see this image may recognize themselves in the virtual image. It thus becomes clear how the virtual forms, the clouds and arrows, are calculated.
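The tracking step described above (comparing the current frame with an image of the empty site, then grouping the moving pixels into blobs) can be sketched as follows; the flood-fill grouping and all names are illustrative assumptions, not the installation's actual code:

```python
import numpy as np

def find_blobs(current, empty_site, threshold=0.2):
    """Compare the current frame with the empty site and group the
    moving pixels into connected regions ("blobs") via flood fill."""
    moving = np.abs(current - empty_site) > threshold
    visited = np.zeros(moving.shape, dtype=bool)
    blobs = []
    for y, x in zip(*np.nonzero(moving)):
        if visited[y, x]:
            continue
        stack, pixels = [(y, x)], []
        visited[y, x] = True
        while stack:                      # flood fill one region
            cy, cx = stack.pop()
            pixels.append((cy, cx))
            for ny, nx in ((cy + 1, cx), (cy - 1, cx),
                           (cy, cx + 1), (cy, cx - 1)):
                if (0 <= ny < moving.shape[0] and 0 <= nx < moving.shape[1]
                        and moving[ny, nx] and not visited[ny, nx]):
                    visited[ny, nx] = True
                    stack.append((ny, nx))
        blobs.append(pixels)
    return blobs

# toy usage: two separate moving regions in a 5x5 frame
empty = np.zeros((5, 5))
frame = empty.copy()
frame[0, 0] = 1.0
frame[3, 3] = frame[3, 4] = 1.0
blobs = find_blobs(frame, empty)
```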


concept

The installation deals with architectural and city-planning concepts. A new, different design technique is to be attempted, one aimed at observing how people behave towards architecture. The goal is to depict the city, or architecture, as a dynamic organism. What the installation wishes to convey is city and people as something collective, something in constant transformation.

interface/surveillance

The system, as we used it in the six “inoutside” installations, does not allow humans to be apprehended in their individuality. It is not about recognizing what makes up each person via a surveillance system (at this, humans are still better than machines), but about perceiving the quality specific to the site and testing it against humans and human behaviour. It is about the cognition of behavioural patterns, not about a description of the individual. Rather, the site itself should be individualized to ascertain its character.

data processing

In recording traces, the computation carried out in the installation aims to characterise the site, so that collecting the different individual traces becomes essential. The early installations such as “trace pattern II” (1998) are clearly geared towards the interaction amongst people. The behaviour of the people is recorded, amplified and interpreted: Do they walk close to one another? Are they walking away from each other? Are there “tracks”, several people walking in step? These observations become part of the interpretation. In the current installation in K20 the space is represented in the form of one or several bodies. These bodies have entrances and exits, openings and closures; they are assembled from small units, merge into a large whole, or divide. They swell, or form holes that enlarge until the bodies dissolve. All these spatial elements are determined by the behaviour of the people. We also map the traces of the site's visitors, according to their walking pace and their frequency of presence on the site, onto a mathematical shape, the isosurface, which changes according to behaviour.

the neural network

The neural network, here a Kohonen map, learns through self-organization: a method of learning based on the neighbourhood relations of neurons amongst each other. A network is constructed which depicts what happens on the site. Beyond this, the classical SOM is modified to solve the problem that arises when the monitored space is a limited area: at the edge, different conditions for calculation apply than in the middle, because the condition of a site is also rooted in its neighbourhood. In simulations of physical processes one simply defines the monitored area as a torus or a sphere, so that whatever disappears at the right image border re-appears on the left. This makes no sense for a real site, however, which is why we modified the procedure: in regions with too great an impulse density we let the neurons descend into a second plane that disperses them onto the nearest places lacking neurons. In this way we created a compact, cyclical energy evaluation that prevents a single cluster of the SOM from simply remaining in the middle of the space (see graphics); that would be nothing other than an information density too high to provide any information at all (almost a black “data hole”).
For me as an artist the application of neural networks is very exciting, because it demands that I restrain my artistic and visual desires for the sake of processes that happen with the help of this camera eye; in reality not by the camera, but by the people who are the focus of the camera, that is, by the image of the site. Since the computer-internal data processing imitates the human perceptual apparatus, it demands that I concern myself with the limits of perception and the possibilities and impossibilities of the projection of knowledge.
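For reference, one update step of a classical (unmodified) Kohonen map, in which the best-matching node and its grid neighbours move towards the incoming impulse; grid size, rates and all names are illustrative assumptions:

```python
import numpy as np

def som_step(nodes, impulse, lr=0.5, sigma=1.0):
    """Classical SOM update: find the best-matching node, then pull
    it and its neighbours toward the impulse, with a strength that
    decays with grid distance to the winner."""
    rows, cols, _ = nodes.shape
    dists = np.linalg.norm(nodes - impulse, axis=2)
    wy, wx = np.unravel_index(np.argmin(dists), dists.shape)  # winner
    gy, gx = np.mgrid[0:rows, 0:cols]
    grid_dist2 = (gy - wy) ** 2 + (gx - wx) ** 2
    h = np.exp(-grid_dist2 / (2 * sigma ** 2))   # neighbourhood function
    nodes += lr * h[:, :, None] * (impulse - nodes)
    return nodes

# toy usage: one impulse pulls the winning corner of a 3x3 map
nodes = np.zeros((3, 3, 2))
impulse = np.array([1.0, 0.0])
nodes = som_step(nodes, impulse)
```

The installation's edge modification (letting neurons descend into a second plane in over-dense regions) would replace the toroidal wrap-around used in physical simulations.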



the isosurface

As already mentioned, Grabbeplatz is reproduced in a virtual image through a video texture. It marks the area recorded by the tracking program, above which the calculations generate an abstract space. The nodes of the SOM form trajectories in this space, nurtured by impulses that refer to the speed, direction and length of stay of the passers-by. The flat tracking plane is thus expanded by another dimension. To visualize the relationships of the nodes not only as points in space but also in relation to one another, we let an imagined potential-energy function reign in the surroundings of each node. This function is represented as a surface in space, which follows the path of its node and adapts its shape to the node's current role in the SOM. Such surfaces are called isosurfaces. They are used to represent the homogeneity of energy states (here: potential-energy states) within a continuum: all points of such a surface represent an equal value. The potential-energy functions, one for each node, overlap or avoid one another, amplify or cancel each other out. This visualizes surfaces which, comparable to soap films, melt into or drip off each other, or lose themselves in infinitely minute units. The key to these surfaces lies in the arithmetic sum of all the individual potentials: all points in space are determined whose summed potential reaches a certain numerical value (decided upon by us). The entirety of these countless points (and thereby the virtual sculptural shape) appears as a smooth or rugged, porous or compact form whose constantly moving surface offers a new way of seeing the development of the SOM. In relation to Grabbeplatz, the SOM represents the time-space distortion of the site when it is seen not as a continuous spatial shape but as a sum of single “Weltenlinien” (world-lines) of the passers-by, that is, of the paths of the individual people who cross here by chance.
The SOM also dissolves time, because paths meet that were temporally offset. In this way the SOM is a memory of personal times that give Grabbeplatz its individual shape. The isosurfaces have the function of distinguishing areas that are much used and walked upon from those that are scarcely used: bulges mark the frequent presence of people, holes their absence. The dynamism of the isosurfaces follows the dynamism of the accumulations of humans which, when they reach a certain density, topple over and dissolve in turn, to avoid too great a density.
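The summed potentials can be illustrated with a metaball-style field: each node contributes a potential that falls off with distance, and the isosurface is the set of points where the sum reaches a chosen threshold. The names and the falloff function here are illustrative assumptions:

```python
def field(point, nodes, strength=1.0):
    """Sum of the potential contributions of all nodes at a point;
    each contribution falls off with squared distance to its node."""
    return sum(strength / (1e-9
                           + (point[0] - n[0]) ** 2
                           + (point[1] - n[1]) ** 2
                           + (point[2] - n[2]) ** 2)
               for n in nodes)

def on_surface(point, nodes, threshold, tol=0.05):
    """A point lies (approximately) on the isosurface if the summed
    potential there equals the chosen threshold value."""
    return abs(field(point, nodes) - threshold) < tol

# toy usage: a single node; points at distance 1 lie on the
# isosurface for threshold 1.0
som_nodes = [(0.0, 0.0, 0.0)]
near = on_surface((1.0, 0.0, 0.0), som_nodes, threshold=1.0)
```

Overlapping contributions from several nodes are what make the surfaces merge and separate like soap films.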

The Sound

The sound is a sort of “monitoring” that refers both to the current position of people on the site and to that position as calculated into a long-term organism. Both are graphically represented as positions in the field, which is a visual memory form in a virtual image. The sound also relates to the current position: a change in the viewer's position consequently changes the sound. When someone walks on the site, something different is audible than when no one is there. On the other hand the sound, and this is the actual quality of noise, transmits information about what takes place on the site. And through language it should carry out a categorization.

architectural context

Architecture as we know it can't meet the needs of its users, because it consists of properties: of immobile, modern, maybe 70- or 100-year-old buildings. A flexible building is not yet imaginable. But the lifetimes of buildings are diminishing; if one looks at the results of conferences such as “shrinking cities”, the trend is towards planning houses that can be constructed and de-constructed, that are transferable and flexible in their functions. This is not only my demand, but also the city planners'. My approach is to get away from forms of buildings that are set up like statues, as sculptors did in the past, and instead to think of the building as a body, in the way that I experience myself; as a sign that is aesthetically pleasing to me. Of course the new buildings to be designed have to integrate functionally into the devised zoning plan. But these buildings should observe their surroundings during their use, to determine whether they are fulfilling their purpose. The question arises how such a concept could be formulated if it is not only developed from the idea of one person or a committee of experts, but submitted to evaluation by the public. It is not new to claim that architecture has changed with the introduction of the computer, because the methods of design changed with tools such as CAD programs. “Blob architecture” denotes a young architecture that designs more than it builds; most of the time, biomorphic building forms are developed in 3D programs, for example through the observation of environmental factors that then become manifest as form. In this respect one could see the installation at Grabbeplatz as a representative of this kind of architecture. The goal of my projects, however, is not merely to find a different form of building, which in turn would only become a static monument, but to test interactive structures, to see which perceptible traces create sensible structures for buildings.
“Blob architecture” will have to prove itself by the meaningfulness of the data exploration to be carried out on the blobs. These methods of processing imply a view of humanity that will possibly be the basis of future architecture.

video

Timescape (51° 13.66 north, 6° 46.523 east) from resoutionable on Vimeo.


Outlook

In the next project of the “inoutside” series, the return to real space is the goal. Today a return to static form and to the sculptural would be premature, because we have not advanced far enough with the evaluation of the interaction to know what to build, especially as long as building materials are not yet flexible. More complex interfaces are also necessary, ones that could evaluate a greater spectrum of perceptible human expressions. In the virtual one is, for now, much more flexible and does less damage, as long as one refrains from building and instead checks the results visually and acoustically with users. One can cross over to the formation of real space once architecture is as changeable as the interactive system requires.


staff

Matthias Weber, Freiberg
Peter Serocka, Shanghai
Yunchul Kim, Köln
Ursula Damm, Düsseldorf

on the nature of things [2009]

on the nature of things, 2009
comprises five video projections which atmospherically depict and re-interpret our habitat.
In the exhibition space five projections are set up like a landscape. Each of these projections represents one aspect of the manifestation of nature as altered by civilization, so creating a space where a “third nature” is experienced. The projections are always visible but need to be viewed from some distance, so that an audio environment can evolve in front of each of them. The areas can be experienced simply by strolling through them.
Each projection consists of one video loop lasting between 90 seconds and six minutes. In these projections, weather and global energy balance phenomena are set to the sounds of social events such as football, Formula One racing or group laughter. In these, people develop their desires and visions with collective power.
The installations seek out people’s motivations in designing and running their civilization – sporting and event culture, energy management as the heartbeat of modern society. In the end the dream of cold technology sweeps temperature charts and climate change before it.
In his work On the Nature of Things, Lucretius still presupposed a closed cosmos and the presence of natural deities. But in the age of the “Internet of Things”, the title of his work assumes a new flavour: human artefacts acquire their own nature, moving, once they have been created and located, into a self-explanatory state of existence. Technology thus appears to acquire its own justification for existence which, like the creation of the world, is beyond dispute.
In this sense I now depict the first, original nature as a consequence of human behaviour, as it can certainly now be understood in this the era of climate change.
Social upheaval is exemplified not only climatically, or by way of natural disasters, but also by social phenomena and processes.
Laola

the nature of things (laola) from resoutionable on Vimeo.

On 27 February 1990 I went to the Netherlands to film the sea. I had selected the period of spring tides in the hope of a moderate sea swell. Unexpectedly a storm of historic proportions developed. We had great difficulty keeping the tripod steady and not being blown away ourselves, camera and all. Nineteen years later climate change is on everyone’s lips. Our society has changed a great deal.
I set the surges of enthusiasm from crowds in football stadiums to the waves. With all the drama of the situation – high tides and a rising sea level – a sense of euphoria persisted, stemming as much from the power of the sea as from the naturally synchronised voices of thousands of spectators in a stadium.
quichote

the nature of things (quichote) from resoutionable on Vimeo.

Wind turbines, nothing but wind turbines in the landscape, rotating at slightly different speeds, accompanied not by the local sound of the wind but by the sound of Formula One racing.
We are no longer accustomed to believing that an element in motion moves “by itself” or “naturally”; we instantly assume it to be driven mechanically, and appropriate this assumption with enthusiasm. In the video a sound space develops in which a car is allocated to each wind turbine, differentiated by its near or distant location. A vision of a technically codified landscape emerges, in which artefacts of civilization appear to cause natural phenomena such as the wind.