earth sculptures outdoor [1982-1984]

Fernfuehler [2007-]

Interactive furniture for public places

how it shall work

fernfuehler setting



“Fernfuehler” enliven public spaces by bringing form and structure into the consciousness of the general public. As they are connected with each other, “Fernfuehler” can also play, and can influence the behaviour of other “Fernfuehler” (or of the people sitting on them). The town-planning interest lies in enlivening urban spaces for passers-by and making these spaces changeable. Instead of providing seating in public spaces as permanently fixed architecture, mobile groups of seats are provided which communicate with each other, thereby discovering, through experimentation, the optimal arrangement of elements in the space. Planning from the bottom up is brought to bear here, instead of planning from on high, thus involving the user in the process of shaping public space.



fernfuehler rendering

all possible states of fernfuehlers


sensory system of fernfuehlers

“Fernfuehler” are seating options that can be moved around at will. The seats are modular. They can be brought together to form ensembles, or they can stand alone. By pulling out their backrests they can be transformed into spatial elements, or, with the backrest pushed in, they can just be seats.
“Fernfuehler” detect what other “Fernfuehler” (or the people sitting on them) are doing. And they can react to what the other ones are doing.
They are tough and unpretentious.
They like people’s company as they always move in their direction.
They can hear. When you call them, they come.

Everything that “Fernfuehler” do can be observed in a small computer game. A worm’s eye view displays the area where the “Fernfuehler” are located as a network of nodes.

The bird’s-eye view of the setting can be made publicly visible to any passer-by in the area on a handheld computer. The nodes of the network structure, which represent the local arrangement of the “Fernfuehler”,
can be manipulated by people playing with the “Fernfuehler” on the handheld computers’ displays or on the projected video screen. In this way people can control the paths that the seats follow in the area where they are located.



Is software art a further stage of conceptual art? Works by Dan Graham (“Poem Schema”, 1966–1969) or Sol LeWitt’s wall drawings, together with his “Sentences on Conceptual Art”, support this understanding. In his paper “EXPERIMENTAL SOFTWARE”, Tilman Baumgaertel draws a line from the instructions of LeWitt’s concepts (which are meant to be machines) to today’s computers. But the software we write today is not looking for a craftsman executing our will, but for users who have more degrees of freedom in their behaviour. Software art today is rather something in between the programmer and the user.
The installation “Fernfuehler” refers with its aesthetics to Sol LeWitt’s “Serial Project #1” or “Serial Project ABCD”. A programmer today still needs formal systems to allow the computer to make comparisons, differentiations, decisions. As the world of the computer is much smaller than our everyday life, we have to offer the computer a smaller version of the latter. Pedestrians, visitors and passengers will break up the initially ordered setting of the “Fernfuehler”. The visitors can move the stools around and extend their backrests. The positions of the stools feed the neural network, and the stools organize themselves in a bottom-up process according to the presence and usage of the visitors.

A moderate number of “Fernfuehler” occupy the area. “Fernfuehler” are intelligent. They are items of furniture with rollers and a motor, so they can move on their own. As soon as people arrive in the area, the seats move towards them, as they have microphones which listen for their voices.
Now people can take their places on the seats; they can form groups or remain alone. Because “Fernfuehler” head first for wherever people are, the arrangement of furniture elements corresponds to the structure of the area, thereby strengthening it. You could simply find a spot in the area and watch how the seats move around and how other people react to them. Anyone who finds just watching the seats operating automatically too boring can get out a handheld computer, load the game over a wireless network and use it to activate the “Fernfuehler”.
On the screen you see a network structure with dots at each node. Each “Fernfuehler” in the area represents one of the nodes on this network.

The network connects each “Fernfuehler” while at the same time acting as a skin lying over the area. At this point there will be several options for determining the behaviour of the “Fernfuehler” in the area by manipulating the graphical interface. The purpose of the installation is to make public space more attractive, especially to young people. By providing networked seating, they experience the area as a place that changes, one that has moved beyond static architecture. In addition they can try out the role of director themselves, either on the handheld computers or, if they prefer, on the big screen, since they can influence the behaviour of passers-by by rearranging the positions of the items of furniture. They experience how a computer game can have a direct effect on the surrounding physical space and on the other people there.


the stools

move on rollers. When you sit on them, they settle onto their frames, which come to rest on the ground on springs. Each seat has two side pieces which can be pulled out and used as backrests or, with both extended, transform the seat into an element that divides physical space.
Each seat is at the same time a node in a virtual network linking all the seats together. The nodes in the network are “neurones”; they learn from the signals which the seats receive.
The sounds in the public space, and the use made of the seats for sitting on, are the signals feeding the neuronal network. LEDs inside the seat display the seat’s state of activity within the neuronal network (with coloured or white light).
Each seat has a controller to which a microphone and a pressure sensor are connected. The pressure sensor detects whether anyone is sitting on the seat, while the microphone picks up surrounding sounds, filtering out human voices. If these sensors detect activity, the seat “learns” that position as “positive”.
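The learning rule sketched above might look roughly like this; all names and constants (`StoolNode`, the voice threshold, the learning rate) are hypothetical illustrations, not the installation’s actual firmware:

```python
# Hypothetical sketch of a stool's sensor loop: a node accumulates
# "positive" evidence for its current position whenever its pressure
# sensor or voice-filtered microphone reports activity.

class StoolNode:
    def __init__(self, position):
        self.position = position      # (x, y) in the area
        self.weight = 0.0             # learned attractiveness of this spot

    def sense(self, seated, voice_level, voice_threshold=0.5, rate=0.1):
        """Reinforce the current position when the sensors detect activity."""
        active = seated or voice_level > voice_threshold
        if active:
            # move the weight towards 1 (position learned as "positive")
            self.weight += rate * (1.0 - self.weight)
        else:
            # slowly forget unused positions
            self.weight -= rate * self.weight
        return self.weight

stool = StoolNode(position=(2.0, 3.5))
stool.sense(seated=True, voice_level=0.0)   # someone sits down
stool.sense(seated=False, voice_level=0.8)  # voices nearby
```

In this sketch the weight decays whenever the seat is unused, so the network gradually forgets spots that people have abandoned.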




Prototype of a stool with LEDs indicating the inner state of the neuronal network

the game

A game is available over a wireless LAN, representing the spatial arrangement of the seats and making it possible to integrate them into the game. In this way the computer game can be used to intervene instantly not only on the screen, but also in the immediate surroundings and in the situation of other players.


Variations of the spatial order:


Ursula Damm, Matthias Weber (dipl. information science)



the outline of paradise (installation) [2014]

installation view at the Hybris exhibition ACC Weimar

detail of the installation, flowers for midges, food for larvae

visitors observing the midges, observing the airplanes

muecken weimar vimeo from resoutionable on Vimeo.

Non-biting midges (chironomidae) are bred in an aquarium, in which midge eggs and larvae swim in sand and water. The water is aerated, and the aquarium is supplied with abundant artificial daylight. The choice of midges (Chironomus riparius, a laboratory strain) allows for captive breeding.

midges learning to fly like airplanes

airshow versus swarming midges – flight patterns and other forms of swarms

For a performance I invited Christina Meissner to improvise on the theme of the wing-beat sound. We observed that the tracks of the midges were visibly influenced, especially by dark plucking sounds.

  • "the outline of paradise" - view into the soundbox

This setting makes it possible to find out how swarms develop and how they can be influenced. The installation follows “sustainable luminosity” and explores the possibility of training midges and passing this behaviour on to the next generations.

Work at Halle14, Leipzig

Concept: Ursula Damm
Artistic + scientific consultation: Dr. Klaus Fritze
Cello, Sounds: Christina Meissner
Programming: Sebastian Stang

I am a Sensor [2013]

I am a Sensor from resoutionable on Vimeo.

Since the 1990s, many devices and machines have found their place between the artist and their audience. Communication happens in a controlled, planned, downsized way – in short: posthuman. In doing so we accept that every interface deflects attention from our senses in favour of technical devices and data.

The video was produced for the annual meeting of the German association of media sciences (Gesellschaft für Medienwissenschaften), Leuphana University Lüneburg, 2013.

A series of experiments shows what happens when the body regains its place as the ultimate instance of evaluation, and the natural senses of humans and other living beings move into the centre of attention.

Chromatographic Ballads [2013]

chromatographic ballads from resoutionable on Vimeo.

The installation received an honorary mention at VIDA 15.0

The Artwork

Chromatographic Orchestra is an artistic installation which allows a visitor to direct a software framework with an EEG device. In an exhibition environment with semi-transparent video screens, a visitor sits in an armchair and learns to navigate – unconsciously, with his/her brain waves – the parameter space of our software, NeuroVision.

NeuroVision interacts with live video footage of the exhibition location and its surroundings. By navigating with his/her own brain waves, the visitor can control the degree of abstraction of a generative (machine-learning) algorithm applied to the footage of several nearby video cameras.

Lisa training the Interface


The installation refers back to painting techniques of the late 19th and early 20th century, when painting became more an analysis of the perception of a setting than a mere representation of it. Impressionism and Cubism fragmented the objects of observation, while the mode of representation was given by the nature of the human sensory system.

The installation “chromatographic orchestra” does not apply arbitrary algorithms to the live footage: we developed a software – the NeuroVision framework – which mimics the visual system of the human brain. Thus we ask whether our algorithms suit the spectator’s well-being by anticipating processing steps of the brain.

Artistic Motivation

How much complexity can our senses endure, or rather, how could we make endurable what we see and hear? Many communication tools have been developed to adjust human capabilities to the requirements of the ever more complex city.

Our installation poses the opposite question: How can information emerging from the city be adjusted to the capabilities of the human brain, so processing them is a pleasure to the eye and the mind?

At the core of our installation is the NeuroVision Sandbox, a custom-made framework for generative video processing in the browser, based on WebGL shaders.

Martin explaining the Neurovision software


Inside this Sandbox we developed several sketches, culminating in the
“Chromatographic Neural Network”, where both optical flow and color information of the scene are processed, inspired by information processing in the human visual system.

We critically assess the effect of our installation on the human sensory system:

  • Does it enhance our perception of the city in a meaningful way?
  • Can it – and if so, how – affect the semantic level of visual experience?
  • Will it create a symbiotic feedback loop with the visitor’s personal way to interpret a scene?
  • Will it enable alternate states of consciousness? Could it even allow visitors to experience the site in a sub-conscious state of “computer augmented clairvoyance”?



In a location close to the site a single visitor directs a video-presentation on a large screen with a setup we like to call “the Neural Chromatographic Orchestra” (NCO).
Our installation uses an EEG-Device (Emotiv NeuroHeadset) that lets visitors interact with a custom neural network. The setup allows visitors to navigate through various levels of abstraction by altering the parameters of the artificial neural net.

With the NCO device, a visitor can select and explore real-time views provided by three cameras located in public space, offering different perspectives on the passers-by (bird’s-eye view and close-ups).

The installation is based on the NeuroVision Sandbox used in the development of “transits”.
Unlike transits, chromatographic ballads uses multi-channel real-time video input and enables a visitor to interact directly with the neural network via biofeedback.

The Neural Chromatographic Orchestra investigates how human perception reacts to the multifaceted visual impressions of public space via an artistic setting. Using an EEG-Device visitors can interact with a self-organizing neural network and explore real-time views of an adjacent hall from several perspectives and at various levels of abstraction.

Biological Motivation

The Chromatographic Neural Network is a GPU-based video processing tool. It was inspired by parallel information processing in the visual system of the human brain. Visual information processing inside the brain is a complex process involving various processing stages. The visual pathway includes the retina, the Lateral Geniculate Nucleus (LGN) and the visual cortex.

Scheme of the optical tract with the image being processed (simplified):

Low-level visual processing is already active in the various layers of the retina. The interconnection of neurons between retina layers, and the ability to retain information using storage or delayed feedback, allow for filtering the visual image in the space and time domains.

Both image filters and motion detection can easily be achieved by accumulating input from neurons in a local neighborhood, in a massively parallel way.

Our Chromatographic Neural Network uses this approach to cluster colors and to compute the visual flow (or retinal flow) from a video source. The resulting attraction vectors and flow vectors are used to transform the memory retained in the memory layer.

The visual output of the system directly corresponds to the state of the output layer of the neural network. The neural layers of the Chromatographic Neural Network are connected to form a feedback loop. This gives rise to a kind of homeostatic system that is structurally coupled to the visual input but develops its own dynamics over time.

The set-up


main hall with passengers

A visitor enters the site – a highly frequented passage, a spacious hall or a public place. Two video cameras, each mounted on a tripod, can be moved around at will.

Another camera observes the passers-by – their transits and gatherings – from an elevated location. The video footage from this site is streamed into a neighboring room – the orchestra chamber of the Neural Chromatographic Orchestra.

Here one can see – in front of a large video wall – a monitor displaying the videos from the adjacent room, and the “orchestra pit”: an armchair equipped with a touch device and a neuro-headset. The video wall, showing abstract interpretations of the site itself, should ideally be visible both from the orchestra pit and from the large hall.

The Orchestra Chamber


view from the “orchestra chamber”

Inside the chamber the visitor is seated in a comfortable armchair and an assistant helps her put on and adjust the neuro-headset.

The orchestra chamber should be isolated from the public area as much as possible. A sense of deprivation from outside stimuli allows the visitor to gain control over her own perception and achieve a state of mind similar to meditation or clairvoyance.

The Orchestral Performance

Training Cognitive Control

A performance with the Neural Chromatographic Orchestra starts with a training of up to six mental actions, corresponding to the “push/pull”, “left/right“ and “up/down” mental motions provided by the Emotiv Cognitiv suite. The training typically lasts 10 to 30 minutes.

Playing the Sandbox

After successful training the visitor is asked to sit in front of the NeuroVision Sandbox:

The visitor in the orchestra chamber has three modes of conducting the neural network:

  • A menu lets her choose any of the three cameras as a video source: either the bird’s-eye view or one of the cameras that take a pedestrian’s perspective.
  • A graphical user interface lets her switch between different neural networks and control their parameters.
  • The NeuroHeadset allows her to navigate the parameter space of the selected neural network.

Conducting the Orchestra

Once the visitor feels comfortable conducting the NCO on the small screen, she can perform on the large screen, which is also visible from outside.

On the public screen sliders are not shown, but the conductor may still use a tablet device to access the graphical user interface.

The current position in parameter space is represented by a 3D cursor or wire-frame box, which is very helpful for making the transition from voluntary conducting moves to a style of conducting that is more directly informed by immersion in and interaction with the output of the Chromatographic Neural Network.

The Chromatographic Neural Network

The flow of information is arranged into several processing layers. To realize memory, each processing layer is in turn implemented as a stack of one or more memory layers. This allows us to access the state of a neuron at a previous point in time.


The video layer is made up of two layers, so the system can access the state of any input neuron at the current point in time, and its state in the previous cycle.
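The stacked memory layers can be pictured as a small ring buffer per layer; the following is an illustrative sketch (in the installation itself this role is played by texture buffers on the GPU, not Python objects):

```python
from collections import deque

# Illustrative sketch: each processing layer keeps a short history of its
# states, so a neuron's value at time (t-1) stays addressable while the
# current frame is being written.

class MemoryLayer:
    def __init__(self, depth=2):
        self.frames = deque(maxlen=depth)  # frames[-1] = t, frames[-2] = t-1

    def push(self, frame):
        self.frames.append(frame)          # oldest frame falls out

    def at(self, steps_back=0):
        """State of the layer `steps_back` cycles ago (0 = current)."""
        return self.frames[-1 - steps_back]

video = MemoryLayer(depth=2)
video.push([0.1, 0.2, 0.3])   # frame at t-1
video.push([0.4, 0.5, 0.6])   # frame at t
current, previous = video.at(0), video.at(1)
```

With `depth=2` this matches the two-layer video memory described above; deeper stacks would give access to older states.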

Processing Layers


The Video layer

The Video layer contains the input neurons. Each neuron corresponds to a pixel of the video source. The Video layer provides the input for the Flow layer.

The Ghost Layer

The Ghost layer represents a haunting image from the past. It implements the long-term memory that interferes and interacts with the current visual input. It does not change over time, and is provided as additional input to the Flow layer.


The Flow layer

The Flow layer accumulates the input from the Video layer and the Ghost layer. Each neuron aggregates input from its neighborhood in the Video layer at times (t) and (t−1). The computed 2D vector is directly encoded into the state of the neuron, creating a flow map.

The Blur layers

The Blur layers are used to blur the flow map. While the computation of visual flow is restricted to a very small neighborhood, the blur layer is needed to spread the flow information to a larger region, since flow can only be detected on the edge of motion.

For efficiency reasons the blur function is split into two layers, performing a vertical and a horizontal blur respectively.
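Splitting the blur into a horizontal and a vertical pass is the classic separable-filter optimization: a full n×n kernel needs n² samples per pixel, while two 1D passes need only 2n. A minimal CPU sketch of the idea (the installation itself does this in GLSL shaders):

```python
# Separable box blur: a horizontal pass followed by a vertical pass gives
# the same result as a full 2D box kernel at a fraction of the cost.

def blur_1d(row, radius=1):
    """Average each value with its neighbors within `radius`."""
    n = len(row)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        window = row[lo:hi]                  # shrinks at the edges
        out.append(sum(window) / len(window))
    return out

def blur_2d_separable(image, radius=1):
    # horizontal pass over every row
    h = [blur_1d(row, radius) for row in image]
    # vertical pass: transpose, blur the columns as rows, transpose back
    cols = [blur_1d(list(col), radius) for col in zip(*h)]
    return [list(row) for row in zip(*cols)]

img = [[0, 0, 0],
       [0, 9, 0],
       [0, 0, 0]]
blurred = blur_2d_separable(img)   # the single bright pixel spreads out
```

In the installation, spreading the flow map like this is what carries motion information from the edges of moving objects into their interior.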


Neuron Processing

The state of each neuron corresponds to an RGB color triplet. Every neuron of the Flow layer gets input from corresponding neurons inside a local neighborhood of the input layers. Each of those input samples corresponds to a single synapse. The vector from the center of the neuron towards the input neuron is referred to as the synapse vector.

Color Attraction

To achieve some kind of color dynamics, colors that are close in color space are supposed to attract each other.

The distance between the synapse input and the neuron state in RGB color space serves as a weight, which is used to scale the synapse vector. The sum of scaled synapse vectors results in a single color attraction vector.


Color Flow

While color attraction is the result of color similarities or differences in space, color flow is the result of color changes over time. Rather than calculating the distance of the neuron state to a single synapse input, its temporal derivative is calculated using input from a neuron and its corresponding memory neuron. This time the sum of scaled synapse vectors results in a flow vector.


Both color flow and color attraction vectors are added up and their components are encoded in the flow layer.
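The two mechanisms can be sketched together in a few lines; the neighborhood, the distance metric and the equal weighting of attraction and flow are simplified assumptions for illustration, not the shader’s actual parameters:

```python
import math

# Sketch of a Flow-layer neuron update: color attraction (similarity in
# space) and color flow (change over time) are computed from the same set
# of synapse vectors and summed into one 2D flow vector.

def dist(a, b):
    """Euclidean distance between two RGB triplets."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

MAX_D = math.sqrt(3.0)  # maximum distance in the unit RGB cube

def flow_vector(center, neighbors_now, neighbors_prev):
    """neighbors_*: one ((dx, dy), rgb) pair per synapse, at t and t-1."""
    fx = fy = 0.0
    for ((dx, dy), c_now), (_, c_prev) in zip(neighbors_now, neighbors_prev):
        # color attraction: similar colors pull the neuron towards them
        attraction = 1.0 - dist(center, c_now) / MAX_D
        # color flow: temporal change at the neighbor pushes along the synapse
        change = dist(c_now, c_prev) / MAX_D
        w = attraction + change
        fx += w * dx
        fy += w * dy
    return fx, fy

center = (1.0, 0.0, 0.0)                       # a red neuron
neighbors_now  = [((1, 0), (1.0, 0.0, 0.0)),   # identical red to the right
                  ((-1, 0), (0.0, 0.0, 1.0))]  # distant blue to the left
neighbors_prev = neighbors_now                 # nothing changed since t-1
fx, fy = flow_vector(center, neighbors_now, neighbors_prev)
```

With a static scene only the attraction term contributes, and the example neuron is pulled towards its color-similar neighbor.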


There are various parameters in each layer controlling the amount and direction of color attraction and color flow, the metrics used for calculating color distances, the neuron neighborhood, etc.


All neural computation is performed on the GPU using OpenGL and GLSL shaders. This is the mapping from neural metaphors to OpenGL implementation:

Memory layers → Texture-Buffers
Processing Layers → GLSL shaders
Parameters → GLSL uniforms


In our implementation both color flow and attraction are integrated into a single level flow map. While this generates interesting local interactions, there is little organization on a global level. The work on Multilevel Turing Patterns as popularized by Jonathan McCabe shows that it is possible to obtain complex and visually interesting self organizing patterns without any kind of video input.

Our future research will combine several layers of flow maps, each operating on a different level of detail. Additional directions include alternate color spaces and distance metrics.
In the current model input values are mixed and blurred, resulting in a loss of information over time. We have also been experimenting with entropy-conserving models and are planning to further investigate this direction.

This project is based on two recent artworks, “transits” and “IseethereforeIam”.

Concept: Ursula Damm
Programming: Martin Schneider

urban development kit [2014]

The Urban Development Kit provides tools to ameliorate the atmosphere in contemporary cities. It is a collection of tools, and over time the website aims to become a resource of ideas, concepts and tools for citizen-driven urban design.

Our first kit supports watchful citizens and plants in competing with pavement, concrete and asphalt. It helps plants to interact with modern cities and to prevail against soil sealing. A website and an interactive map enable people to collect photos of “asphalt flowers” in Helsinki and other cities and to monitor the progress of the “cultivation”. With respect to urban environmental research, the Urban Development Kit is a statement about the importance of counteracting the sealing of surfaces in the city. Accordingly, the exhibition shows designs for urban surfaces based on the geometry of the plants themselves.

The work was developed for the Art&HENVI project, organized by the Finnish Bioart Society.

counteracting soil sealing

In 2014, a new version of the Urban Development Kit was presented at a creative cloud workshop organized by Ars Electronica (see the photos from the workshop).

Please look at the Presentation Workshop Osaka!

cosmical sperm osaka

transits [2012]

For the work Transits, Ursula Damm filmed the Aeschenplatz in Basel during a period of 24 hours. Every day, thousands of cars, pedestrians and trams pass Basel’s most important traffic junction. “Transits” captures and alienates the recorded stream of motion with an intelligent algorithm that was developed by the artist and her team: It evaluates the audiovisual data and categorizes the patterns of movement and the color schemes. Based on the human memory structure and the visual system, the artificial neuronal network integrated in the software – where every pixel corresponds to one neuron – computes the visual flow. Various perceptional image layers overlap to generate an intriguing visual language in which stationary picture elements compete against the color scene. This begins at night and leads via dawn and noon to dusk; at the same time it is pervaded by arbitrary passersby, by cars, trams and people in the streets. The generative video interprets movements as atmospheres and eventually throws the viewer back to an individual perception of the city.

A detailed description of the algorithms and a further development of an interface for the installation can be found here.

Transits was produced for the exhibition sensing place of the House of Electronic Arts Basel and is part of the museum’s collection.

The installation Transits uses static video footage of Aeschenplatz in Basel to record the traces of passers-by at an urban traffic junction and to make their characteristics visible. Specially developed software (author: Martin Schneider) treats the entire video image as a neural feature map (Kohonen map). Each pixel of the video image is stored and subsequently “remembered” or processed by special algorithms. On the one hand we wanted elements that linger for a long time to inscribe themselves into the image; on the other hand this image processing has a dynamic of its own: colors attract each other, and movements push pixels in the detected direction. The resulting image oscillates between a description of site-specific events and the inherent dynamics of image processing as it is attributed to our brain, anticipated here by software.


Installation concept:
Ursula Damm
Preliminary work: Matthias Weber
Software: Martin Schneider
Sound: Maximilian Netter

the outline of paradise (sustainable luminosity, video) [2012]

exhibition at ACC Weimar

the outline of paradise – the video

still from the video – a swarm loses its shape

What would our cities look like if advertising messages were produced not by artificial lighting but by swarming midges, glowing like fireflies? “The outline of paradise” explores the promises and capabilities of technoscience and develops an installation out of these narratives.

the outline of paradise from resoutionable on Vimeo.

This artwork was initially produced as a product for a supermarket offering speculative products related to Synthetic Biology:

For our experiment, we train non-biting midges (Chironomidae) to fly in a way that their swarm takes the shape of advertisement messages. The insects are genetically modified to glow in the dark and alter their genetic make-up according to the training and sound input we provide. This initial training will be inherited over generations and keeps the swarm in shape.

How can letters be taught to insects? How can we teach the alphabet to midges?
As chironomidae are sensitive to sound, we use a real-time sound spatialisation system to teach the midges. So far we have only been able to produce clouds of midges forming a simple LED-style font.

Natural midges (chironomidae) form swarms with the shape of a circulating sphere. The swarms consist of male adults congregating for courtship. They are organized through the sound of the wing beats of the male midges. Our system uses the sensitivity of chironomidae to sound and organizes them with synthetic wing-beat sound.

Midges are sensitive to sounds within the range of the wing beat of their species, normally ± 50 Hz around the species-specific frequency.
To teach midges the alphabet, letters are coded with nine different sounds within this range. Through the spatial placement of the loudspeakers, midges learn to react in a certain manner to polyphonic tones by memorizing sound frequencies and the letter-related collective behaviour of their swarm.
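How such a nine-tone code might be organized can be sketched as follows; the base frequency, the even tone spacing and the 3×3 letter grid are illustrative assumptions, not the project’s actual values:

```python
# Illustrative sketch: each letter is drawn on a 3x3 grid of loudspeaker
# positions; each lit cell is assigned one of nine tones inside the
# +/- 50 Hz band around a (hypothetical) species-specific wing-beat pitch.

BASE_HZ = 500.0   # assumed wing-beat frequency of the species
BAND_HZ = 50.0    # tones stay within +/- 50 Hz of the base

# nine evenly spaced tones across the band, one per grid cell
TONES = [BASE_HZ - BAND_HZ + i * (2 * BAND_HZ / 8) for i in range(9)]

# a letter as a 3x3 bitmap (1 = cell where midges should gather)
LETTER_L = [1, 0, 0,
            1, 0, 0,
            1, 1, 1]

def tones_for_letter(bitmap):
    """Return the tones to emit, one per active cell of the letter."""
    return [TONES[i] for i, on in enumerate(bitmap) if on]

playlist = tones_for_letter(LETTER_L)  # five tones for the five lit cells
```

Spatialising each tone at its grid cell’s loudspeaker is what would let the swarm associate a frequency with a position in the letter matrix.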

The idea
There are animals that live in darkness. Some of them have developed a light organ of their own. In the deep sea it enables orientation – at night it helps a glow-worm court a mate. I would like to take up this suggestion from nature and make use of the midges’ courtship. What would our cities look like if companies’ advertising messages were produced not with fluorescent tubes or LEDs, but directly by glowing midges?
Wouldn’t this natural light production be far more sensually appealing than the technoid aesthetics of conventional advertising? Incidentally, such advertising saves energy costs and solves the ecological problem of “light pollution”.
How it began…
In 2010 the first online supermarket of synthetic biology came into being. It was created by my students as a speculative design project within the iGEM competition,
a student competition at MIT in Boston. The team was a cooperation between students of Prof. Roland Eils, Bioquant/Ruprecht-Karls-University Heidelberg, and staff and students of my chair at Bauhaus University Weimar.
The project
Unlike my students, as an artist I do not want to experience my works only virtually – so I have now begun to breed midges in order to train them in a sound-feedback installation so that their swarms fly in formations. In parallel, in collaboration with the University of Heidelberg, I am beginning to explore the possibilities of equipping Chironomus riparius with a luminescence gene. In doing so we draw on preliminary work carried out by the iGEM Team Cambridge 2010 within the E.Glowli project. At the same time we can draw on the findings of research projects on the coding of organs in Drosophila (fruit fly).
Furthermore, there are promising studies on the learning ability of insects. Whether the project ultimately takes its intended form, or whether the circumstances (safety regulations, technical feasibility, ecological problems, the limits of nature) enable its realization, lead to failure, or make a change of plan necessary – this is the actual process to be illustrated. It is part of the project to make the methods and their artifacts sensually comprehensible along the way, by making them the subject of a processual artistic installation that reports on the ecological and social implications of biotechnology.

The concept
“Sustainable Luminosity” is a lighting method for the city that is sustainable and natural. Luciferin, a natural substance, gives glow-worms a light whose intensity and efficiency are superior to any technical light source. The product proposes taking the glow-worms’ courtship as a model for advertising media in cities. Instead of being electronically controlled, illuminated advertising is flown by midges that have been equipped with a luminescence gene and given special flight training. “Breeding” appears here as a form of designing nature alongside engineering, pointing to epigenetics and its influence on the expression of genetic dispositions. The visualization of the “breeding process” in turn reveals the sensory apparatus of the midges and thus shows the deep intervention into the animals’ entire rhythm of life. Sustainable Luminosity takes up a typical problem constellation: conventional technology is replaced by “natural” – because bio-organically produced and thus “more sustainable” – technologies, touted as the problem-solving potential of synthetic biology. That higher organisms are used here instead of bacteria, the usual manipulation organisms of genetic engineering, is an artistic device intended to enable human empathy. At the same time it raises the question of the extent to which higher organisms will be subjected to human purposes by synthetic biology in the future.

The working process
Chironomids (non-biting midges) respond to sound frequencies. Using a real-time sound spatialisation system, we train these midges to swarm in the shape of simple LED-style letters. In elaborate training sessions, the midges learn to form a previously rehearsed word as a flight formation. Once acquired, these flight characteristics are retained and passed on epigenetically to their offspring.
The midges are delivered as larvae. Released on site, the swarms form after a short time and then retrace the genetically stored information in their flight paths.

The technology
Natural non-biting midges (Chironomidae) form spherical swarms. The swarms consist of males that gather for mating in order to be seen by females. They organise themselves by the sound of their wingbeats.
Our system exploits this sensitivity to sound and organises the midges with simulated wingbeat sound. Midges also respond to simulated tones in the range of their own wingbeat; these tones vary by roughly ±50 Hz around the species-specific frequency. But how can non-biting midges be taught letters? How can they be taught the alphabet? We use a real-time sound spatialisation system to instruct the midges, so that the swarms fly in the shape of simple LED-style letters.
To teach the midges the alphabet, letters are coded with nine different tones within this bandwidth and broadcast site-specifically within the letter matrix.
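The coding idea, nine tones spread across the ±50 Hz band and assigned to cells of a letter matrix, can be sketched as follows. This is a purely hypothetical illustration, not the project’s actual software; the base wingbeat frequency, the 3×3 matrix, and all names are assumptions:

```python
# Hypothetical sketch: assign each cell of a 3x3 letter matrix one of nine
# tones within ±50 Hz of an assumed species-specific wingbeat frequency.
BASE_FREQ = 250.0  # assumed wingbeat frequency in Hz (not from the source)

def cell_frequencies(base=BASE_FREQ, band=50.0, cells=9):
    """Spread `cells` tones evenly across [base - band, base + band]."""
    step = 2 * band / (cells - 1)
    return [base - band + i * step for i in range(cells)]

# A letter as a 3x3 on/off matrix (here: a crude 'L')
LETTER_L = [
    [1, 0, 0],
    [1, 0, 0],
    [1, 1, 1],
]

def active_tones(letter, freqs):
    """Return (row, col, frequency) for each cell whose tone is broadcast."""
    tones = []
    for r, row in enumerate(letter):
        for c, on in enumerate(row):
            if on:
                tones.append((r, c, freqs[r * 3 + c]))
    return tones

freqs = cell_frequencies()
for r, c, f in active_tones(LETTER_L, freqs):
    print(f"cell ({r},{c}) -> {f:.1f} Hz")
```

Each active cell would then be rendered by the spatialisation system as a localised tone source at the corresponding position of the letter matrix.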


I see therefore I am [2011]

points-of-view: two cameras connected to a computer. The steles carrying the cameras are fitted with casters; they are movable and can be repositioned by visitors. Buttons on the steles allow visitors to activate the image of the respective camera as the input feed for the software.
Inside the “Haus am Horn”, the historic Bauhaus model house, two cameras stand on movable wooden tripods. One camera looks out of the window, the other is directed at the building’s interior. Visitors can move these cameras and select them as the input for an image that emerges as a projection in the living room: the first camera sensitises the system to a view, while the second camera’s image emerges in the active regions of the first image. Something becomes visible only where something has already happened before.
The installation operates with the mechanisms of perception and examines algorithmic procedures for their associative and artistic qualities. Like us humans, points-of-view uses two “eyes” to see. However, these cameras never “see” simultaneously, but offset in space and time. Mounted on two movable steles fitted with casters, they observe the room in which they stand from the perspective of the exhibition’s visitors. By pressing a button on a stele, visitors can activate the respective camera, feeding its image to a video projection. The projection computes an image that not only shows the active camera’s current perspective but, via a programmed memory (a simplified neural map), blends the current image with the previous perspectives. The software recognises something primarily in those regions of the image where changes have already occurred before. The result is a superimposition of living “moments” and perspectives, resembling the way our perception recognises images: not from prior knowledge, but by deriving an understanding of the new from what came before. By moving the camera stands around the room and repeatedly activating the other camera, visitors can playfully explore the software’s behaviour. In doing so, they experience how technology comes ever closer to the principles of human understanding and perception, and how the visual aesthetics of past epochs (such as Impressionism or Cubism) come back to life in the artefacts of programmed image worlds.
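The memory principle described above can be sketched minimally: an activity map fades slowly and is refreshed by frame differences from one camera, and the other camera’s image is revealed only where that map remembers motion. This is an illustration only, not the installation’s actual code; all names and parameters are invented:

```python
import numpy as np

def update_activity(activity, frame, prev_frame, decay=0.95, gain=0.5):
    """Blend new frame differences into a slowly fading activity map."""
    diff = np.abs(frame.astype(float) - prev_frame.astype(float)) / 255.0
    return np.clip(activity * decay + diff * gain, 0.0, 1.0)

def reveal(activity, frame_b):
    """Show camera B's image only where the activity map remembers motion."""
    return (frame_b.astype(float) * activity).astype(np.uint8)

# Toy frames: a single change in the top-left corner of camera A's view
h = w = 4
prev_a = np.zeros((h, w), np.uint8)
curr_a = prev_a.copy()
curr_a[0, 0] = 255                       # motion happens here
activity = update_activity(np.zeros((h, w)), curr_a, prev_a)

frame_b = np.full((h, w), 200, np.uint8)
out = reveal(activity, frame_b)          # camera B appears only at (0, 0)
```

Because the activity map decays rather than resets, older “moments” linger and blend with newer ones, which is the superimposition effect the text describes.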

U-Bahnhof Schadowstrasse [2016]

‘Turnstile’ – an interactive installation

with kind permission of

Turnstile, generative installation by Ursula Damm (photo: Thomas Mayer)

On the front wall of the Schadowstrasse underground station, an LED wall displays a generative video. In front of the wall, a light shaft extends to the surface of the plaza where a video camera is set up. The camera continuously films passing pedestrians on the plaza and streams the feed to a specially developed generative software application (coded by Felix Bonowski) which derives proposed geometries for structures based on the movement patterns of the pedestrians. These interpretations of the real-time video generate new geometries for the location and propose axes and parcels.

Turnstile, Ursula Damm, Schadowstrasse Düsseldorf 2016 (photo by Thomas Mayer)

Two elevators, to the left and right of the large video image, lead from the plaza to the rail platform.

Pattern drawings on aerial photos of Düsseldorf, Schadowstrasse

Turnstile (Drehkreuz) from resoutionable on Vimeo.
On the platform, the geometric structures can be heard as a sound interpretation (by Yunchul Kim). At the centre of the artistic intervention is the video image and its artistic concept, which is also reflected in the design of the entrance areas: at 21 locations, plates are inserted into the blue glass of the underground station, displaying geometries over districts of Düsseldorf.



pattern drawing on aerial photo of Düsseldorf, photo by Achim Kukulies

In the east concourse is the aerial image of the city of Düsseldorf that was analysed according to the geometric concept.


Geometric pattern generation on the spatial structure of Düsseldorf

As excerpts from this aerial picture, 16 locations in Düsseldorf were interpreted at the level of a local aerial image: these urban areas are described with regular polygons as energy centres that have adapted to one another through the development of the city’s architecture (see the text on the concept of the generated patterns).


Plates at the west concourse

The fine structure of the patterns sets the sensibility of nature, and of the human formative gesture, against the massive edifice, calling to mind a mode of formation that creates sweeping interconnections through the symbiotic organisation of a multitude of individual elements. In doing so, this formative process enacts the social principle through which individuals experience their effect on the whole.


The pattern drawings are generated in slow steps: First a line drawing is created over the image of the city. As this progresses, important motion axes of traffic and pedestrians are emphasised. The areas these axes enclose become polygons. At this point, the angles of the lines and axes are examined in the search for whole-number fractions of regular polygons.

The smallest polygon integrating all of the symmetries at the location (for instance, fragments of pentagons and squares would be combined into a 20-gon) is then used to describe an intersection.
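The “smallest polygon integrating all of the symmetries” is simply the least common multiple of the fragments’ side counts, as the pentagon-and-square example shows. A short illustrative sketch (the function name is my own):

```python
from math import gcd
from functools import reduce

def integrating_polygon(fragment_orders):
    """Smallest n-gon whose symmetry contains all fragment symmetries:
    the least common multiple of the fragments' side counts."""
    return reduce(lambda a, b: a * b // gcd(a, b), fragment_orders)

print(integrating_polygon([5, 4]))   # pentagon + square fragments: a 20-gon
print(integrating_polygon([6, 4]))   # hexagon + square fragments: a 12-gon
```

The same rule extends to any number of fragments, so an intersection mixing triangular, square, and pentagonal fragments would be described by a 60-gon.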

A subsequent step is the search for connections (network) between large neighbouring polygons.

Work with the aerial images revealed that the city centre has very small polygons, while outer areas have a significantly more expansive structure. Thus, density is indicated by the presence of small polygons and complex symmetries. Often, the transition from non-rectangles to rectangles can indicate historical breaks in the urban landscape. In this way, the interpretations represent a study of the settlement and planning history of the city.

18_catalogue 27photoshop 10_photoshop 24R_HGVf_12<-6-F_eingang_u_West-nord_5_HGV

The sound installation

The generative video installation interprets traces of movement created by geometric “agents.” The activity of these agents is translated into sounds which track the visual artefacts. The sounds are thus like the noise that the virtual artefacts generate in their world, and represent an extended artistic “level of reality” of the installation.


  • Select a location (origin)
  • Determine the movement axes of people and traffic
  • Check whether these axes stand at angles to one another which, when mirrored and rotated, can form a polygon whose sides all extend outward equally
  • Draw this polygon to approximate the natural geometry of the location
  • Check whether, starting from these geometries intrinsic to the location, a surface structure (tessellation) is possible that periodically repeats the original geometries
  • Determine whether and how the areas in the aerial image of the location fit into the revealed geometry of the place
  • Enhance existing structures by developing their geometries
  • Connect existing structures into the logic of the original geometry
  • Look at the surroundings: how can locations be networked with one another using geometric, i.e. mathematically describable, generation schemas?
  • What proportions do these geometries build up among one another?




Concept: Ursula Damm
Programming: Felix Bonowski
Sound: Yunchul Kim

Text by Georg Trogemann on his visit to the opening

official website Stadt Düsseldorf