Kontinuum is a generative installation based on observation of the stream Chriesbach and analyses of its water. It is a public artwork for a laboratory facility of the EAWAG (Swiss Federal Institute of Aquatic Science and Technology) Zürich. The installation was set up in July 2021.
The installation deals with the Chriesbach; the study of its water is the starting point of the work. Through continuous observation we gain views that tell of the essence of the water, its nature and, through reflection, its environment. In the laboratory building, projections report to viewers in real time on the condition of the Chriesbach in two aspects: a visual one and a data-analytical one. Viewers experience the state of the stream and the weather through the projections, as in a kind of “atmospheric weather report”. While one projection transforms moving images from three cameras – located on the banks of the stream – into a machine-learned impressionistic abstraction, the other takes abstract measurement data and translates it into a moving image language recognisable to humans as water dynamics. With these two views, Kontinuum develops a continuous transition from an abstract representation via numbers and signs to the photographic image.
For this work we use algorithms that temporally condense a current video recording. This creates images that tell of the passage of water, its fleetingness, its adaptability and its changing colours through the light reflections of the environment. The videos of the Chriesbach stand at one end of the continuum, whose other end is determined by data representations of its water. The image moves between these poles, allowing the viewer to experience which aesthetics nature offers and which are the expression of human knowledge systems. In this way, the work reflects the activity of a research institute and allows a reconnection to the object of study in its original form as well as its transformation through the elements of (digital) processing. The work, however, provides yet another continuum: it shows the Chriesbach in its fluctuations, changes, moods and extremes throughout the year. Thus, the work can be seen as a form of barometer for the state of the stream and informs the residents of the institute about the “outdoors” as an aesthetic manifestation.
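Such temporal condensation can be sketched as a simple exponential moving average over video frames. The following is an illustrative Python sketch under simplifying assumptions, not the actual code of Kontinuum:

```python
import numpy as np

def condense(frames, alpha=0.05):
    """Exponential moving average over a frame sequence: transient
    motion blurs away while persistent colours remain, like a
    long-exposure image that keeps updating."""
    memory = frames[0].astype(float)
    for frame in frames[1:]:
        memory = (1.0 - alpha) * memory + alpha * frame
    return memory

# Two synthetic "frames": a static background and a brief bright spot.
bg = np.zeros((4, 4))
spot = bg.copy()
spot[2, 2] = 255.0
frames = [bg] * 10 + [spot] + [bg] * 10
result = condense(frames)
# The transient spot leaves only a faint trace in the condensed image.
```

The smaller `alpha` is, the longer the stream's history lingers in the image.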
Membrane is an art installation which was produced as the main work of a similarly named exhibition at the Kunstverein Tiergarten in Berlin in early 2019. It builds on a series of generative video installations with real-time video input. Membrane allows the viewer to interact directly with the generation of the image by a neural network, here the so-called TGAN algorithm. An interface allows visitors to experience the ‘imagination’ of the computer, guiding them according to curiosity and personal preferences.
The images of Membrane are derived from a static video camera observing a street scene in Berlin. A second camera is positioned in the exhibition space and can be moved around at will. Two screens show both scenes in real time.
In my earlier artistic experiments within this context we considered each pixel of a video data stream as an operational unit. One pixel learns from colour fragments during the running time of the programme and delivers a colour which can be considered as the sum of all colours during the running time of the camera. This simple method of memory creates something fundamentally new: a recording of patterns of movement at a certain location.
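This per-pixel memory can be sketched as a running mean; the following is a hypothetical Python illustration of the principle, not the original programme:

```python
import numpy as np

def pixel_memory(frames):
    """Each pixel delivers the mean of every colour it has seen:
    the sum of its history divided by the running time, updated
    incrementally so only one accumulator per pixel is stored."""
    acc = np.zeros_like(frames[0], dtype=float)
    for n, frame in enumerate(frames, start=1):
        acc += (frame - acc) / n   # incremental mean, constant memory
    return acc

# Alternating black and white frames average out to mid-grey.
frames = [np.zeros((2, 2)), np.full((2, 2), 255.0)] * 5
memory = pixel_memory(frames)
```

A pixel that sees constant motion thus converges to a colour no single frame ever contained: a recording of movement patterns at that location.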
On a technical level, Membrane not only controls pixels or clear-cut details of an image, but image ‘features’ which are learnt, remembered and reassembled. With regard to the example of colour: we choose features, but their characteristics are delegated to an algorithm. TGANs (Temporal Generative Adversarial Nets) implement ‘unsupervised learning’ through the opposing feedback of two subnetworks: a generator produces short sequences of images and a discriminator evaluates the artificially produced footage. The algorithm has been specifically designed to learn representations of uncategorised video data and, with their help, to produce new image sequences.
We extend the TGAN algorithm by adding a wavelet analysis, which allows us to interact with image features, as opposed to only pixels, from the start. Thus, our algorithm allows us to ‘invent’ images in a more radical manner than classical machine learning would allow.
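Purely as an illustration of how a wavelet step turns raw pixels into edge- and texture-like features, here is a generic single-level 2D Haar transform in Python. This is a hedged sketch of the general technique, not our actual wavelet analysis inside the modified TGAN:

```python
import numpy as np

def haar2d(img):
    """One level of a 2D Haar wavelet transform (even-sized input):
    splits an image into an approximation band and horizontal /
    vertical / diagonal detail bands -- edge- and texture-like
    'features' rather than raw pixels."""
    a = (img[::2, :] + img[1::2, :]) / 2.0   # row-pair averages
    d = (img[::2, :] - img[1::2, :]) / 2.0   # row-pair differences
    cA = (a[:, ::2] + a[:, 1::2]) / 2.0      # approximation
    cH = (d[:, ::2] + d[:, 1::2]) / 2.0      # horizontal detail
    cV = (a[:, ::2] - a[:, 1::2]) / 2.0      # vertical detail
    cD = (d[:, ::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return cA, cH, cV, cD

img = np.zeros((4, 4))
img[:, 1:] = 1.0                 # an image containing one vertical edge
cA, cH, cV, cD = haar2d(img)
```

The vertical edge appears only in the vertical-detail band, which is exactly what makes such coefficients usable as features.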
In practical terms, the algorithm speculates on the basis of its learning and develops its own, self-organised temporality. However, this does not happen without an element of control: feature classes from a selected dataset of videos are chosen as target values. In our case, the dataset consists of footage of street views from other cities around the world, taken while travelling.
The concept behind this strategy is not to adapt our visual experience of Berlin to global urban aesthetics but rather to fathom its specificity and to invent by association. These associations can be localised, varied and manipulated within the reference dataset. Furthermore, our modified TGAN algorithm generates numerous possibilities to perform dynamic learning on both short and long timescales and, ultimately, to be controlled by the user/visitor. The installation itself allows the manipulation of video footage from an unchanged street view to purely abstract images, based on the found features of the footage. The artwork wants to answer the question of how we want to alter realistic depictions. What are the distortions of ‘reality’ we are drawn to? Which fictions lie behind these ‘aberrations’? Which aspects of the seen do we neglect? Where do we go with such shifts in image content, and what will be the perceived experience at the centre of artistic expression?
From an artistic point of view, the question arises: how can something original and new be created with algorithms? This is the question behind the software design of Membrane. Unlike other AI artworks, we don’t want to identify something specific within the video footage; rather, we are interested in how people perceive the scenes. That is why our machines look at smaller, formal image elements and features whose intrinsic values we want to reveal and strengthen. We want to expose visitors to intentionally vague features: edges, lines, colours, geometrical primitives, movement. Here, instead of imitating a human way of seeing and understanding, we reveal the machine’s way of capturing, interpreting and manipulating visual input. Interestingly, the resulting images resemble pictorial developments of classical modernism (progressing abstraction on the basis of formal aspects) and repeat artistic styles like Pointillism, Cubism and Tachisme in a uniquely unintentional way. These styles fragmented the perceived into individual sensory impressions as part of the pictorial transformation. Motifs become features of previously processed items and successively lose their relation to reality. At the same time, we ask whether these fragmentations of cognition proceed in an arbitrary way, or whether there may be other concepts of abstraction and imagery ahead of us.
From a cultural perspective, two questions remain: – How can one make decisions within these aesthetic areas of action (parameter spaces)? – Can the shift of perspective from analysis to fiction help us assess our analytical procedures in a different way – understanding them as normative examples of our societal fictions, serving predominantly as a self-reinforcement of present structures?
Thus unbiased artistic navigation within the excess/surplus of normative options of action might become a guarantor for novelty and the unseen.
We use Max/MSP to identify the pitch of natural fly songs and modulate them with their own tones. Naturally occurring harmonics are shown in a real-time visualisation. First shown at the MO Museum in Vilnius, May 2019.
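The pitch-tracking step itself runs in Max/MSP; as a rough, language-neutral illustration of what such tracking does, here is a minimal autocorrelation pitch estimator in Python (a sketch under simplifying assumptions, not the patch used in the work):

```python
import numpy as np

def estimate_pitch(signal, sr, fmin=50, fmax=500):
    """Crude autocorrelation pitch tracker: the lag with the
    strongest self-similarity inside the plausible range gives
    the fundamental period."""
    signal = signal - signal.mean()
    ac = np.correlate(signal, signal, mode='full')[len(signal) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

sr = 8000
t = np.arange(2000) / sr              # a quarter second of audio
tone = np.sin(2 * np.pi * 200 * t)    # 200 Hz test tone
pitch = estimate_pitch(tone, sr)
```

Once a pitch is known, harmonically related tones can be synthesized on top of the fly song.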
With our Drosophila Karaoke Bar, we want to look at one of the most widely used model organisms in medicine and brain research: the fruit fly Drosophila. While humans in their everyday lives keep flies at a distance, science uses these creatures for experiments. Drosophila are cheap, they reproduce quickly, have enough genetic resemblance to humans to study genetic diseases, and a brain small enough for us to study.
One striking and little-known behaviour of flies is their mating song. Fly males sing to females by vibrating their wings in rhythmical patterns. With our karaoke bar we want to offer a possibility to sing with flies, to experience their nature and culture in a shared sensual experience.
Can our karaoke bar bring items from our high-tech culture back to our environment? Does it allow the audience to immerse themselves in science? Our invitation to sing with flies offers a performance through which to experience a holistic approach to scientific investigation. The setup raises ecological questions: to what degree do we need to separate the habitats of humans and flies to feel comfortable? Which measures are necessary to make their faint songs audible to humans? How does a laboratory environment affect the behaviour of flies? Under which conditions are we able to enjoy their presence?
The installation invites visitors to establish a direct exchange with fruit flies through a technical interface. Software transforms human speech into signals that can be perceived by flies, allowing auditory feedback between people and animals. For blending human and fly songs we use a special signal-processing vocoder provided by Bernd Edler of the Fraunhofer Institute for Integrated Circuits.
Visitors are invited to talk and sing with flies. Birgit Brüggemeier, a neuroscientist and fly researcher, explains in a video the meaning of the separate constituents of fly song. She informs listeners about the syntax and semantics of Drosophila songs, in order to give visitors a better understanding of fly communication. The video encourages the visitors to sing and speak to flies.
A 2D sound visualization enhances the auditory perception of the flies’ sphere with visual monitoring of fly songs on a screen: the location, amplitude and pattern of the sound sources shall help performers identify their influence on fly behaviour.
A large pile of sand covers the habitat of the flies, its weight isolating their buzzing from the noise of the human show. The massive sand pile represents the sensory and semantic gap between a fly and a human.
In a future version, another set of headphones offers an ‘anthropocentric’ viewpoint on flies: we track the frequencies occurring in our fly community, consisting of courtship songs and flight sounds (which differ by about one octave). A specially designed software enhances the real-time sounds by modulating them with previously found chords. This software raises the question: are there more hidden patterns of communication within fly songs than science has yet described?
With the Karaoke Bar we learn to become silent and attentive, so as to hear the voice of Drosophila. Our setting offers a possibility to communicate with Drosophila at eye (ear) level. By concentrating on Drosophila’s own way of expression (what kind of signals do they send to their surroundings? How do they communicate? How does it sound when they approach their comrades? What do they want to negotiate? What are our common windows of perception?), we want to circumvent an anthropocentric world view. The installation not only translates the signals of Drosophila (sonifying and visualizing them, as has been done before), but allows a shared practice in a direct feedback situation, offering a novel sensual experience.
Ursula Damm (Artist, Project Lead)
Birgit Brüggemeier (Neuroscientist, former Fly Researcher)
When I left the countryside and moved to a city, I began to miss the sound of the fields and the forest. And when I later returned to Diedesfeld, a small village in the middle of vineyards, something was gone. It took me a while to figure out that I missed the sound of insects, and that this sound had been like a confirmation of a robust ecological balance. Only years later did science prove that insecticides had reduced insect populations by up to 80 % of their former presence.
Christina Meissner, Teresa Carrasco and I gathered in Weimar to experience the singing of chironomid midges (Chironomus riparius, commonly used in ecotoxicology) and their ability to react to our music. In a direct feedback situation between humans and animals, technology should be used only to adapt our senses and make it easier to understand the message of the other. In a first performance we noticed that Christina Meissner, with her cello, was able to stimulate lazy midges (stimulation – sound example from our first session) to start swarming intensively (swarming in dialogue). We were thrilled to notice how easily and obviously humans and midges interact. In our second concert it was no longer necessary to force the midges into swarming; instead we developed a kind of question and answer, listening and responding to the phrases of the midges. Our first performance ready for publication (in full length) is here.
Our concerts can be seen as a call for the subtle atmosphere which allows insects to stay in our neighbourhood. And our readiness to listen to them.
If you appreciate midges, you might also look at the following video, showing synchronized swarms at Taubensuhl, Palatinate, Germany. They can be observed on only a few days per year – I was very happy to record them. When I came back with my sound equipment, they were gone.
On the front wall of the Schadowstrasse underground station, an LED wall displays a generative video. In front of the wall, a light shaft extends to the surface of the plaza where a video camera is set up. The camera continuously films passing pedestrians on the plaza and streams the feed to a specially developed generative software application (coded by Felix Bonowski) which derives proposed geometries for structures based on the movement patterns of the pedestrians. These interpretations of the real-time video generate new geometries for the location and propose axes and parcels.
Two elevators, to the left and right of the large video image, lead from the plaza to the rail platform.
Pattern drawings on aerial photos of Düsseldorf, Schadowstrasse
Turnstile (Drehkreuz) from resoutionable on Vimeo.

On the platform, the geometric structures can be heard as a sound interpretation (by Yunchul Kim). At the centre of the artistic intervention is the video image and its artistic concept. The concept is reflected in the design of the entrance areas. Plates are inserted in the blue glass of the underground station at 21 locations, displaying geometries over districts of Düsseldorf.
In the east concourse is the aerial image of the city of Düsseldorf that was analysed according to the geometric concept.
As excerpts from this aerial picture, 16 locations in Düsseldorf were interpreted at the level of a local aerial image. These urban areas were described with regular polygons as energy centres which fitted themselves together through the development of the city architecture (see the text on the concept of the generated patterns).
The fine structure of the patterns juxtaposes both the sensibility of nature and the human, formative gestures against the massive edifice, calling to mind a mode of formation that creates sweeping interconnections through the symbiotic organisation of a multitude of individual elements. In doing so, this formative process completes the social principle through which individuals experience their effect on the whole.
The pattern drawings are generated in slow steps: First a line drawing is created over the image of the city. As this progresses, important motion axes of traffic and pedestrians are emphasised. The areas these axes enclose become polygons. At this point, the angles of the lines and axes are examined in the search for whole-number fractions of regular polygons.
The smallest polygon integrating all of the symmetries at the location (for instance, five-angled and four-angled fragments would be assembled into a 20-sided polygon) is then used to describe an intersection.
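This combination rule amounts to taking the least common multiple of the side counts, as a small Python sketch shows (the function name is ours, for illustration only):

```python
from math import gcd

def smallest_common_polygon(sides):
    """Least common multiple of the side counts: the smallest
    regular polygon whose symmetries contain those of all the
    given regular fragments."""
    n = 1
    for s in sides:
        n = n * s // gcd(n, s)   # lcm accumulated pairwise
    return n

# The example from the text: five- and four-angled fragments
# assemble into a 20-sided polygon.
result = smallest_common_polygon([5, 4])
```

For five- and four-angled fragments this yields a 20-sided polygon, matching the example above.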
A subsequent step is the search for connections (network) between large neighbouring polygons.
Work with the aerial images revealed that the city centre has very small polygons, while outer areas have a significantly more expansive structure. Thus, density is indicated by the presence of small polygons and complex symmetries. Often, the transition from non-rectangles to rectangles can indicate historical breaks in the urban landscape. In this way, the interpretations represent a study of the settlement and planning history of the city.
The sound installation
The generative video installation interprets traces of movement created by geometric “agents.” The activity of these agents is translated into sounds which track the visual artefacts. As such, the sounds form the noise that the virtual artefacts generate in their world, and thus represent an extended artistic “level of reality” of the installation.
Select a location (origin)
Determine the movement axes of people and traffic
Look to see if these axes are at angles to one another, which when mirrored and rotated can form a polygon, the sides of which all extend outward equally
Draw this polygon to approximate the natural geometry of the location
Look to see if, starting from this, the intrinsic geometries of the location can form a surface structure, (tessellation) that periodically repeats the original geometries
Determine whether and how, in the aerial image of the location, the areas fit together in the revealed geometry of the place
Enhance existing structures by developing their geometries
Connect existing structures into the logic of the original geometry
Konzept: Ursula Damm Programmierung: Felix Bonowski Sound: Yunchul Kim
The Urban Development Kit is a collection of tools to improve the atmosphere of contemporary cities. Over time, the website aims to become a resource of ideas, concepts and tools for citizen-driven urban design.
One of our kits supports watchful citizens and plants in competing with pavement, concrete and asphalt. It helps plants to interact with modern cities and to prevail against soil sealing. A website and an interactive map enable people to collect photos of “asphalt flowers” in Helsinki and other cities and to monitor the progress of the “cultivation”. With respect to urban environmental research, the Urban Development Kit is a statement about the importance of counteracting the sealing of surfaces in the city. Accordingly, the exhibition shows designs for urban surfaces based on the geometry of the plants themselves.
The work was developed for the Art&HENVI project, organized by the Finnish Bioart Society.
In 2014, a new version of the Urban Development Kit was presented at a creative cloud workshop organized by Ars Electronica (see the photos from the workshop).
Non-biting midges (Chironomidae) are bred in an aquarium. Inside the aquarium, midge eggs and larvae swim in sand and water. They are ventilated and supplied with abundant artificial daylight. The choice of midges (Chironomus riparius, a laboratory strain) allows for captive breeding.
The scientific paper
the text we highlighted
aggression and genes
about swarms and the few we know about them
field observations - scientific practices beyond the laboratory
scientific practices beyond the laboratory
Ingrid Bergmann about trained flies in Hollywood
For a performance I invited Christina Meissner to improvise on the theme of the wingbeat sound. We could see that the traces of the midges were visibly influenced, especially by dark plucking sounds.
"the outline of paradise" - view into the soundbox
computer control (traces) of sound input
video camera, loudspeaker
Christina Meissner playing
playing with the swarm
the midges, loudspeakers
Christina Meissner with control monitor
traces of midges on monitor
"the outline of paradise" - the soundbox
This setting allows us to find out how swarms develop and how they can be influenced. The installation follows “sustainable luminosity” and explores the possibilities of training midges and passing this behaviour on to the next generations.
Chromatographic Orchestra is an artistic installation which allows a visitor to direct a software framework with an EEG device. In an exhibition environment with semi-transparent video screens, a visitor sits in an armchair and learns to navigate – unconsciously, with his or her brain waves – the parameter space of our software, NeuroVision.
NeuroVision interacts with live video footage of the exhibition site and its surroundings. Navigating with his or her own brain waves, the visitor can control the degree of abstraction of a generative (machine-learning) algorithm performed on the footage of several nearby video cameras.
The installation refers back to painting techniques of the late 19th and early 20th century, when painting became more an analysis of the perception of a setting than a mere representation of it. Impressionism and Cubism fragmented the items of observation, while the mode of representation was given by the nature of the human sensory system.
The installation “chromatographic orchestra” does not apply arbitrary algorithms to the live footage: we developed a software framework – NeuroVision – which mimics the visual system of the human brain. Thus we ask whether our algorithms meet the well-being of the spectator by anticipating the processing steps of our brain.
How much complexity can our senses endure, or rather, how could we make endurable what we see and hear? Many communication tools have been developed to adjust human capabilities to the requirements of the ever more complex city.
Our installation poses the opposite question: How can information emerging from the city be adjusted to the capabilities of the human brain, so processing them is a pleasure to the eye and the mind?
At the core of our installation is the NeuroVision Sandbox, a custom made framework for generative video processing in the browser based on WebGL shaders.
Inside this Sandbox we developed several sketches, culminating in the “Chromatographic Neural Network”, where both optical flow and color information of the scene are processed, inspired by information processing in the human visual system.
We critically assess the effect of our installation on the human sensory system:
Does it enhance our perception of the city in a meaningful way?
Can it and if so – how will it affect the semantic level of visual experience?
Will it create a symbiotic feedback loop with the visitor’s personal way to interpret a scene?
Will it enable alternate states of consciousness? Could it even allow visitors to experience the site in a sub-conscious state of “computer augmented clairvoyance”?
In a location close to the site a single visitor directs a video-presentation on a large screen with a setup we like to call “the Neural Chromatographic Orchestra” (NCO). Our installation uses an EEG-Device (Emotiv NeuroHeadset) that lets visitors interact with a custom neural network. The setup allows visitors to navigate through various levels of abstraction by altering the parameters of the artificial neural net.
With the NCO device, a visitor can select and explore real-time views provided by three cameras – located in public space – with different perspectives on the passers-by (bird’s-eye view and close-ups).
The installation is based on the NeuroVision Sandbox used in the development of “transits”. Unlike transits, Chromatographic Ballads uses multi-channel real-time video input and enables a visitor to interact directly with the neural network via biofeedback.
The Neural Chromatographic Orchestra investigates how human perception reacts to the multifaceted visual impressions of public space via an artistic setting. Using an EEG-Device visitors can interact with a self-organizing neural network and explore real-time views of an adjacent hall from several perspectives and at various levels of abstraction.
The Chromatographic Neural Network is a GPU-based video processing tool. It was inspired by parallel information processing in the visual system of the human brain. Visual information processing inside the brain is a complex process involving various processing stages. The visual pathway includes the retina, the Lateral Geniculate Nucleus (LGN) and the visual cortex.
Low-level visual processing is already active in the various layers of the retina. The interconnection of neurons between retina layers, and the ability to retain information using storage or delayed feedback, allows filtering of the visual image in the space and time domains.
Both image filters and motion detection can easily be achieved by accumulating input from neurons in a local neighborhood, in a massively parallel way.
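As an illustration of this idea (not the shader code of the installation), such neighborhood accumulation of a temporal difference can be sketched in a few lines of Python:

```python
import numpy as np

def motion_map(prev, curr, radius=1):
    """Temporal difference per pixel, then accumulation over each
    pixel's local neighbourhood (wrap-around borders). Every output
    value depends only on a small patch, so the whole map could be
    computed for all pixels in parallel."""
    diff = np.abs(curr.astype(float) - prev.astype(float))
    out = np.zeros_like(diff)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += np.roll(np.roll(diff, dy, axis=0), dx, axis=1)
    return out

prev = np.zeros((5, 5))
curr = prev.copy()
curr[2, 2] = 1.0          # a single pixel changes between frames
m = motion_map(prev, curr)
```

The response spreads over the 3×3 neighbourhood of the changed pixel, exactly the kind of local accumulation described above.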
Our Chromatographic Neural Network uses this approach to cluster colors and to compute the visual flow (or retina flow) from a video source. The resulting attraction vectors and flow vectors are used to transform the memory retained in the memory layer.
The visual output of the system directly corresponds to the state of the output layer of the neural network. The neural layers of the Chromatographic Neural Network are connected to form a feedback loop. This gives rise to a kind of homeostatic system that is structurally coupled to the visual input but develops its own dynamics over time.
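The homeostatic character of such a feedback loop can be illustrated with a toy update rule (a deliberately simplified Python sketch, not the network itself): a layer state is fed back through a saturating function while being driven by the input, and settles into its own bounded dynamics.

```python
import numpy as np

def step(state, video_in, feedback=0.9, drive=0.1):
    """One update: the layer feeds back on itself through a
    saturating nonlinearity while being driven by the input."""
    return feedback * np.tanh(state) + drive * video_in

state = np.zeros(4)
constant_input = np.ones(4)
for _ in range(200):
    state = step(state, constant_input)
# The state neither dies out nor explodes: it settles into a
# bounded equilibrium shaped by, but not identical to, the input.
```

Structural coupling here means the input drives the dynamics without dictating the state directly.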
A visitor enters the site – a highly frequented passage, a spacious hall or a public place. Two video cameras, mounted on tripods, can be moved around at will.
Another camera observes the passers-by – their transits and gatherings – from an elevated location. The video footage from this site is streamed into a neighboring room – the orchestra chamber of the Neural Chromatographic Orchestra.
Here one can see – in front of a large video wall – a monitor displaying the videos from the adjacent room, and the “orchestra pit”: an armchair equipped with a touch device and a neuro-headset. The video wall, showing abstract interpretations of the site itself, should ideally be visible both from the orchestra pit and from the large hall.
The Orchestra Chamber
Inside the chamber the visitor is seated in a comfortable armchair and an assistant helps her put on and adjust the neuro-headset.
The orchestra chamber should be isolated from the public area as much as possible. A sense of deprivation from outside stimuli allows the visitor to gain control over her own perception and achieve a state of mind similar to meditation or clairvoyance.
The Orchestral Performance
Training Cognitive Control
A performance with the Neural Chromatographic Orchestra starts with a training of up to six mental actions, corresponding to the “push/pull”, “left/right“ and “up/down” mental motions provided by the Emotiv Cognitiv suite. The training typically lasts 10 to 30 minutes.
Playing the Sandbox
After successful training the visitor is asked to sit in front of the NeuroVision Sandbox:
The visitor in the orchestra chamber has three modes of conducting the neural network:
A menu lets her choose any of the three cameras as a video source: either the bird’s-eye view or one of the cameras that take a pedestrian’s perspective.
A graphical user interface lets her switch between different neural networks and control their parameters.
The NeuroHeadset allows her to navigate the parameter space of the selected neural network.
Conducting the Orchestra
Once the visitor feels comfortable conducting the NCO on the small screen, she can perform on the large screen, which is also visible from the outside.
On the public screen sliders are not shown, but the conductor may still use a tablet device to access the graphical user interface.
The current position in parameter space is represented by a 3D cursor or wire-frame box, which is very helpful for making the transition from voluntary conducting moves to a style of conducting that is more directly informed by immersion in and interaction with the output of the Chromatographic Neural Network.
The Chromatographic Neural Network
The flow of information is arranged into several processing layers. To realize memory, each processing layer is in turn implemented as a stack of one or more memory layers. This allows us to access the state of a neuron at a previous point in time.
The video layer is made up of two layers, so the system can access the state of any input neuron at the current point in time, and its state in the previous cycle.
The Video layer
The Video layer contains the input neurons. Each neuron corresponds to a pixel of the video source. The Video layer provides the input for the Flow layer.
The Ghost Layer
The Ghost layer represents a haunting image from the past. It implements the long-term memory that interferes and interacts with the current visual input. It does not change over time and is provided as additional input to the Flow layer.
The Flow layer
The Flow layer accumulates the input from the Video layer and the Ghost layer. Each neuron aggregates input from its neighborhood in the Video layer at times (t) and (t-1). The computed 2D vector is directly encoded into the state of the neuron, creating a flow map.
The Blur layers
The Blur layers are used to blur the flow map. While the computation of visual flow is restricted to a very small neighborhood, the blur layer is needed to spread the flow information to a larger region, since flow can only be detected on the edge of motion.
For efficiency reasons the blur function is split into two layers, performing a vertical and a horizontal blur respectively.
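That two 1D passes reproduce the full 2D blur can be verified in a small Python sketch (illustrative only; the installation performs this in GLSL on the GPU):

```python
import numpy as np

def blur_1d(img, kernel, axis):
    """Convolve along a single axis (with wrap-around borders)."""
    out = np.zeros_like(img, dtype=float)
    r = len(kernel) // 2
    for i, k in enumerate(kernel):
        out += k * np.roll(img, i - r, axis=axis)
    return out

kernel = np.array([0.25, 0.5, 0.25])      # small binomial kernel
rng = np.random.default_rng(0)
img = rng.random((8, 8))

# Two cheap 1D passes (vertical, then horizontal) ...
separable = blur_1d(blur_1d(img, kernel, axis=0), kernel, axis=1)

# ... give the same result as one expensive 2D pass.
kernel2d = np.outer(kernel, kernel)
direct = np.zeros_like(img)
for dy in (-1, 0, 1):
    for dx in (-1, 0, 1):
        direct += kernel2d[dy + 1, dx + 1] * np.roll(np.roll(img, dy, axis=0), dx, axis=1)
```

For a kernel of width k, the separable version needs 2k samples per pixel instead of k², which is why the blur is split into two layers.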
The state of each neuron corresponds to an RGB color triplet. Every neuron of the Flow layer gets input from corresponding neurons inside a local neighborhood of the input layers. Each of those input samples corresponds to a single synapse. The vector from the center of the neuron towards the input neuron is referred to as the synapse vector.
To achieve some kind of color dynamics, colors that are close in color space are supposed to attract each other.
The distance between synapse input and the neuron state in RGB color-space, serves as a weight, which is used to scale the synapse vector. The sum of scaled synapse vectors results in a single color attraction vector.
While color attraction is the result of color similarities or differences in space, color flow is the result of color changes over time. Rather than calculating the distance of the neuron state to a single synapse input, its temporal derivative is calculated using input from a neuron and its corresponding memory neuron. This time the sum of scaled synapse vectors results in a flow vector.
Both color flow and color attraction vectors are added up and their components are encoded in the flow layer.
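The per-neuron rule described above can be sketched in Python (an illustration of the principle, not the GLSL shader code of the installation):

```python
import numpy as np

def attraction_vector(state, patch):
    """Colour attraction for a single neuron: each neighbour's
    offset (the 'synapse vector') is scaled by the colour-space
    distance between that neighbour and the neuron's own state,
    and the scaled offsets are summed."""
    r = patch.shape[0] // 2
    vec = np.zeros(2)
    for y in range(patch.shape[0]):
        for x in range(patch.shape[1]):
            syn = np.array([x - r, y - r], dtype=float)
            weight = np.linalg.norm(patch[y, x] - state)
            vec += weight * syn
    return vec

def flow_vector(patch_now, patch_before):
    """Colour flow: the same sum, but each synapse vector is
    weighted by the temporal change of its neighbour instead."""
    r = patch_now.shape[0] // 2
    vec = np.zeros(2)
    for y in range(patch_now.shape[0]):
        for x in range(patch_now.shape[1]):
            syn = np.array([x - r, y - r], dtype=float)
            weight = np.linalg.norm(patch_now[y, x] - patch_before[y, x])
            vec += weight * syn
    return vec

# A 3x3 RGB neighbourhood whose right column differs from the
# neuron state pulls the attraction vector to the right.
state = np.zeros(3)
patch = np.zeros((3, 3, 3))
patch[:, 2, :] = 1.0
a = attraction_vector(state, patch)
f = flow_vector(patch, np.zeros((3, 3, 3)))
```

In the installation, both resulting vectors are added and encoded into the Flow layer, per pixel, in parallel on the GPU.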
There are various parameters in each layer controlling the amount and direction of color attraction and color flow, the metrics used for calculating color distances, the neuron neighborhood, etc.
All neural computation is performed on the GPU using OpenGL and GLSL shaders, mapping the neural metaphors described above onto an OpenGL implementation.
In our implementation both color flow and attraction are integrated into a single-level flow map. While this generates interesting local interactions, there is little organization on a global level. The work on Multilevel Turing Patterns popularized by Jonathan McCabe shows that it is possible to obtain complex and visually interesting self-organizing patterns without any kind of video input.
Our future research will combine several layers of flow maps, each operating on a different level of detail. Additional directions include alternate color spaces and distance metrics. In the current model input values are mixed and blurred, resulting in a loss of information over time. We have also been experimenting with entropy-conserving models and are planning to further investigate this direction.
Since the 1990s, many devices and machines have found their place between artists and their audiences. Communication happens in a controlled, planned, downsized way – in short: posthuman. In doing so we accept that every interface deflects attention from our senses in favour of technical devices and data.
An experiment shows what happens when the body is given back its place as the ultimate instance of evaluation and the natural senses of humans and other living beings are brought into the center of consideration.
The video was produced for the annual meeting of the German association of media sciences at Leuphana University Lüneburg, 2013.