Digitising Contemporary Art
An Interview with Paweł Janicki by Agnieszka Kubicka-Dzieduszycka

[AKD] When you got involved with the Moving Stories project – which is (as the title indicates) closely tied to the context of film – your inspiration came from a completely different direction: sea voyages.

[PJ] That came about as a result of the software I was using. That’s often the way it works for me – the technology determines the content. When I got the invitation to propose an idea for the Moving Stories project, I realized I had tools that I could use to arrange or construct a story. That technology had taken shape over several months, while I was working on various other projects. Conceptually it’s a database whose contents you can arrange relationally – defining dependencies among the contents. In technical terms it’s software, which is one of my customary modes of expression.

Another thing was that up until then I’d never done any narrative stuff. But I’d begun to get interested in it. Mainly because I simply felt like creating a work that tells a story, but also because I was getting fed up with some of the technology I was working with. Mainly I mean software for creating electronic music, but also for composing in other media – different kinds of object systems, which for me are like the “PowerPoint” of media art. It had really started to bother me, because using that kind of thing means that in the finished work the technology itself is more evident than any of the artist’s ideas. So I had started writing my own program, which I called The Map, which combines features of both object systems and linear systems. And linear media-processing systems have always been used to create narratives …

[AKD] Yeah, linearity sounds like a timeline in an editing program, or a film or video and the sequence of events on it …

[PJ] Right. And that’s the first part of the story. The second is connected with the fact that when I started thinking of The Map in terms of building a narrative, I decided – in order to avoid a total reversion to linearity, which wasn’t what I had in mind – to look for a story that could be modular, not just linear. And I found that a lot of maritime literature can be treated that way. If the story involves the heroes sailing from island to island, the voyage just naturally breaks up into modules that can be arranged in an alternative, nonlinear way.

[AKD] And at the same time a sea voyage practically has to involve unexpected events that can disrupt the linearity.

[PJ] There were also lots of other things that were hugely inspiring – maritime maps, for example, which are full of really interesting visual ideas. And in the visual layer of the work, right from the start I was anxious to avoid creating anything too literal – scenes with actors in them, for example. I just felt that wouldn’t work (I even did some test shots, but that just convinced me I was right). Finally I filmed a collection of visual samples which I then put through major alterations – much as in music sampling, which I do a lot of: If you listen to any single sample outside of the context of the arrangement, it tells you nothing about the whole composition – it can even mislead you totally. And the processing of the video material is really powerful in that same way, so what we finally end up with has a non-literal quality, but it still forms a story.

[AKD] What was the source of the video images?

[PJ] All sorts of things – some of them I don’t want to talk about! But basically I filmed everything I could. I taped a lot of stuff at home, for example, but in some kind of unusual way – an offbeat perspective, or using wide-angle optics, so the original images are hard to recognize.

[AKD] I was thinking about how much sea voyages have in common with media art. The urge to set off into the unknown, beyond the horizon, is like an analogy for media art’s search for innovation. In both fields, you’re looking for something new and you have to be ready for anything – and that’s the essence of experimentation. And it’s the same with the instruments – in sailing and in media art, you have to know how to use them. Early navigational instruments were based on observations of nature; later they were improved and got more complex, but the progress wasn’t so much a process of replacing the old equipment – it built on what already existed. Just like in media art. So we’ve got all these analogies, even on the most basic levels.

[PJ] Those analogies definitely exist – sometimes they’re deeply intuitive. It’s hard to say where they come from – maybe from our unconscious yearnings. For example, one of the first Internet browsers was called the Netscape Navigator – navigation again! And now a really popular function built into MP3 players, tablet PCs and smart phones is a compass. It’s really hard to say why – after all, in cities and “civilized” areas we don’t need compasses, but there have been whole advertising campaigns for these gadgets based on the fact that they include built-in compasses. I wish my own phone had a compass – but I don’t know why.

While working on Oceanus I realized that Ping Melody, which is in a way my “flagship piece” – oh look, another sea metaphor! – was also inspired by a navigational device: sonar. (The network command “ping” works in a similar way, by sending a data packet that’s “echoed” back.) Sonar was also part of the conceptual and visual layers of the installation Generator III (1) that Lech Twardowski and I created – radar as well, which was originally invented for maritime navigation. So seafaring clearly provides us with a lot of interesting metaphors and technology, and also gets us thinking in ways that relate to our needs and feelings. It’s a lot of fun to get into it.
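The echo principle behind the `ping` command that inspired Ping Melody can be sketched in a few lines. Real ping uses ICMP echo requests, which need raw sockets (usually root-only), so this demo – my own illustrative substitute, not code from the work – uses a UDP loopback echo instead: send a packet, wait for it to bounce back, time the round trip, just like sonar.

```python
import socket
import threading
import time

def echo_once(sock):
    """Receive one packet and bounce it straight back to the sender."""
    data, addr = sock.recvfrom(1024)
    sock.sendto(data, addr)

# "Sonar" side: a loopback server that echoes whatever it hears.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))        # bind to any free port
port = server.getsockname()[1]
threading.Thread(target=echo_once, args=(server,), daemon=True).start()

# "Ping" side: send a packet and time how long the echo takes.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
start = time.perf_counter()
client.sendto(b"ping", ("127.0.0.1", port))
echo, _ = client.recvfrom(1024)
rtt_ms = (time.perf_counter() - start) * 1000
print(f"echo={echo!r} rtt={rtt_ms:.3f} ms")
```

The round-trip time is the raw material Ping Melody sonifies: the longer the echo takes, the more "distant" the network object appears.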

[AKD] The aim of the Moving Stories project was to look at new narrative possibilities using moving images in contemporary art. Oceanus is the only work out of the six pieces in the project in which “moving images” are only a part of the whole. Your work is a complex, multilayered installation – a physical presence.

[PJ] That’s really atypical for me. Usually I work on nonmaterial interfaces – and this time it’s in the form of a table – a map table, to be exact.

[AKD] Oceanus very clearly processes the original images; at the same time it avoids linearity and introduces “cinematic” elements, forming a complex media structure. How do the visual, audio and text elements co-exist in this hybrid?

[PJ] The narrative structure isn’t really connected with its multimediality – a similar structure could be based on just one medium. But it’s very firmly rooted in literature: The Voyages of St. Brendan the Abbot [Navigatio Sancti Brendani Abbatis], a mysterious anonymous text that’s usually dated around the 9th or 10th century. As a story line it was the prototype for lots of other works describing “maritime fantasies”, shall we say. It’s got the whole gamut of creatures, beings and situations that you find in this type of story: sirens, islands that turn out to be whales or “giant fish”, and assorted other sea monsters. Of course in The Voyages of St. Brendan these motifs are borrowed from earlier writings, like the Odyssey; this kind of literature has a natural tendency toward compilation. A whole slew of fantastic events is packed into a few dozen episodes, but the cause-and-effect relations among them aren’t at all evident. The fact that Event B follows Event A isn’t always due to something in the contents of A and B – why, for example, should a whale necessarily follow a siren?

[AKD] Especially when the second episode isn’t a narrative consequence of the first, but a self-contained unit.

[PJ] In simplified terms the link between episodes is almost always just an escape from one strange place to another, so the cause and effect relations can be pretty loose. That means we can create fairly independent episodes, and alternatives to the classic story lines – thousands or hundreds of thousands of story lines. Some of them will be nonsensical, but at least a few hundred of them will be coherent. And if we also include the stylistic figures used in interactive narratives, like various forms of loops, then the number of possible story lines is even greater. It’s an extremely open structure.
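The combinatorics of this modular structure can be sketched in a few lines of Python. The episode names and the single "escape elsewhere" rule below are my own hypothetical stand-ins for the Brendan motifs, not Oceanus’s actual data or logic:

```python
import itertools
import random

# Hypothetical self-contained episode modules in the Brendan spirit.
episodes = ["siren", "whale-island", "sea monster", "storm", "hermit's isle"]

# With loose cause-and-effect, any ordering of the modules is a
# candidate story line: n! linear arrangements, before loops are added.
n_lines = len(list(itertools.permutations(episodes)))
print(n_lines)  # 5 modules already yield 120 distinct linear story lines

def voyage(length=7, seed=None):
    """Random walk through the modules; revisits act as simple loops.

    The only 'causal' rule is the one the interview describes:
    each episode is an escape to some *other* strange place.
    """
    rng = random.Random(seed)
    line = [rng.choice(episodes)]
    while len(line) < length:
        nxt = rng.choice(episodes)
        if nxt != line[-1]:
            line.append(nxt)
    return line

print(" -> ".join(voyage(seed=1)))
```

Once loops and revisits are allowed, the count of possible voyages grows exponentially with length, which matches the "thousands or hundreds of thousands of story lines" described above.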

If you work with your own software, which is what I do, you can have different media functioning in one framework, using the same algorithm. The processing works a bit differently, but – except for a couple of technical details – the structuring is very similar for each of the media. It doesn’t matter what form a given piece of information takes. I’m interested in things that are simultaneously interactive and generative, and those are the kinds of structures I used in Oceanus. The generative parts come from the fact that certain processes we encounter were initiated by people who used the interface before us, and they affect our own choices. So Oceanus is a generative-interactive work.
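One minimal way to model that generative-interactive coupling – a toy sketch under my own assumptions, not Janicki’s code – is shared state that earlier visitors’ choices leave behind, biasing what the system generates for later visitors:

```python
import random

# Shared state: each episode starts with equal weight.
weights = {"siren": 1.0, "whale-island": 1.0, "storm": 1.0}

def visitor_chooses(episode):
    """Interactive side: a visitor's choice leaves a trace in shared state."""
    weights[episode] += 0.5

def generate(rng):
    """Generative side: draw an episode, biased by accumulated traces."""
    eps, w = zip(*weights.items())
    return rng.choices(eps, weights=w, k=1)[0]

# Three earlier visitors favour the storm episode...
for _ in range(3):
    visitor_chooses("storm")

# ...so the system's generated flow now drifts toward "storm".
rng = random.Random(0)
picks = [generate(rng) for _ in range(1000)]
print(picks.count("storm") / 1000)
```

The point of the design is that neither side is self-sufficient: interaction only matters because it reshapes the generative process, and generation only varies because of past interaction.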


[AKD] I’m interested in The Map program and how you created it. Even the term “writing software” alludes to literature – but when you write software you’re writing something that allows you to structure very different media and to transpose a narrative into a completely different space. In order to use a piece of literature, you had to write another text, one that records totally different “voyages”.

[PJ] Code is a text that determines the physics of the whole situation – it’s a kind of environment where processes we initiate are carried out. Besides creating my own works, I also write code for other artists’ projects. That’s what I mainly do at the WRO Art Center – it’s a bit difficult to define but you could call my job “techno-curatorial”: it combines purely technical work with discussions of the creative concept of a given piece. Even though every project is completely different, sometimes you have very similar experiences working on them. So I thought that instead of an initial discussion of what communication strategies, interfaces and technology a given work needs, it would be interesting to offer artists a more open system that lets them do something on their own. That didn’t work at all – you just can’t skip that much technological training – but when I still believed it could work, I started creating a system like that. And The Map is a result of my experience working on several projects – especially Dawid Marcinkowski’s Sufferrosa and Infiltron by Alicja Żebrowska and Jacek Lichoń – and my own ideas. The Map is a pretty universal software tool that I decided to use independently, and it’s surprised me more than once.

Internally The Map is pretty complicated. Writing a program like that alone is hard – teams usually work on that kind of thing. So I used a programming language called ActionScript – it’s part of the Adobe suite, but there’s also an open source form of it for programmers. The cool thing about ActionScript is that it includes a lot of high-level components – mostly elements of the user interface – that in other languages you’d have to write from scratch. It’s good for quick application prototyping. And while I was working on Oceanus, a lot of touchscreen devices came out – so Oceanus could function as an application on those. ActionScript was supposed to be implemented in those new devices – but it turned out it wasn’t, so I’ll have to think about what language I use to write the next versions. Writing The Map was like sailing on a sea of possibilities, complete with the risk of running aground on a reef.

[AKD] And what about the audio? Did you work on that in your usual way, or look for something new?

[PJ] I looked for new directions, although some older concepts just naturally came into it, like with the video processing. But I decided not to use any sampling, which I usually do. At first I was thinking about some kind of collection of sequences, but in the end I created a sound bank from which I generate different structures. I wanted to get rid of sequences and that whole “stiffness” that goes with them. That’s the way the video is processed too – it stops being sequential. The Oceanus sound track also includes synthesized voices, not samples or recordings. It was really important to me to have the original Latin in the sound track as well as in the textual layer of the work, so I decided to use synthesized Latin singing. So even though you don’t need to know Latin in order to understand the work, it’ll be cool for people who do know Latin.


[AKD] You deal with interactivity, which is pretty complex. Oceanus has two screens: one (the table) is a touchscreen for multiple users, and the other is a vertical screen showing the results of decisions made on the map table. So several people actively using the interface have to cooperate in some way. So whose voyage are we in fact watching?

[PJ] I’m not interested in installations for just one person. I’ve always gotten into interactions among a number of people, and that’s taken me lots of different places. In 2001 when Anna Płotnicka and I did Performance on Demand (2) during the WRO 01 Media Art Biennale it was a really extreme situation: The interaction was between the performer and a few thousand people. Another form of this kind of thing is the Interactive Playground (3) – an exhibition mainly for kids. From the start we wanted to create a space where several participants would interact with each other, instead of just one person at a time interacting with the technology. During the Mediation/Medialization conference (4) that WRO organized in 2000, Jaron Lanier expressed this idea really clearly, saying that digital technology can be regarded as a tunnel between the people who use it. It’s not interaction between a person and a machine – it’s interaction between two people. In some ways the interaction is mediated by technology, which can enhance the communication value of the situation, or just make it possible in the first place.

In group interaction of course there’s always the question of the meanings that manage to come through. Like I said, Performance on Demand was a really extreme situation in that sense. The number of messages that can be received and understood within a given timespan is limited, and if we go over that number, what comes through becomes a question of statistics. As the interface for Oceanus the table has the advantage that a limited number of people can be at it at one time. I can imagine a few people navigating Oceanus achieving a level of understanding that lets them construct a narrative together – I’d really like that.

[AKD] Let’s go back to the function of the big vertical screen.

[PJ] I got the idea for that kind of interactive/passive model for constructing a narrative while I was working on Performance on Demand. In that project, along with the possibility of participating in the artist’s activity in person and on line, the audience could also watch a transmission of it all on a big screen in the screening room next door – like a film being edited live. Oceanus is the first time I’ve had a chance to go back to that: The map table with the touchscreen governs the narrative structure and movement from one episode to another, while the big screen above it presents that structure as a timeline – travel back and forth along the temporal axis. The analogy to the cinema is appropriate here too: The big screen lets us follow a certain story line – but if we don’t like it, in Oceanus we can use the table to change it.

But it’s also important to bear in mind that there are different reactions to this kind of narrative. Sometimes we feel like sailing toward what we see, and sometimes we don’t. A lot of interactive projects are less successful than they could be because they don’t accept passive reception – they get pushy, insisting on interaction. Accepting both models of receptiveness creates great possibilities. That’s true of interactive cinema as well: Most ideas for interactive cinema just don’t work, because the cinematic model of passive reception can’t cope with interaction. At the same time interactivity has problems with narrativity – the two can’t coexist without conflict, despite tons of concepts and technologies that have been invented to try to make interactive narratives work. Computer games are an exception – most of them are definitely both narrative and interactive. But I really didn’t want to make a computer game. I wanted to create a situation where two modes of perception – interactive and passive – not only coexist, but are actually interdependent. To achieve a linear “result” we have to do something with the interactive structure, and at the same time the interactive structure leads to linearity.

Translation from Polish: Sherill Howard-Pociecha
