Adam Nash is a new media artist, composer, programmer, performer and writer. He works primarily in networked real-time 3D spaces, exploring them as live audiovisual performance spaces. His sound/composition and performance background strongly informs his approach to creating works for virtual environments, embracing sound, time and the user as elements equal in importance to vision. Adam’s work has been presented in galleries, festivals and online in Australia, Europe, Asia and the Americas, including SIGGRAPH, ISEA, and the Venice Biennale. He also works as composer and sound artist with “Company in Space” (AU) and “Igloo” (UK), exploring the integration of motion capture into real-time 3D audiovisual spaces. He is currently undertaking a Master of Arts by Research at the “Centre for Animation and Interactive Media” at RMIT University, Melbourne, researching multi-user 3D cyberspace as a live performance medium; and he’s a Lecturer in “Computer Games and Digital Art” in the School of Creative Media at RMIT University.
You will need to download the free Second Life client to access Adam’s work in Second Life. Or you can see video documentation of some of his works. URLs can be found at the end of this interview.
Adam will be answering readers’ questions in the comments section below until January 31, 2008.
Helen Thorington: I understand that you do not think of yourself as a sound artist in Second Life. I wonder if you would explain why?
Adam Nash: I think of a realtime 3D multi-user environment (3D MUVE), like Second Life, as a post-convergent medium. This means that no single media element (sound, vision, sociality, network, time, etc.) takes precedence; rather, they all exist equally in a symbiotic relationship, without which none of them could exist.
[Image: Unsung Song #16: Blue Sound Ground]
Helen: Do you have any musical training? Do you play any musical instruments? Does this help or hinder your explorations?
Adam: I don’t have any formal musical training, but I do play a few instruments badly, chiefly the drums and keyboards. I have many years’ experience playing in bands and making music for soundtracks and performances. I also have quite a lot of experience as a live performer in performance art, dance and movement. Like all experience, it both helps and hinders my explorations in 3D MUVEs. While I am able to build and expand upon musical performance techniques, I assume that the same experience severely hampers my ability to see potential in a new environment. I really love music, but I think new environments like this reveal music as an outdated concept. I still think music is useful – indeed I release a lot of my own music under a Creative Commons license via my net-label at www.concentrated-sound.net – but anachronistic. I was first drawn to realtime 3D back in 1997, when I first encountered VRML, and it struck me as a very similar environment to the inside of my own head when I was creating music for performances. It is a spatial environment in which sounds can be animated in a way that is easy to visualize but impossible to achieve in the physical world. It is a logical next step to see the environment as the performance environment as well as the composition environment, and from there quickly grow the concepts that I explore in 3D MUVEs: basically, audiovisual environments that users navigate within to create their own unique experience from the elements provided by me. It’s like the composer’s mind, the instruments and the venue all rolled into one.
Helen: Tell us about composing sound for Second Life. You have called it a “technically very limited and frustrating environment.” What are the limitations and frustrations? Are there redeeming features?
Adam: Composing sound for Second Life, or any 3D MUVE, is fun, because of this ability to provide the basic audiovisual elements and then leave the user to arrange (ie, navigate) the elements as they please. This is an extremely exciting and satisfying way of working, because it removes the need for arrangement – a skill, different from composition, that is absolutely crucial in linear music. There’s nothing wrong with arrangement (often in linear music it is the thing that turns something good into something great), but often there are an unlimited number of potential ways of arranging a piece of music and the musician is forced to choose only one.
Also, with this idea of the melding of the composition environment and performance environment, the act of creating work is often enormously enjoyable because you get to fly around and through your ideas, trying out different ways of navigation that you may never have realized were possible when conceiving of the piece. It’s like a slightly more concrete iteration of the limitless imagination scape in which all these ideas are found.
The technical limitations of Second Life are significant and many. The main limitations, for me, are the lack of a proper modeling hierarchy, and a few things to do with sound, like the 10-second limit per file and lack of control over falloff. There is also an undocumented limit to the number of simultaneous sounds that can be played. On the other hand, there are a lot of positives about working within limitations, as the artist is forced to be creative and come up with novel solutions. It also means many formal decisions are made prior to starting work, which in some ways makes things easier. Like most things, it is both blessing and curse.
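The 10-second-per-file limit Adam mentions is typically worked around by slicing longer audio into sub-10-second clips before upload and re-triggering them in sequence in-world. A minimal offline sketch of that slicing step, using Python's standard `wave` module (the function name and output naming scheme are illustrative, not part of any Second Life tool):

```python
import wave

def split_wav(path, max_seconds=10.0):
    """Split a PCM WAV file into consecutive chunks no longer than
    max_seconds, to fit a per-file clip limit like Second Life's.
    Returns the number of chunks written."""
    with wave.open(path, "rb") as src:
        params = src.getparams()
        frames_per_chunk = int(params.framerate * max_seconds)
        index = 0
        while True:
            frames = src.readframes(frames_per_chunk)
            if not frames:
                break
            # e.g. "tone.wav" -> "tone_000.wav", "tone_001.wav", ...
            out_path = f"{path.rsplit('.', 1)[0]}_{index:03d}.wav"
            with wave.open(out_path, "wb") as dst:
                dst.setparams(params)  # header is patched on close
                dst.writeframes(frames)
            index += 1
    return index
```

In-world, the resulting clips would then be chained by a script, which is where the undocumented simultaneous-sound limit Adam describes starts to bite.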
[Image: Unsung Song 2: Crescent]
Helen: Avatars play an important role in your work by activating the sound. And yet you have “core problems” with them. “The avatar concept”, you say in July’s empyre discussion “is the one I find the most troubling, and it also grows from the 3d-space-as-physical-simulation misassumption. There is no need to concentrate presence into one cohesive point (an avatar).” I wonder if you would explain what you mean by this, and perhaps suggest alternatives.
Adam: Well, if avatars play an important role in my work, it’s because they play a very important role in Second Life itself. The problems I refer to are both technical and conceptual. First, the analogy of a single point of presence, from which the rest of the world is perceived, and in which the rest of the world perceives you, arises directly from our physical world, where our sensory organs are coalesced in a single unit and cannot be separated. Recently, humans have been able to spread out perception and presence through technological mediation, for example cameras, telephones, radio and the internet, and I think we are certainly slowly moving away from the concept of a single point of perception and presence, but mostly it is still how we negotiate our physical existence.
But, it is a very underexamined concept in realtime 3D, and particularly in Second Life. This is true of the entire physical world analogy that controls the working concept of Second Life. Even though it may seem natural to use 3D space to recreate physical space, that is only one possibility, and certainly not the easiest, because it can never recreate physical space, only represent it. Once we move into the sphere of representation, different modes of perception are required (one never actually walks on a map).
Because the system to which our bodies are subject (ie, physical space) is now being represented, we need also to represent our bodies, not recreate them, otherwise things quickly get confusing and the representation becomes limited in usefulness. This happens as soon as we move our ‘camera’ away from our avatar – we are no longer seeing and hearing via our avatar’s eyes and ears, rather we are perceiving from whatever point in the 3D space that our ‘camera’ is at. Yet, within this synthetic space it is perfectly feasible that we could perceive from both the position of the camera and the position of our avatar. This is not difficult or unusual, in fact we are already doing it twice simply by having a default avatar in Second Life. The first, significantly, is the physical/virtual superposition, where my physical body is seeing and hearing my avatar see and hear – already I have two points of perception (literally and conceptually). Then there is the ‘over the shoulder’ point of view that SL avatars default to, behind and above the head of your own avatar, really a camera that is following your avatar. It is seeing and hearing your avatar see and hear. So now I am seeing and hearing my camera seeing and hearing my avatar seeing and hearing. I am simultaneously perceiving from three different points, literally and conceptually. I think this is one of the reasons so many people feel so disoriented when first encountering realtime 3D space.
Since it is possible, indeed common, to perceive from two or three points, then it’s a small step to expand the number of points of perception arbitrarily, both in space and in time (lag and multiple private chats are both examples of multiple points of perception in the temporal dimension that all SL users are comfortable with).
Practicing the agency of presence via multiple points perhaps seems a more subtle or difficult concept, but again SL users constantly deal with others via multiple points of presence. For example, most users quickly become comfortable with the idea that another user may not be seeing and hearing the scene from their avatar, or that they may be simultaneously dealing with the physical world and the synthetic world and the mediation device itself. Indeed, SL specifically acknowledges this via the device of having the avatar’s eyes and head follow the user’s mouse pointer when dealing with the user interface. This means that others’ avatars are, variously, a presence notifier (the person is logged in), a mouse, a representation, none of these things, all of these things and potentially many more things besides.
[Image: Unsung Song #9:Corona]
Helen: I can fly alone through your installations and activate sounds. I can get friends to move through them with me and produce different sounds. I can play with the work and it changes. Isn’t it in fact important for your work to have the avatars’ presence concentrated in one space?
Adam: In that sense, the avatar is serving the standard function of a mouse pointer for 3D space. Again, this is mainly because of the restrictive working analogy of Second Life itself, which enforces this role for the avatar, and it’s true that some of my works are a specific comment on, and working within, that restriction. But, it is not necessary for the user’s avatar to be concentrated in one space. Ideally, for many of the works, the user would be able to branch off avatars and move spatially through works in different ways simultaneously. Similarly for time. Or, to be able to interact with different works simultaneously in space and time.
Certainly, I consider all the pieces in, say, Seventeen Unsung Songs to be parts and aspects of the same work, quite literally. Sonically, they are all constructed from the same rational scale that I devised, based on a fundamental tone of 77Hz then proceeding in intervals of ratios over 7. All of the pieces use this scale, and one of the pieces (Blue Sound Ground, which users pass through at the entrance) contains all of the sounds used in all the other pieces, both as a conceptual readying and also a technical device to load as many sounds into the user’s cache as possible. Visually, also, all the pieces are clearly very strongly related, sharing colours and methods of distributing colour across hue, saturation and opacity spectra. It would be ideal if they could be experienced in multiple modes over space and time.
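Adam's description is precise enough to sketch numerically: a 77 Hz fundamental with intervals built from ratios over 7. The exact set of ratios he used is not given in the interview, so the simple n/7 series below is an assumption for illustration only:

```python
# Hypothetical sketch of a "ratios over 7" scale: the interview gives a
# 77 Hz fundamental and intervals of ratios over 7, but not the exact
# ratios, so n/7 for n = 7, 8, 9, ... is assumed here.
FUNDAMENTAL_HZ = 77.0

def ratios_over_7_scale(fundamental=FUNDAMENTAL_HZ, steps=8):
    """Return frequencies fundamental * n/7 for n = 7, 8, ..."""
    return [fundamental * n / 7 for n in range(7, 7 + steps)]

print(ratios_over_7_scale())
# -> [77.0, 88.0, 99.0, 110.0, 121.0, 132.0, 143.0, 154.0]
```

Because 77 = 7 × 11, each n/7 step lands on a whole 11 Hz increment, which hints at why 77 Hz makes a tidy fundamental for a 7-based rational scale.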
[Image: Anahata,The Mute Swan]
Helen: Have you considered what kind of work you might produce if in fact presence were not concentrated in one point? If presence were distributed over time, location, data and media?
Adam: I think it implies a more involved work, a work where the user experience becomes extremely important to the work. The extent of the user’s interaction across multiple points largely determines how the work develops and emerges. Works could take dynamic notions much further. For example, currently we can trigger a certain sound or animation based on sensed data about an avatar’s position and other metrics – this could be expanded to include many different aspects of the nature of the user’s engagement with the work. It suggests work that exists across environments, building on gameplay techniques to build a performative and experiential vocabulary cooperatively between artist and user. This is tremendously exciting and suggests a kind of work that could accompany users through time and space, growing and changing together. This kind of thing would start to approach the mechanics of true non-linear interactivity.
Helen: It seems to me that your work adds new parameters to sound/musical composition. In most of the networked musical pieces I’ve heard or seen described, this has not been true. Music remains music, separate or separable from other things, like the space in which it is played and its audience. And while I find this very difficult to talk about, what you introduce has to do with audience immersion and presence in the space; and audience activation of the work as a result. Thinking of the participant, I think of words like “experiential” (experiencing through the movement of my avatar-body as it explores the space you have created), the bringing into existence of music/sound. Thinking from the point of view of the music/sound, it’s not like filling a space with pre-determined sound (as so many of us have done in RL), but rather creating a dimensional space with potential… And that the two constitute a unique approach to creating and experiencing music.
I’m reminded of NOX Son-O-House, a public pavilion that is both an architectural and a sound installation that allows people to not just hear sound in a musical structure, but also to participate in the composition of the sound. It is an instrument, score and studio at the same time. A sound work, made by composer Edwin van der Heide, it is continuously generating new sound patterns activated by sensors picking up actual movements of visitors.
Is this similar to the work you’re doing?
Adam: Oh, well, I certainly hope so. I’m not familiar with that work, but it sounds very similar conceptually to the process I touched on earlier, where the compositional environment, the performative environment and the experiential environment converge, and the resulting symbiotic relationship reverberates back and forward throughout the previously distinct stages, merging them into a new, post-convergent environment of interactive, emergent, audiovisual experience.
Helen: Given the desire for multiple avatars to simultaneously/collectively activate your installations, how do you reconcile the absence of avatars or the single avatar interacting with the piece with your intentions?
Adam: I’m not sure I fully understand this question, but most of my pieces can be experienced at multiple levels in terms of number of avatars, length of time spent, familiarity with 3D space, etc. Again, this is related to my desire for an approach to the medium that is not tied to a physical world analogy of a single person with a single body. Even though SL is a multi-user space, it doesn’t preclude single users, and this is true of my work too, I hope. Some works are probably more satisfying aurally when used with other people (eg, Rarer Air), but other works are designed for individuals to interact with different elements of the SL experience, besides the social, in which case the number of avatars using it doesn’t really matter too much (eg, The Space Between). Yet others are unaffected by the number of avatars accessing them (eg, Appolinarium). I really try to explore many different aspects of the realtime 3D MUVE environment in all my different works, so it’s difficult to align all the work with an overriding desire on my part.
From: Bell Garden
Helen: Have you created sound installations in other virtual worlds? If yes, can you talk about the similarities and differences, pros and cons?
Adam: Again, I really don’t think of them specifically as sound installations, but yes I have worked in many different virtual worlds/environments over the past 10 years or so, including VRML/X3D, ActiveWorlds, Blaxxun/Contact, Unreal, Torque, Quest 3D, Multiverse and even GEM in Pure Data. Differences are mainly technical, with VRML/X3D being by far the freest and most able to accommodate large scale, unrestricted concepts. In practice, it’s always had some problems dealing with lots and lots of simultaneous sounds, but I think that Niall Moody has solved that with his Heilan browser, though I haven’t had a chance to use it – I’d like to, but SL has got the mindshare at the moment, so that’s where curators want you to work. It’s a shame VRML/X3D never gained wide acceptance in the media arts community. As for the other environments I mentioned, they’re all commercial products to greater or lesser extents, except for Pure Data, so they all have significant technical restrictions that arise as a function of the commercial aims. Multiverse looks interesting in terms of extensibility and freedom, but again I haven’t had a real chance to properly check it out. I’m trying to at the moment with my colleague John McCormick, but again we’ve been commissioned to do a mixed reality piece using Second Life, so that takes up most of our time. Pure Data (known as pd) is the opposite: it’s open source and specifically designed for audio. With the GEM library in pd you can use OpenGL to create responsive 3D environments, and John and I have been working with that a little, with promising results. Most of these environments have things that they do better than others and things they do worse. SL does a lot of things poorly and a few things well, with its popularity being its chief advantage at the moment.
From: A Rose Heard at Dusk
Helen: You refer to your SL pieces as “audiovisual sculpture” and “site-specific installations.” Can you talk about the difference, and what makes Seventeen Unsung Songs site-specific, but not A Rose Heard at Dusk?
[Image: A Rose Heard at Dusk]
Adam: I guess “audiovisual sculpture” refers to all my work in 3D environments, whereas something like Seventeen Unsung Songs is a collection of inter-related audiovisual sculptures that were commissioned by Sugar Seville specifically for an island that already existed, therefore it is “site-specific”. It wouldn’t be possible to recreate Seventeen Unsung Songs in its entirety without having an island that was very similar to East of Odyssey, but it would of course be possible to install individual pieces from within that show in different places.
Helen: What do virtual worlds offer you as an artist that real world spaces don’t?
Adam: To me, this comes back to my concept of the post-convergent medium. The physics of realworld spaces make it impossible to attempt such things as continuous realtime dynamic animation of arbitrary numbers of sound and vision sources based on continuous realtime sensing of presence and other metrics. However, the comparison still considers the primary role of virtual spaces to be a recreation of physical space, which is not what I think. What art I have attempted in real world spaces has always been primarily performative and very different from virtual work. I guess there was a point of crossover when I was still working with The Men Who Knew Too Much and looking to combine real world and virtual art, but since 2002 any work I’ve done that involves so-called mixed reality has chiefly been in the service of others like Igloo, and then I tend to do the music/sound and some performance. I don’t see virtual spaces as a separate reality; I very much see virtual space as wholly contained within the real world.
Helen: We’re seeing more and more artists combining sound/music and moving images/video, referring to themselves as a/v artists and VJs. Why do you think this is?
Adam: I guess it’s a natural progression from a past that had discrete partitions between all sorts of experience, as a result of both technical and conceptual limitations. As media starts to converge, and access to both the means of production and means of distribution becomes easier, it becomes more viable technically to enact the kind of concepts that naturally emerge. In particular, two generations of music video and clubbing combine with more meme-like concepts of emergence and networks to create a desire to operate across a range of media. Most people’s media vocabulary is of a sufficient level of sophistication that practitioners are driven to explore new modes of expression to engage meaningfully with an audience.
Helen: Are there any other artists working in the same vein as you?
Adam: There are plenty of really interesting artists operating in Second Life, many of whom share aspects of exploration and practice with each other, myself included. Some who come to mind are Gazira Babeli, Annabeth Robinson/AngryBeth Shortbread, Christopher Dodds/Mashup Islander, Bingo Onomatapoeia and the Avatar Orchestra Metaverse, DC Spensley/Dancoyote Antonelli, Brad Kligerman, Juria Yoshikawa, Keystone Bouchard, Daruma Picnic, Christine Webster/Wildo Hofmann and Andrew Burrell/Nonnatus Korhonen. That’s just a short list; there are lots of people doing lots of interesting work all over Second Life.
Helen: Who are some of the artists you most admire?
Adam: John Power, John McCormick, Bruno Martelli and Ruth Gibson (Igloo), Bruce Mowson, Melinda Rackham, George Clinton, Prince, Greg Egan, Yoko Ono, Morton Feldman, Brian Eno, Mark Rothko, Laszlo Moholy-Nagy. There are so many artists whose work I really appreciate, but those are the ones I genuinely admire.
Helen: Do you have predictions for sound art trends, developing technologies, the 3-D web? Have you any thoughts on what the future impact of immersion/presence might be? Do you think it might make “play” and “fun” more important to our lives?
Adam: I think we’re entering the post-convergent era, where distinctions between sound, vision and other media elements will cease to be meaningful. I definitely think play and fun will become more important as 3D environments grow in acceptance, alongside the growth of computer games as a medium. I certainly think that games, in the broadest sense, are the artistic medium of this century. Simulation and modeling will be of enormous importance to society and we will learn a lot from artists and practitioners of games and virtual worlds, and vice versa. The distinction between real world and virtual world will cease to be meaningful. We’ll see a convergence of networked experience via 3D, something like a 3D web but much deeper and more enjoyable than that phrase suggests. I definitely think we’ll see a move beyond the use of 3D space as just for representing physical spaces. The multiple points of perception and presence that we’ve already talked about will grow in acceptance and utility, along with an expectation that art will manipulate this.
Helen: Thank you, Adam, for this great interview.
Visit the following URLs for more information on Adam’s work: