Here is a little fragment from wandering around the streets of Marrickville last Friday night.
Updates from April, 2010
I’m organising a performance night for the Golden Eyes festival next Tuesday night with some fantastic performers; come along if you’re in Sydney town. See the blurb below for details.
Golden Live – A performance night for the Golden Eyes Festival
8pm Tuesday the 20th of October 2009. Bon Marche Studio. UTS
The bi-annual Golden Eyes Festival of Media Arts has been held at UTS for over 20 years. This year it will also include a performance night showcasing the very active AV performance scene among undergrads, postgrads and staff at UTS. Three groups of performers will be presenting on the night in the wonderful purpose-built Bon Marche Studio at UTS, with 9.1 sound and high-resolution digital cinema projection. Performers will be: Emily McDaniel & Emma Ramsay, Roger Mills & Neil Jenkins, as well as Nick Wishart & Miguel Valenzuela.
Entry to the Bon Marche Studio is on Harris St, Ultimo on the ground floor of UTS Building three. http://www.uts.edu.au/about/mapsdirections/citymap.html
8pm Tuesday the 20th of October – The event is Free. Duration approx 80 mins.
Emily McDaniel is an Aboriginal artist, curator and educator from the Wiradjuri Nation. Her work traverses performance, new media, film and sound installation. This year she has begun curating and coordinating Refraction, a UTS Media Arts performance night that encourages the next wave of artists to get amongst it. She is currently completing her BA in Media Arts and Production.
Emma Ramsay works across many platforms of art including sound, video and installation. She is a founding director of the Sydney-based ARI Quarterbred, which promotes cross-disciplinary practice and developmental support for emerging artists. She is currently completing a Masters in Media Arts and Production.
Emma and Emily have been collaborating for over a year and have recently exhibited and performed at Electrofringe. Their practice tries to achieve a good feeling through sonic spirituality, fusing installation with performance and lo-fi with hi-fi. Although they can be placed under the umbrella of new media, sometimes they just like to leave the brolly at home. Golden Live will see the two of them collaborate with stunning visuals and epic sounds to create a cockle-warming sound that will make the little hairs on your spine stand on end.
Idea of South – Roger Mills & Neil Jenkins
Exploring ontological notions of southernness, Idea of South is a three-part radiophonic composition combining live networked terrestrial radio and Internet streaming. It is a musical sound journey integrating spoken word, live processed trumpet, violin and location recordings contributed by sound artists and phonographers throughout the southern hemisphere. It was originally broadcast simultaneously over Radio 2SER, FBi Radio and a Shoutcast stream in June 2009, and will be performed for Golden Live as a six-channel mix with live visuals by artist Neil Jenkins.
Performers are: Roger Mills – Trumpet, Hogi Tsai – Violin, Bernie Maier – Spoken word, Visual Mix – Neil Jenkins.
Roger Mills is a composer, sound artist and writer whose practice focuses on networked collaborations, internet performance and radio. He has worked internationally as a composer & sound designer, and is editor of the online sound art magazine and net label Furthernoise.org. Roger is currently an HDR student at UTS researching improvisation in remote online collaborations, and is the founder of the Ethernet Orchestra.
Neil Jenkins is an artist whose practice is heavily engaged with electronic media and the Internet. He creates highly interactive works that often require a live internet connection and the participation of their audience to function and exist.
Alphabet Soup is a new audio/visual performance by Nick Wishart + Miguel Valenzuela
Using a hacked Speak n Spell and other circuit-bent alphabet toys, animations, a cube, some code and an aspirin, Nick & Miguel will attempt to reassemble language into a sonic & visual feast.
Think R2D2 on acid!
Texas Instruments released the Speak n Spell toy in the late 70s, and these toys are now the holy grail for circuit benders. Containing one of the first commercially available speech chips, these wonderful toys can be retrofitted with a MIDI input kit allowing the phonetic sounds to be triggered by keyboards and sequencers.
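For the curious, triggering a retrofitted toy like this comes down to sending standard MIDI note-on/note-off messages, which are just three bytes each. A minimal sketch in Python (the channel and note numbers here are arbitrary illustrations, not what any particular retrofit kit actually maps to phonemes):

```python
# Build raw MIDI channel voice messages of the sort a retrofitted
# Speak n Spell's MIDI input kit responds to. Which note triggers
# which phoneme depends entirely on the kit -- these are examples.

def note_on(channel, note, velocity):
    """Note-on: status byte 0x90 + channel, then note and velocity (0-127)."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def note_off(channel, note):
    """Note-off: status byte 0x80 + channel, note, velocity 0."""
    return bytes([0x80 | (channel & 0x0F), note & 0x7F, 0])

# A short 'phrase': each note-on would trigger one stored phonetic sound.
phrase = b"".join(note_on(0, n, 100) + note_off(0, n) for n in (60, 62, 64))
```

These bytes would then be written out through a MIDI interface; libraries like mido wrap this up, but underneath it is just these three-byte messages, which is why a sequencer or keyboard can drive the toy directly.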
Nick Wishart – Working in music, sound and multimedia, his main area of artistic practice is the development of Physical Interactive Systems. Combining his skills in electronics, tactiles, MIDI, interactive devices, audio production and circuit bending techniques, Nick creates interactive multimedia installations and circuit-bent instruments that form the basis of the all-toy band Toydeath.
Miguel Valenzuela has been making video art since 1996. He has exhibited at Artist Run Spaces such as Mekanarky studios and Newman Lane Gallery, as well as in various public spaces around Sydney. He is currently researching multidimensional interactive video/sound art and its associative dimensions with regard to social norms, laws, formalities and their relative elasticity.
I love bits of gear & software that you can work with like an instrument. By that I mean fluidly and fast, where time spent honing some proficiency and familiarity is repaid with more interesting and unique results. Aside from the sort of relationship you can have with a video or stills camera you use over a long period, this isn’t a common way of working in film or video. Editing software and compositing, animation and effects packages lean toward a complex, detail-oriented, non-realtime workflow that, while it can of course be engrossing, doesn’t lead to the sort of play you can experience with a softsynth or something like Ableton Live.
When I started out making video, one of the tools I had was a Fairlight Computer Video Instrument, the much, much cheaper visual partner to the famous pioneering CMI sampling keyboard. What was most amazing about the CVI was that it really was an instrument; the tactile sliders and buttons allowed for live play with digital video in ways that haven’t been equalled until recent years.
While most of the presets were cheesy as hell, the synthesis model it was based on meant you could develop your own patches and save them to memory. And if you were like me, you laid the treatments out to tape and layered them up later in expensive online suites. While the sampling inspired by the CMI of course became foundational to modern sound and music production, live video instrumentation and manipulation as a performance or compositional tool has only slowly developed in response to the rise of VJ culture in clubs and in live music. There are now a range of decent software tools, and importantly, control surfaces like this nanoKONTROL (note the similarities to the CVI surface) are becoming widespread amongst the live visuals fraternity.
While live visualism is certainly diversifying into live cinema and the architectural, harking back to ideas of expanded cinema, the tools and techniques still fall into the domain of the eye-candy techno VJ-style aesthetics that completely dominated not so long ago. One thing holding back integration into more established (and high-resolution) video workflows is the lack of recording options in many of the key software packages; the emphasis is all on the moment. While in some ways this is laudable, it is also a stumbling block to live visual “playing” becoming part of the image generation workflow for high-resolution video, and for the sort of visuals unable to be generated in realtime. This is only partly about the computer speed & video hardware improvements required; most importantly, it is about recognising what this sort of instrumental play brings to moving image practice in general, both in how we produce and how we read video.
Never have I used so much gear for a performance; this Liquid Architecture gig is turning into a Pink Floyd-style equipment fest. We got the streaming working the other day. It’s a bit of a complicated thing, but luckily we have the resources to make it happen.
The chain goes like this:
These are mixed together in an Edirol HD video mixer and output to the projector, the streaming encoder and an HDV tape deck for recording the performance.
Shannon has his laptop/controller sound rig with a submixer that, as well as outputting to the main 7.1 PA, also sends feeds to both Jes and me so we can use audio frequencies to control some of the visual FX we have going on. The visual and audio feeds are fed to the encoder machine, a TriCaster, which is a mixer in itself and a bit of overkill for this job, but happily sends out a Flash video stream to the Flash Media Server sitting elsewhere on the campus. We got it working the other day, and the TriCaster is now sitting down in its room sending out some random stock footage to the server; you can see it working here.
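The idea of driving visual FX from audio frequencies boils down to measuring the energy of a chosen band in each incoming audio buffer and mapping it to an effect parameter. One way to sketch that (this is an illustration of the general technique, not what our actual rig runs; the frequencies and normalisation are arbitrary) is the Goertzel algorithm, which measures power at a single frequency:

```python
import math

def goertzel_power(samples, sample_rate, freq):
    """Relative power of `freq` in one block of audio (Goertzel algorithm)."""
    n = len(samples)
    k = round(n * freq / sample_rate)      # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# Stand-in for one buffer from the submixer feed: a 100 Hz sine at 8 kHz.
rate, n = 8000, 800
buf = [math.sin(2 * math.pi * 100 * i / rate) for i in range(n)]

# Map bass-band energy to a 0..1 effect parameter (normalisation arbitrary).
bass = goertzel_power(buf, rate, 100)
fx_amount = min(1.0, bass / (n * n / 4))
```

In a live setup this would run once per audio callback, with `fx_amount` feeding whatever visual parameter you want pulsing to the music.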
The only thing that remains to be worked out re: the streaming, I think, is how we might send a different high-quality stream out to the screens in the main campus tower and a lower-quality one out to the pages it will be embedded in; Kat Baird & I are going to sort this out through the week, hopefully. A lot of setup for a short performance, yes?
A few weeks back I drove down to the Snowy Mountains with the boot full of cameras. It was gorgeous, as it usually is, with a surprising amount of snow. Charlotte Pass is my favourite spot; there’s something so still and otherworldly about it.
Going back through the archives, found some fireworks.
This is more what I meant: continuous motor drive mode on a DSLR smeared together. In other production process news, it turns out that there is a 2GB file size limit when working in filmstrip animation mode in Photoshop, even in CS4. I’ve gone back to the filmstrip format recently because I like being able to paint big wodges of brush stamps across frames with the Wacom. I’m sure the limit was always there, but I didn’t notice it working at SD res; at HD, 2GB of filmstrip is only around 8 seconds.
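The back-of-envelope arithmetic on that limit goes like this (assuming roughly 4 bytes per uncompressed pixel; the filmstrip format also carries some per-frame overhead, so real-world durations come out a little shorter than the raw figure):

```python
def filmstrip_seconds(width, height, bytes_per_pixel, fps, limit_bytes=2 ** 31):
    """Seconds of uncompressed footage that fit under a file size limit."""
    frame_bytes = width * height * bytes_per_pixel
    return limit_bytes / frame_bytes / fps

# SD (PAL) vs HD at 25 fps under the 2GB cap:
sd = filmstrip_seconds(720, 576, 4, 25)    # just under a minute of SD
hd = filmstrip_seconds(1920, 1080, 4, 25)  # only around 10 s of HD, pre-overhead
```

Which is why the limit is invisible at SD res and bites almost immediately once you move to HD frames.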
We came back across on the ferry last night trying out the new XDCAM EX3 all the way, and I’m very impressed with it. On getting home, moving the footage into Final Cut was less than straightforward. It turns out that, apart from the latest version of FCP (6.0.5, which I had), you also need a variety of software add-ons from Sony to get it to work. None of this is clear from the included docs, or from anything other than deeply forensic googling for that matter. The real bits and pieces required are listed here. I was filled with dread at the thought of installing Sony driver software, which is usually made, as they say, of fail. My fears were misplaced, however, and the files transferred flawlessly.
Footage below is originally HD at 25p. I’ve slowed it to 50% and crushed the blacks slightly in FCP before running it through the FLV compressor. I’m keen to try the overcranking to 50p that the camera will do if you shoot at 720 rather than 1080.