A while back I posted about the POLYTIK synths when they were crowdfunding on Kickstarter. I was personally very drawn to these synths, and as soon as I’d found out that they were made by the Dirty Electronics ensemble I knew that they’d be good. The POLYTIK synths are not on Kickstarter any more; they’ve moved to Bleep, which is probably a good place for them.
What was the initial inspiration for the POLYTIK synths?
A friend of mine, Joana Seguro, who was at Artists & Engineers at the time, introduced me to Jack Featherstone. Joana had helped organize a number of Dirty Electronics events in the past and said “Fancy doing a collaboration with graphic designer and artist Jack Featherstone?” I met with Jack, and our relationship began to develop. I’d done a lot of artwork printed circuit boards (PCBs) in the past, and it was a case of Jack and I developing a specific visual language for the project. In fact, the initial idea came together very quickly. From a sound and operational perspective, I wanted to do something with a sequencer that also felt organic and analogue; so I looked at programmable ICs (integrated circuits) that could be used to control analogue sound modules. I really like what this combination of approaches and technology brings. I had no idea of how to program this stuff to start with. To me it was all esoteric technology and code and, looking back on it, it was painful getting it all working. But it was done, and I’m extremely proud that artists and not engineers have designed this!
The name Polytik also relates to the overall inspiration for the synths. Jack’s four limited edition record covers for Border Community, Another Land, where all four designs fit together to form a larger picture, gave us an idea we wanted to explore. Hence ‘polytik’, which is a play on the word polyptych (a multi-panelled artwork). We also discussed the visual qualities of technical drawings and how this aesthetic could be incorporated into the design. We wanted to break the rules with electronic component layout and think of these components in a more visual way rather than just for their function.
Originally the project was launched with Artists and Engineers (as part of an exhibition I think), how did it spin out of that?
The plan was always to work with Artists & Engineers to develop a hand-held synth/artwork, and Bleep as a distributor was considered right from the outset. The public may have got little windows into the development of Polytik: prototypes would occasionally appear alongside me on stage, we showed the synths at the opening of Machines Room at Limewharf in London, and Jack and I did an AV performance with them at the roBOt Festival in Bologna. It’s taken a few years to get them to this stage!
Is your plan to develop new modules following the Kickstarter? If so, can you give any clues as to future plans?
We considered a Kickstarter campaign and crowdfunding, but this did not sit too well with retailers. Polytik is being retailed through bleep.com. It’s a great home for it. There will be a stripped-down ‘workshop’ core for participatory and building events but for the time being there is plenty there to get stuck into! Jack and I have been developing an AV performance with Polytik that will get a run out at festivals this year, such as Sonar Hong Kong. I’m always working on new designs and commissions as Dirty Electronics. There’ll be a new very limited artwork PCB for an event at Café Oto, London in May.
What is it that appeals to you in designing modular synth systems, and why do you think they have grown so popular in recent years?
A few words that come to mind: play, experimentation, ephemerality, community, sound, nostalgia, collectable, independent, post-digital
Would you consider creating eurorack versions of the POLYTIK system?
No. I’ve always been interested in standalone hand-held objects and synths. There are benefits to a standard system and format, but not everything should conform to this. With a Eurorack system you can collect an amazing array of modules, and there are already enough people designing and making this stuff; but ultimately, I’m interested in restricting choices and limiting the possibilities available. How many oscillators do we really need? For example, with Polytik, can I work with and enjoy what I have in front of me? It’s about being creative within boundaries. And of course, Polytik is not necessarily designed to be used in the studio. It encourages a more mobile practice. It can be used on your desk, kitchen table, or even a rug in front of the fire.
I have known Intermorphic and what they do for a very long time now. In fact, almost as long as I’ve been running PalmSounds. I’ve known them in a variety of iterations, there’s more about that later, but what’s really interesting is the journey they’ve come through and where they’re going.
Tim and I actually started this interview quite a long time ago, so it’s good to finally get it out. Especially as I think we’re going to be seeing a lot more about generative music in the coming weeks.
With that, let’s get going …
PalmSounds: Tim, you’ve been involved in Generative music for just over 25 years now and that’s an amazing achievement in itself. Can we start off by talking about what it was that started you off on that journey? Can you tell us what made you want to be involved in generative, what sparked your imagination?
TC: On my 14th birthday my parents gave me a classical guitar and I’ve played and created music ever since – that guitar was what kick-started my love affair with music making that has lasted to this day. My personal site (http://colartz.com) is now home to some of my musical creations.
Looking back, I guess the original inspiration for our approach to generative music engines came from the classic Foundation series by Isaac Asimov, which I probably started to read some time in the 70s. In it, Asimov introduced the Visi-Sonor, a device which could be manipulated by a skilled operator to create amazing multimedia entertainments combining both music and visuals. Today I imagine it might be called a kind of hyper-instrument. I thought no more of it until 1986 when, for some reason, I started deliberating about a sphere-like composition and performance instrument and madly figured it would be fun to try to create one. The slight problem was that I had no idea where to start and I certainly knew I could not do it on my own!
In 1989 I found the courage to take myself off to business school where I hoped I would learn how to start and grow a business and maybe meet some like-minded people, too. It turned out it was exactly the right thing to do and SSEYO (https://intermorphic.com/sseyo) was born soon after. Lots of ideas were swirling around at that time, and it is a bit tricky to unpick what happened when and how we ended up where we have.
One of the ideas came from noticing that a ticking clock seemed to provide a focus for the mind, somehow helping it to filter out extraneous sound and foster the establishment of calm and stillness. In this situation the brain is obviously doing an incredible amount of real-time audio processing work, totally automatically. This led me to start thinking of a listener’s brain as the ultimate creative instrument. The brain is, after all, what takes music input and interprets it in the context of a unique set of historical and emotional occurrences to create an experience that is unique for every person.
Another idea came through wondering if a chance-based engine with sufficient boundaries and rules might be able to assemble something music-like and interesting enough to engage the brain and stimulate it to fill in the gaps and make interesting and serendipitous connections. I imagined a set of ball-bearings traveling down a chute, in a way akin to a Bagatelle or Pachinko game. Each time the ball bearings would make their journey, they would combine to travel a different set of paths, bouncing off each other as they would go, but the available paths would be constrained by the internal physical design and boundaries of the game – an envelope of musical possibilities as it were.
At some point we did consider algorithmic composition, and even tried neural nets, etc., but we slowly realised that style-based music systems were not what we wanted to build – we would leave that to others to explore. The path that just felt right for us was in designing engines that used stochastic/aleatoric techniques to compose within some overall boundaries and then letting the listener’s brain do the rest. We did not yet understand the pivotal role that the sounds themselves would have to play, but that would come later.
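To make the "compose within boundaries" idea concrete, here is a deliberately tiny sketch in the spirit of the bagatelle analogy above. Everything in it – the pentatonic scale, the pitch range, the maximum leap between notes – is an assumption chosen for illustration; it is not how the SSEYO Koan engine actually works, just a toy demonstration of chance operating inside fixed constraints.

```python
import random

# MIDI note numbers for a C minor pentatonic scale (an assumed "envelope
# of musical possibilities"; any constrained pitch set would do).
C_MINOR_PENTATONIC = [60, 63, 65, 67, 70, 72, 75, 77]

def generate_phrase(length=16, max_leap=2, seed=None):
    """Random walk over a fixed scale: each note may move at most
    `max_leap` scale steps from the previous one. The randomness is
    the ball bearing; the scale and leap limit are the chute walls."""
    rng = random.Random(seed)
    idx = rng.randrange(len(C_MINOR_PENTATONIC))  # random starting degree
    phrase = []
    for _ in range(length):
        phrase.append(C_MINOR_PENTATONIC[idx])
        step = rng.randint(-max_leap, max_leap)   # chance-based movement
        # Clamp to the scale boundaries so the walk never leaves the envelope.
        idx = min(max(idx + step, 0), len(C_MINOR_PENTATONIC) - 1)
    return phrase

print(generate_phrase(seed=42))
```

Each run with a different seed takes a different path through the same constraints, which is roughly the property being described: never the same twice, but always recognisably "inside" the designed envelope.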
We had to start somewhere, so we kicked off with a concept we called an “Environmental Management System”. It was meant to be a device that would act as a kind of sonic buffer to a world of external sensory “overload”, working along the lines of what was discussed above. However, to get anything working we needed a suitable music engine! After a few initial explorations it did not take long to realise the real key to any musical device or instrument would be in the software smarts that powered it, so we started in earnest developing what was to become the SSEYO Koan Music Engine, releasing the first beta in 1992. And, of course, that was the start of the journey we have been on since.
PalmSounds: What was the reaction like to the SSEYO Koan music engine, and how did you deal with what users wanted?
TC: Over the years we have been very lucky to have worked with many really helpful beta testers, early adopters and musicians, including the genius Brian Eno whose “Generative Music 1” with SSEYO Koan software (https://intermorphic.com/sseyo/koan/generativemusic1) definitely upped our profile and helped popularise the SSEYO Koan system. We love communicating with our users, and, as there are only two of us again, we reserve an especial place in our hearts for those that can also be constructively critical :).
The reaction back in 1992, when we got out the first betas of SSEYO Koan (https://intermorphic.com/sseyo/koan), was very encouraging, so I guess we decided it was worth continuing. I seem to recall that people felt that the output could be interesting and felt organic; this was promising and indicated we might be doing something right.
We then just got on and did the things we wanted to do, listening to feedback along the way. Early development of a system like Koan was quite open-ended in scope, so we were always short of time (still so very true today). We experimented with all sorts of ideas, including music rules which changed or became active according to the time of day. Our approach just meant we tended to home in on feedback concerning bugs and features that just didn’t work quite right or that could work better. Of course, over time we picked up a number of helpful suggestions concerning potential improvements to what we were doing, but as it is now so long ago I just cannot recall specifics.
We have always had a long, long list of related things *we* want/plan to do (time willing) and a pretty clear idea of where we want to go, so we don’t tend to look around much at what others are doing. This is no doubt a failing of ours, but, as people are now finding with Facebook, it is easy to get envious of what others are doing – we would rather just get on with what we want to do. That doesn’t mean we do not read, listen to and sometimes take on board feedback or keep a loose eye on the market and market developments; it just means that reported “crashers” and “bugs” tend to get the most attention.
By way of analogy, we tend to feel like we are in a small boat on a big sea. For us to have any chance at all of surviving the storms and the swell until reaching the next port we have to make sure we are taking account of prevailing conditions or those we can see coming. That means we are always re-checking the horizon and adjusting our jib. This in turn affects the order of things on the long list of things to do. You can imagine, therefore, that getting new stuff on to that list can be quite tricky, which is why we often find it easier to deal with feedback that matches up with something already on it!
PalmSounds: After SSEYO was acquired by the Tao Group things didn’t go too well. What happened, and how did that impact your work and how you felt about it?
TC: I tend to believe that there are always 2 sides to every story, that if you are trying to make the future then time is too short to dwell unduly on the past, and that nothing worth doing is ever going to be easy. It is the nature of life that it exists only now, so rather than looking back I prefer instead to try to find the good in any situation and then move forward from there. It is a positive approach because to build anything you need to stay positive.
In many ways things did actually go well for us at Tao Group and we got to work with many really nice, very clever people.
We were able to keep a smart audio team together and undertake the challenge of developing the Tao intent Sound System (iSS). The iSS was a collection of audio technologies that was deployed as required in the Tao intent software platform, a true multi-threaded operating system and virtual machine primarily for mobile devices. Tao Group had a stellar set of investors and Tao intent was licenced to a number of big name manufacturers.
It was through what we did on the iSS and our employment at Tao Group that we got to experience the mobile and device world from the inside out. This was to our long term benefit as we got to understand more about the mobile ecosystem, its workings and relationships. Plus, we made some good friends and connections, too!
Whilst at Tao we also worked on two “products” that were released. The first was the Advanced Polyphonic Ringtone Engine, and the second was SSEYO miniMIXA. Launched in 2004, SSEYO miniMIXA was, for its time, a very advanced mobile music mixer app that ran on Symbian, Windows Mobile and intent (and wherever that could be deployed).
Although for financial reasons it was a dreadful day for all involved when Tao Group folded in 2007, it also finally set us free to start over with Intermorphic: a door was opened that let us set out once again on our generative journey, but from a new vantage point.
In 2008 Intermorphic managed to secure the rights to the iSS, SSEYO miniMIXA and all past SSEYO IP. Much of what we did at Tao is now in the Intermorphic Sound System. The ISS has been put to good use and underpins all our current apps, including Mixtikl (https://intermorphic.com/mixtikl) which we evolved from SSEYO miniMIXA.
So, just as without the SSEYO years there would not have been the Tao years, without the Tao years there would not have been the Intermorphic years – that is, in terms of how they have turned out, of course. There would still have been Intermorphic years, but just different years with different generative apps.
Of course, running a small, niche software business is always going to be hard work and require heaps of passion and bags of luck. We are truly grateful to and for the amazing people we have worked with along the way and appreciate how fortunate we are to still be standing. Onwards!
PalmSounds: So, that brings us up to the present day, or rather, up to the Intermorphic years. How have things been different for you now that you make the decisions on direction and new products? Has it been a creative release, or, do you think that being part of a commercial environment with all its pressures actually helps the creative choices you make?
TC: When we first started SSEYO we were like a young new band; we had total creative control over what we did and bags of energy and time. We were in a niche area and as SSEYO developed we had to find a way to feed it – which at the time meant Venture Capital. So, when we raised Venture Capital (VC) there rightly came with it some big, new and important VC-related factors to address. Then, after SSEYO was acquired by another VC-backed company there were yet more VC-related factors and we were then of course working for someone else with their own requirements and direction.
When we took the decision to start Intermorphic it was to regain creative control over “something” and we knew we would, as before, have to live on our wits and organic growth. It was different with Intermorphic, though, as we were older, with families, and had less time and more pressures. As well as that there were some tight resource constraints. All of these continue to this day to bring their own challenges.
Creative endeavour, if it is to be sustained, doesn’t exist in a vacuum: it has to be fed and supported. A lot of our creative effort at Intermorphic, as it did at SSEYO, goes into trying to figure out how to survive. There is always a lot of tyre kicking and iterative thinking around what we could and even should do in the context of our audience and our resources, which have always been very constrained. In the light of that and in the early days of Intermorphic, the first app we built was a cut-up text generator desktop app called “Liptikl” (https://intermorphic.com/liptikl), a useful non-music tool which was spurred by me having trouble coming up with new word associations/lyrics for my songs. Once that was out of the way we decided we were ready to start work on a clean room build of a generative music engine we called the Noatikl Music Engine (NME) (https://intermorphic.com/nme/3/guide), together with a generative music composer desktop app and interface to it that we called “Noatikl” (https://intermorphic.com/noatikl).
Those early days for Intermorphic feel like a long time ago; today there is way more competition and ever more user expectation as the app market is now global, and there are also many more business models to consider and factors to deal with – all generally set against diminishing returns. It is a perfect storm in many ways, and whilst things are whirling around us it can often seem as if we are moving very slowly. But, strangely, this situation is also a stimulus, in that it forces us to focus on what we do best, what our USPs are and what it is we can do that can bring real value to our customers and fans.
So, it is a bit of a dichotomy really. On one hand we have more creative freedom (no one now tells us what to do) but on the other we have less creative freedom (surviving in the current app marketplace). It requires a good deal of effort to counter inertia and to keep innovating, but we know that nothing stands still and it is in our DNA to innovate, whether that is through new features for existing apps or new apps altogether. In many ways, the biggest creative challenge of all is to discover and agree on interesting green pastures to farm and explore that fit our current resource profile. In our new developments we try to keep a sense of our history, identity, domain, audience and competitive environment and look to find creative ways to combine our existing code base with new ideas, novel approaches and simple interfaces. We are excited therefore about the potential for our Wotja Reflective Music System (https://intermorphic.com/wotja), first released in 2014, which ticks all the boxes for us, and it is an app that we want to do much more with. More on that in a bit.
PalmSounds: People find generative music difficult to get to grips with. Why do you think that is, what is it about generative music that you think people struggle with, and where do you see tool makers like yourselves going in the future to help?
TC: I think to get some perspective on this it helps to consider Computer Generated Music (CGM) in general. By this I mean music that a computer generates, composes or mixes itself, whether by rules, chance, algorithms, AI or whatever. It could be assembled from any size of building block, from a sample to a loop to a recording whether made by human, animal, nature or computer. Unless a human has a clear hand in controlling, directing and imbuing meaning to it, to my way of thinking any of the foregoing is Computer Generated Music. I am sure it is not a perfect description, but it will suffice for now and in the context of my thinking outlined here.
Over the years I have enjoyed hearing CGM, principally the Generative Music that our systems have generated, and I have often pondered on the nature of it. My thoughts change and have kept changing, and I am not sure there are any hard and fast answers as it depends on the person and a number of factors. I think some of the (by no means exclusive) factors to consider relate to how a creator’s or listener’s tastes or needs vary for the following:

1) Compositional control (e.g. what control do I need to have over what happens?)
2) Intrinsic meaning (e.g. do I need it to have any emotional, personal meaning?)
3) Context (e.g. do I need the music to somehow fit the context of where/when/how it is experienced?)
4) Musical style & structure (e.g. do I need it to have a feeling of style, or for it to have a directional vector?)
5) Foreground/background music (e.g. do I want to actively listen to it?)
6) Interactivity (e.g. do I need it to be created live, in the moment; as in “inmo” (https://intermorphic.com/inmo)?)
7) Sounds used (e.g. do I want natural or synthetic sounds, created live or sampled?)
8) Ideas (e.g. am I wanting to use it to create ideas for me to later use?)
9) Ease of creation (e.g. how much time do I need to set aside to make it; how hard is it to do well?)
10) Sharing (e.g. as a creator, do I want to share what I have made?)
11) Turing test (e.g. can you be certain it was/was not created by computer?)
Every creator/listener will have different needs and taste profiles for all of the above, and these are likely to change according to mood, context, day, season, etc. So, as you can imagine, asking why “people” find generative music difficult to get to grips with is a difficult one to answer; the answer is a very personal one.
As I said in a previous answer, we are primarily concerned with music that is created stochastically/aleatorically. It might be easy to think that this kind of music can have little emotional impact on a listener. However, I am reminded of a time, back I think in 1996, when I was in the zone listening to Timothy Didymus’ “Float” (https://www.intermorphic.com/sseyo/koan/float – Timothy is an amazing generative musician who we still have the honour of working with). I think I was listening to “Midheaven” at the time (https://www.intermorphic.com/sseyo/koan/float/#float-audio) and I distinctly recall entering some kind of quasi-state where I felt I could “taste” the music; it was a remarkable and moving experience, I think related to the subtle changes in the music. As far as I know it was something peculiar to me and it only happened the once, and I have not experienced it with other non-generative music either, but it emphasised to me the personal impact that CGM could have in the right context.
In the context of the music our engines make I have been pondering quite a bit about “Intrinsic meaning”. It seems to me that everyone has a “music player“ in their head. However, experiences and appreciation can be different as each person’s “music player” has associations specially keyed with their age, culture, demographic, personal/shared memories (e.g. concert) etc. And, even then, to really appreciate music, repeated listenings are often required. I love playing my guitar and writing music and, at least to me being a creator and songwriter, there is a fair amount of meaning in what I make – my music is distilled from my thoughts and emotions and is played with passion. However, if I share a recording of my music for someone to listen to, then what meaning can they, as a listener, extract from my recording (listening to it at a distance as it were)? They do not have my memories or context to unlock or quickly interpret it, to hear it as I hear it. I find it interesting to then consider CGM in this kind of sharing context, and it raises all kinds of questions related to meaning.
One other factor that plays a part in all this deliberation is that over the years we came to understand that generative music played against an image seemed to elicit a deeper reflection on the image, unlocking thoughts and memories. As a result, way back in 2010 we had decided to run with the term “Reflective Music” as a descriptor for the effect that the output of our generative music system could engender, and secured the reflectivemusic.com domain. In hindsight it was a good move, as the descriptor was to become even more apt…
So, trying to make sense of it all, I stood right back and got to thinking about text. Everyone has both a text player and a text writer in their head (language aside), so people can both easily and quickly create it and assimilate/understand it. Text has meaning, and a reader’s imagination puts flesh on the bones; in a musical analogy it is a bit like a visual MIDI score played through different MIDI synths. Unlike music, though, text is not particularly temporal, and you can also quickly change it or respond to it. We figured it might be fun to play around with a kind of music messaging where a creator’s text could be used to convey any meaning required and the text itself could be used by our engines, as a seed, to generate melodies/music as an accompaniment to it. This is the general idea we are currently exploring with the Wotja Reflective Music System (https://wotja.com). It quickly became clear that “Reflective Music” was the perfect descriptor because the meaning of the text can be reflected upon and the generated melody is, in turn, a reflection of the text.
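The text-as-seed idea can be illustrated with a very naive sketch: map each letter deterministically onto a scale degree, so the same text always yields the same melody. To be clear, Wotja’s actual Text To Music mapping is not documented here – the scale and the letter-to-degree rule below are assumptions invented purely to show the general shape of the idea.

```python
# An assumed scale for illustration: C major pentatonic, as MIDI note numbers.
SCALE = [60, 62, 64, 67, 69]

def text_to_melody(text):
    """Toy text-seeded melody: each letter picks a scale degree.
    Deterministic, so the melody is a 'reflection' of the text --
    change the text and the melody changes with it."""
    notes = []
    for ch in text.lower():
        if ch.isalpha():  # ignore spaces and punctuation
            degree = (ord(ch) - ord('a')) % len(SCALE)
            notes.append(SCALE[degree])
    return notes

print(text_to_melody("Reflective Music"))
```

A real system would presumably layer rhythm, harmony and generative variation on top of a seed like this, but even this crude mapping shows why the same message always "sounds like itself".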
PalmSounds: Can you talk about how your products have evolved in the time you’ve worked with generative music?
TC: We started out by building content-open and content-extendable creativity tools that let others make and record things of their own, to use however or wherever they wish. Even after all this time we see no reason to change our direction and, besides, we really enjoy seeing and hearing what other people can create with our tools and apps!
In the intervening years many, many external factors have changed, none more important than the emerging importance and capability of mobile devices. We first started work in mobile in 1998, in “Mobile Music Making” as it were, so mobile thinking has been in our DNA for many, many years.
However, we really started thinking in earnest about “mobile first” back in the noughties, sometime before I started claiming in 2004 (with good reason, and prior to the release of SSEYO miniMIXA), that “the mobile phone is the next electric guitar”.
There have been lots of stops, starts and dead ends along the way for us, and it took us a long time to get there, but we now think mobile first for everything and have done so for some time now – be that apps, business models, websites or anything else. They all have to work and work well on mobile; everything then scales up from there.
Part of the issue with mobile is that it is mobile – screen size, performance, input mechanisms, app interoperability, UIs, usability, complexity, operating system, sharing, social, marketplace, business models etc. all bring constraints and opportunities. All of these have to feed into our app planning, design and thinking as we evolve our apps.
One of the big things we learnt very early on, and alluded to earlier, was that it was all very well having a powerful generative music engine, such as our Noatikl Music Engine (NME), but if you want someone else to hear, and experience in the same way, the generative music you have made, then the music has to be *portable*, i.e. there has to be a player for any desired listening device. That means you generally need to either:

A) restrict your compositions to rely on the use of audio samples (these are big and have issues if you want to share them, so are generally inbuilt), maybe with some pitch shifting or post-FX;
B) build or licence and include some kind of good quality, flexible, MIDI-like modular synth sound engine that allows real-time polyphonic sound generation with sound shaping; or
C) use some combination of both of those.

We chose the latter route, C), so over the years we have spent a good deal of time evolving our integral Partikl Sound Engine (PSE) (https://intermorphic.com/pse/3/guide). This is now a powerful and customisable modular/SF2 MIDI synth with live FX that is included in all our music apps, and we recently updated it to allow stereo synth sound design. Other than trying as best we can to keep up with advances in sound generation, in so far as it is relevant to portable generative music of course, the main problem with something this flexible relates to the interfaces you choose to provide to it to allow complex sound design, especially on mobile, and how accessible they can be. This is an area we have made some progress in, but we still have a long way to go and are still mulling over exactly what to do next.
Aside from the ongoing development of our essential core technologies such as the NME and PSE, we have found it is the mobile factors that have shaped our thinking the most with respect to our products over the last few years. So, although we started out in 2007 with desktop versions of Noatikl and Liptikl, we now have mobile versions of both of those. Mixtikl is the evolution of SSEYO miniMIXA, and at the time of Mixtikl’s first release (2008) we built it with a scalable XML front end so the same UI would work on mobile (Pocket PC) and desktop (Windows and Mac), meaning it started out as a hybrid. Apps that came after that, such as the Tiklbox Ambient Generative Music Player (https://intermorphic.com/tiklbox), have been totally mobile-centric designs, using native controls where we can, and we are looking to develop desktop versions of these as we adopt Swift where we can.
Business models really do have a major impact on what developers can do and how it is done, so we experimented with many different approaches trying to find the right balance for our offerings. After much trial and error over a few years we have now pretty well settled on a model that best allows us to move forwards. We have built Wotja to be not only a creativity tool, but also a generative music publishing system in its own right, in that totally custom Noatikl pieces can be imported into Wotja, saved or exported as wotjas, and then played, for free, in Wotja.
It is not easy these days surviving as a niche app developer, what with VC money continuing to pump into apps resulting in market consolidations, a global app market with ever more noise, shifting customer expectations and everyone suffering increasing pressures on their time. However, we love doing what we do, we love our customers, fans and friends and, for those interested in the areas we work in, we expect to continue trying to innovate in our own niches and even to find a few new ones!
That’s the end of the interview that Tim and I worked through earlier in the year. To finish off this post Tim has added a few words about what’s happening now at Intermorphic.
A lot has happened at Intermorphic since we started this interview. We have some exciting new developments in the works, and they are not far off now, hence this “end of year” addendum.
In the last year we focused hard on stability and released a number of related updates for Noatikl 3 and Mixtikl 7. We wanted to be in a good position for the long journey to our next major milestone in Reflective Music, namely Wotja 4. This has been a major undertaking for us as we are gradually consolidating the best of Noatikl 3, Mixtikl 7, Tiklbox 1 and Wotja 3 into one app, Wotja 4. We expect to release the first iOS version and macOS Safari App Extension in January 2017 followed by desktop versions sometime later in the year.
Wotja 4.0 is actually the start of a whole new journey for us, a new stage of evolution if you like, and there is a lot to be excited about. Being able to focus on just one reflective AND generative music app with full editing means we will be able to move it forward faster with improvements and extensions and it also means we are better able to evolve our music and sound engines. To that end the Noatikl Music Engine is evolving into the Intermorphic Music Engine (IME) and the Partikl Sound Engine is evolving into the Intermorphic Sound Engine (ISE). Both of these engines are at the heart of the new Wotja Reflective & Generative Music System. Of course we will also be extending the Intermorphic Text Engine, too, in due course.
Although we don’t feel it appropriate this close to release to pre-announce the details of Wotja 4 :), one thing we can say is that the IME in it will itself support multiple Text To Music (TTM) Pattern voices to allow the creation of richer reflective music tapestries that can sit on a gorgeous bed of generative music – with everything being deeply and totally customisable. TTM has been in Wotja since its first release, but prior to Wotja 4 it was not in the engine itself so only one was allowed in a wotja – no longer!
If you have read this far then many thanks for your interest in what we are doing and we hope that you might decide to get and try out Wotja 4.
Especial thanks to Ashley at Palmsounds who does more than anyone else we know – and we have no idea how he does it unless there are 10 of him.
Our season’s wishes to all! Intermorphic.
So that’s all for 2016! Have a lovely New Year and we’ll be back again in a day or so.
In case you missed this the first time around, here’s my interview with Adrian Belew. We talked about his iOS apps (FLUX:FX and FLUX by Belew) and his soundtrack for the Pixar short ‘Piper’. Hope you find it interesting.
I’ve been a fan of Adrian Belew’s work for a very long time, and when he started releasing iOS apps it was a great opportunity to understand more of his creative process. So it was amazing to get the chance to interview him and talk about his soundtrack for Pixar’s new short ‘Piper’, his unique ‘Flux by Belew’ app which contains a wealth of original content both audio and visual, and his ‘FLUX:FX’ apps which I’ve used extensively and have been very well received by the mobile music community.
Having never spoken to Adrian before, it became apparent almost immediately that here was an artist whose creative vision was truly phenomenal. Adrian’s ideas for ‘Flux by Belew’ span more than three decades, and when you use the app you can really appreciate that.
Adrian’s ‘FLUX:FX’ apps have expanded the creative horizons for a lot of mobile musicians, myself included. It’s an app with a huge amount of depth and superb possibilities, which Adrian described:
“There’s just nothing else that does what FLUX:FX does”,
“…an amazing set of miracles in our pocket!”
“I love that idea, when you do the impossible”
And for the record I think he’s right. But it was interesting to know that “(he) was really enthralled by the idea that so much technology and ability could be put into an app and you could charge such a low price for something that would take you thousands of dollars to do if you had all the gear, and even then there’s no way to do it!”
But don’t just take my word for it, listen to the interview and hear what he has to say on all of these subjects:
If you don’t know these apps you should definitely take a look at Flux by Belew (his app of constantly changing music, sounds and visuals), FLUX:FX (iPad version) and FLUX:FX play, his excellent FX apps for iOS. They’re all really worth checking out.
I hope you enjoy the interview, and a massive thank you to Adrian Belew for making time available to talk, it was a lot of fun for me!
It’s no secret that I have a little bit of love for tapes so I thought it would be great to round off today with an interview of one of my favourite tape labels, Chemical Tapes. Here’s what Rob from Chemical has to say about the revival of the cassette.
For years people have said that the cassette is dead, but it seems to be making a resurgence. What factors do you think are driving the popularity of tape?
Certainly cost. I know in my case I wasn’t very interested in releasing CD-Rs or being a netlabel, and vinyl was too expensive to just jump straight into.
You have fairly granular control over just how much you want to spend. Personally, from a design and aesthetic point of view, we wanted pro-dubbed cassettes and pro-printed covers, but you could choose to home-dub and home-print if you want. It becomes very feasible to have your own label, or as an artist to self-release a physical product, without it costing you a fortune.
There is also that nostalgic retro vibe that comes with cassettes for my generation. Much like vinyl, it has that charm of analogue imperfection and a tangible listening experience that engages you, giving you time to properly connect with and digest the music. Perhaps the resurgence of the format is also a reaction to the insta-access of the digital age. The limited edition nature of cassette releases probably helps too: small limited runs from artists that people are really into help to generate that buzz.
Given that it is becoming more popular, why do you think more artists are looking to make their releases on cassette and what sort of artists do you think work best with the medium?
The sheer number of cassette labels out there gives lesser known or up-and-coming artists a chance to get a release out, as opposed to a larger vinyl or pressed-CD label. Due to the reduced costs, tape labels can take more of a gamble. Ever since the cassette ‘died’ it’s still been a valid format in underground/experimental music scenes, and it’s generally these artists that embrace the format.
What makes a really good cassette release in terms of the content from the artist and the overall design of the package?
Well, first and foremost it’s about the music, but many tape labels really push the boat out on the art and design front with super custom handmade packaging; when you only have, for example, 75 copies to worry about you can really go for it. Also, these days you tend to see tapes released alongside the digital version, which is something I do and something I look for when buying from other labels myself. It’s nice to have the limited edition physical object with the flexibility of the digital version.
What kind of artists are best suited to cassette releases and why?
As I mentioned before, it’s generally the more experimental DIY music scenes that have enjoyed the format in recent years; it’s these niche genres where artists, labels, and listeners are really open to ideas both old and new and are willing to try something different. But who am I to say? The beauty of the format is that anyone can do it; just know your listeners and make sure they’re as into it as you are!
What’s been your most successful release and why?
I think each release finds its little corner of appreciation; obviously the more well-known names tend to sell quicker and are more likely to get accepted by mail-order shops. I will only release music that I personally love. If it happens to sell well that’s great, but it doesn’t dictate what I put out.
What advice would you give to any artists looking to start releasing their material on cassette?
Don’t expect to make a shedload of cash. Oh, and think about balancing your tracks between the two sides of a tape, that’s always helpful!
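Rob’s tip about balancing a release across two sides is really a small partition problem: split the track list so the two sides’ running times are as close as possible. As a toy sketch (function name and example durations are my own, not anything from Chemical Tapes), brute force over subsets is perfectly fine at album scale:

```python
from itertools import combinations

def balance_sides(durations):
    """Split track durations (in seconds) into two tape sides with the
    smallest possible difference in total running time.
    Brute force over subsets -- fine for album-sized track lists."""
    tracks = list(range(len(durations)))
    total = sum(durations)
    best = (total, [], tracks)  # (difference, side A indices, side B indices)
    for r in range(len(tracks) // 2 + 1):
        for side_a in combinations(tracks, r):
            a_time = sum(durations[i] for i in side_a)
            # |total - 2*a_time| is the gap between the two sides
            diff = abs(total - 2 * a_time)
            if diff < best[0]:
                side_b = [i for i in tracks if i not in side_a]
                best = (diff, list(side_a), side_b)
    return best

# Example: six tracks, lengths in seconds
diff, side_a, side_b = balance_sides([312, 245, 410, 198, 365, 280])
```

For more than ~20 tracks you would switch to a greedy or dynamic-programming approach, but cassette releases rarely get that long.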
Do you think other more esoteric formats will start to re-emerge as cassette and vinyl become more popular?
I’d be surprised if any reach the level that those two have, it’s cool to see the odd floppy disk or VHS release though. It’s nice to know that people are still out there experimenting and doing their own thing with various formats from the past.
What’s coming up next and what does the future hold for Chemical Tapes?
We have two new releases that are almost ready, ‘Sima Kim – Whatever’ and ‘Bastian Void – Phonics’. Two quite different tapes: Sima Kim’s is lovely delicate post-classical ambience and includes two remixes from Wil Bolton and Hakobune, while Bastian Void’s is a serious retro dream-state synth album. We’ve also just had our first batch of t-shirts made up, which I’m really happy with. We have more releases planned for the future and will continue to provide various doses for research purposes. We also hope to launch our ‘sister’ label, Chemical Beats, soon: a vinyl-based label for deep psychedelic techno experiments. Both Andrew and myself have had a love of techno for 20-odd years, so it will be nice to have a platform for that form of music too. We are currently in discussions with artists and are looking forward to seeing how it goes.
If you want to know more about Chemical Tapes then you can find out all about their releases right here. It’s well worth a look.
I was really pleased when Derek contacted me about publishing an interview with Sam Tarakajian about the development of Mira, and it’s great to see the app arrive on the App Store today. It’s an exciting step forward in bringing not only the desktop and mobile closer together, but also in extending the creative reach of Max. I hope you enjoy the interview and enjoy Mira.
Derek Piotr: An Interview with Sam Tarakajian.
Sam and I have been conversing and collaborating a lot recently. In February he produced a music video for one of my tracks, and in March he made a visit. This was just days before he was to travel to Leicester to give his first presentation on “Mira”, the iPad app he’s spent the past few years developing for Cycling ’74. During his visit with me he rehearsed the presentation, which led to a discussion about music in general, where it’s headed, and what he anticipated being the biggest change for electronic music: tactile methods of working with otherwise abstract sounds. Sam’s main goal with Mira seems to be to introduce some “love” and excitement into the often-austere world of producing and programming, as well as implementing a more “hands-on” environment, the depths and features of which are nothing short of mind-blowing. He took time to answer questions via email while presenting Mira in Daejeon at NIME.

Derek: When did you begin work on Mira? Where did the impulse stem from and how has the project changed since you first started working on it? Was the idea always to have a Max app for touchscreens, or were your aims different at first?

Sam: So, I’m not sure that I should mention it, since it is sort of the most embarrassing video of me on the internet, but if you trace Mira all the way back to its roots, you’ll find this project called Gelie. The aforementioned video can be watched here. Gelie was the codename for my undergraduate thesis project (never completed), which was trying to use an iPhone to make a gestural musical controller. I’ve always been really interested in gesture (probably because I’m such a lousy dancer), whether musical or intentional but especially physical. Gelie was my attempt to create an interface for defining instruments that you would play with gesture.
At the time I was getting really inspired by the Reactable, so I thought, “What if you used a Reactable-like language to define gestural instruments?” The result was Gelie, where you drag effects, gestural modalities and sound generators together to build instruments. The interface was all on the iPhone but the backend was a PureData patch that would receive updates from the iPhone and use those to dynamically build a separate Pd patch. So, I had an iPhone frontend and a protocol for synchronising with a desktop backend. Obviously, when I made my pitch to Cycling ’74, there was a lot of previous work to build on.

D: What does ‘Mira’ mean, stand for?

S: Mira is how my aunt, who was born and raised in Rhode Island, would have said the word “mirror”. You make a patch, Mira “mirrors” that patch, lots of people in Providence use Max–I guess it’s a bit of a stretch but there it is.
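The split Sam describes — a touch frontend streaming state updates to a desktop backend over a small protocol — is a pattern worth sketching. Mira’s actual wire protocol isn’t documented here, so the following is purely illustrative: a made-up JSON-over-UDP message format with a loopback demo standing in for the iPhone/desktop pair.

```python
import json
import socket

def encode_update(widget_id, value):
    """Serialise a single UI update (e.g. a slider move) for the wire."""
    return json.dumps({"id": widget_id, "value": value}).encode("utf-8")

def decode_update(payload):
    """Parse an update back into (widget_id, value) on the backend."""
    msg = json.loads(payload.decode("utf-8"))
    return msg["id"], msg["value"]

# Loopback demo: the "backend" listens, the "frontend" sends one update.
backend = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
backend.bind(("127.0.0.1", 0))          # let the OS pick a free port
port = backend.getsockname()[1]

frontend = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
frontend.sendto(encode_update("slider1", 0.75), ("127.0.0.1", port))

payload, _ = backend.recvfrom(1024)
widget, value = decode_update(payload)
frontend.close()
backend.close()
```

In practice a controller protocol like this would more likely use OSC and add sequence numbers so stale updates can be dropped, but the shape — small self-describing messages flowing one way from touch surface to patch — is the same.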
D: What haven’t you been able to do with Mira that you hope to be able to add in the future, or develop within a different context?

S: Loads of stuff. As you can well imagine, one of the biggest problems we have working on Max is too many ideas, not enough time. Mira is no different. The two main things I’d like to add to Mira would be dynamic visuals and haptic feedback. Both are forms of primary feedback, a concept from musical interface design which refers to anything the interface does other than make sound. If you’ve got a virtual slider, then your primary feedback would be visual, and if you were playing a guitar the feeling of strings vibrating under your fingers would be tactile primary feedback. Anyway, both of these are extremely important for designing good interfaces. So I want to bring dynamic OpenGL to Mira, followed by some way of adding vibrotactile feedback. Obviously that second one’s going to be pretty tricky, though.

D: How important is it for you to let elements of chance or “subconscious” processes kick in while developing? Was the development of Mira driven mainly by linear, logical development or sudden swoops?

S: Oh, equal parts both, I’d say. If you look at any long term project as a kind of exploration, then inspiration might be like climbing to the top of the mountain, surveying the landscape and developing broad insight, and then the dedicated, focused work you have to do later involves hiking down into the valley and wading through the undergrowth. Developing a strong understanding of the territory requires both. I guess what’s special about Mira, as a software project, is that some of the coolest moments of inspiration come from building and deploying internal builds. It’s not something you can really do with a piece of music or a painting–release a demo version, get feedback, and then go back and make changes.
With Mira for example, one of the big questions we had early on was whether the iPad should mirror patching mode or presentation mode. So we made a test build that let you jump back and forth between the two options. Well, it turns out that being able to go back and forth was itself a cool feature, so we left it in.

D: What was the biggest challenge in developing Mira? What was easiest? Of those two, which is more important?

S: The biggest challenge was and continues to be networking. Turns out automatically setting up a connection between two computers is hard. There’s a lot of moving parts and a whole lot of abstraction between you and what’s actually going on at the socket level. It definitely makes me much more forgiving when it comes to streaming movies on the subway. That’s some hard as old toenails stuff–next time you have to wait a few minutes for Netflix to buffer try to have some sympathy for the engineering genius who got that junk working. Easiest? Maybe working with the Max API. At a deep level Max is powered by a very solid, cross platform core that the other geniuses at C74 were nice enough to write for me. Having that available made programming Mira on the Max end much, much easier. But as for which one is more important, I’d say that robust and zero configuration networking is about 60% of what makes Mira cool. It’s the most important part of the app by far.
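The zero-configuration networking Sam calls 60% of what makes Mira cool usually starts with service discovery: the desktop announces itself on the local network (mDNS/Bonjour in real deployments) so the tablet can find it without any manual IP entry. As a rough sketch of the announcement side — a hypothetical beacon format of my own, not Mira’s real discovery protocol — the core is just building and validating a small self-identifying payload:

```python
import json

SERVICE_NAME = "mira-demo"   # hypothetical service identifier

def make_beacon(host, port, version):
    """Build the payload a service would periodically broadcast
    to announce itself on the local network."""
    return json.dumps(
        {"service": SERVICE_NAME, "host": host, "port": port, "version": version}
    ).encode("utf-8")

def parse_beacon(payload):
    """Return (host, port) if the payload is one of our beacons, else None.
    A discovery listener must tolerate arbitrary foreign datagrams."""
    try:
        msg = json.loads(payload.decode("utf-8"))
    except (UnicodeDecodeError, ValueError):
        return None
    if msg.get("service") != SERVICE_NAME:
        return None
    return msg["host"], msg["port"]

# A client would listen on a UDP broadcast socket and feed each received
# datagram through parse_beacon, ignoring anything it can't recognise.
addr = parse_beacon(make_beacon("192.168.1.20", 9000, "1.0"))
```

The defensive parsing is the point: on a shared network segment the listener will see traffic from every other chatty device, which is a good part of why Sam calls automatic connection setup hard.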
D: What do you hope the dialogue will be once Mira is released? Do you anticipate it being a frontrunner of some musicians’ processes in 5 years’ time, or simply an alternative way to interact with Max?

S: When I first met with Joshua to talk about working at Cycling ’74, he told me that the best thing about Max was that it didn’t make any assumptions about what you, the artist, wanted to make with it (in the interest of full disclosure, he also said that this was the worst thing about Max). On the other hand, you have software like Live or Traktor, software that makes it very easy to do certain things by restricting what the user can do. Mira is the first software, as far as I know, that makes it easy to experiment with interface design in the same way that Max makes it easy to experiment with sound. Whether or not Mira sells a million billion copies, I am hoping that Mira will challenge the expectation that interface is something that a developer sells to you, rather than something you build yourself. If more developers start feeling the pressure to provide configurable interfaces, then I will be a happy man.
Sonoma have been a big part of the mobile music world for some time now in terms of software and hardware, so it was interesting to see their latest development into the Android world. I was able to put some questions about this to Doug Wright, Sonoma’s CEO. Here’s what he said …
What was it that helped you decide to target the Android music market with your new Android solution?
Users have been asking us for Android versions of our products since 2008. We didn’t move in that direction due to Android latency issues, and stuck with iOS exclusively. When GarageBand was released on iOS in 2011, we saw a marked reduction in our recording app sales. Android is a market that doesn’t have a recording app competitor with prime-time TV ad money. We realized we had the right team of engineers to solve the latency problem, and musicians were contacting us with more requests for Android solutions, so we got to work.
What is your view on Android as an OS in comparison to iOS?
Development is harder on Android. The platform is more fragmented. The audio APIs are not usable for our purposes, but we see that as an opportunity to make improvements.
Do you think that the market for music applications on iOS has become saturated, and where do you think it will go next?
GarageBand provides most of the features that the majority of users are looking for, and sets users’ expectations very high for a low price of $5. That said, even when FourTrack was a top 50 music app at $10, those sales alone could not fund a dedicated development team.
Why did you decide to bring a solution that would underpin other applications for Android rather than bring your own music applications to Android?
See the next question…
Do you plan to bring any music apps to Android using your own solution?
Who is your Android solution targeted at? Is it for large scale developers / manufacturers or is it aimed at the mass market including small dev teams?
Sonoma’s LLA solution needs to be loaded with the OS. In order to achieve the lowest latency, we need to work with each manufacturer to tune it for optimum performance on each device.
How do you see your Android solution benefiting the music community?
Once Sonoma’s LLA is delivered on devices, Sonoma’s SDK will make it possible for real-time audio-processing apps to run on Android.
What are your views on Android fragmentation?
Since we will know which devices have the LLA, the burden of trying to release an app for every device will be reduced. Trying to view Android as a single platform is unreasonable. It is a collection of platforms that have a common API, but radically different hardware. If 5-10 new devices are equipped with Sonoma’s LLA, then the audio industry will have a good development and sales platform.
Do you plan to continue to support and develop your AudioCopy/Paste solution in the iOS world?
Will your solution bring something like AudioCopy/Paste to Android?
What are your views on Audiobus and do you think it could have a place in Android?
We are investigating adding Audiobus to our iOS apps. Audiobus may have features that make it useful on Android too, but Sonoma’s LLA enables multiple simultaneous audio applications to process and share audio.
Are there any other mobile operating systems that you think could challenge the iOS / Android hold on the market, especially for music?
So it’ll be interesting to see how Sonoma’s Android solution gains traction in the Android world. Time will tell and I’ll be keeping a close eye on how this develops.