Let’s finish off 2016 in generative fashion … An interview with Tim from Intermorphic

I have known Intermorphic and what they do for a very long time now. In fact, almost as long as I’ve been running PalmSounds. I’ve known them in a variety of iterations (there’s more about that later), but what’s really interesting is the journey they’ve come through and where they’re going.

Tim and I actually started this interview quite a long time ago, so it’s good to finally get it out. Especially as I think we’re going to be seeing a lot more about generative music in the coming weeks.

With that, let’s get going …

PalmSounds: Tim, you’ve been involved in generative music for just over 25 years now, and that’s an amazing achievement in itself. Can we start off by talking about what it was that started you off on that journey? Can you tell us what made you want to be involved in generative music, what sparked your imagination?

TC: On my 14th birthday my parents gave me a classical guitar and I’ve played and created music ever since – that guitar was what kick-started my love affair with music making that has lasted to this day. My personal site (http://colartz.com) is now home to some of my musical creations.

Looking back, I guess the original inspiration for our approach to generative music engines came from the classic Foundation series by Isaac Asimov, which I probably started to read some time in the 70s. In it, Asimov introduced the Visi-Sonor, a device which could be manipulated by a skilled operator to create amazing multimedia entertainments combining both music and visuals. Today I imagine it might be called a kind of hyper-instrument. I thought no more of it until 1986 when, for some reason, I started deliberating about a sphere-like composition and performance instrument and madly figured it would be fun to try to create one. The slight problem was that I had no idea where to start and I certainly knew I could not do it on my own!

In 1989 I found the courage to take myself off to business school where I hoped I would learn how to start and grow a business and maybe meet some like-minded people, too. It turned out it was exactly the right thing to do and SSEYO (https://intermorphic.com/sseyo) was born soon after. Lots of ideas were swirling around at that time and it is a bit tricky to unpick what happened when and how we ended up where we have.

One of the ideas came from noticing that a ticking clock seemed to provide a focus for the mind, somehow helping it to filter out extraneous sound and foster the establishment of calm and stillness. In this situation the brain is obviously doing an incredible amount of real-time audio processing work, totally automatically. This led me to start thinking of a listener’s brain as the ultimate creative instrument. The brain is, after all, what takes music input and interprets it in the context of a unique set of historical and emotional occurrences to create an experience that is unique for every person.

Another idea came through wondering if a chance-based engine with sufficient boundaries and rules might be able to assemble something music-like and interesting enough to engage the brain and stimulate it to fill in the gaps and make interesting and serendipitous connections. I imagined a set of ball-bearings travelling down a chute, in a way akin to a Bagatelle or Pachinko game. Each time the ball-bearings made their journey, they would combine to travel a different set of paths, bouncing off each other as they went, but the available paths would be constrained by the internal physical design and boundaries of the game – an envelope of musical possibilities as it were.

At some point we did consider algorithmic composition, and even tried neural nets, etc., but we slowly realised that style-based music systems were not what we wanted to build – we would leave that to others to explore. The path that just felt right for us was in designing engines that used stochastic/aleatoric techniques to compose within some overall boundaries and then letting the listener’s brain do the rest. We did not yet understand the pivotal role that the sounds themselves would have to play, but that would come later.
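To make that concrete, here is a minimal sketch of stochastic composition within boundaries – a hypothetical illustration only, not our actual engine, with every pitch set, weight and constraint chosen purely for the example. Each run produces a different phrase, yet every phrase stays inside the “envelope” the rules define, just like the ball-bearings and their chute:

```python
import random

SCALE = [60, 62, 63, 65, 67, 70]   # an illustrative pitch set (MIDI note numbers)
DURATIONS = [0.5, 1.0, 2.0]        # allowed note lengths, in beats
MAX_LEAP = 2                       # boundary: melodic movement in scale steps

def generate_phrase(length=16, seed=None):
    """Compose a phrase by constrained chance: random steps along the scale."""
    rng = random.Random(seed)
    degree = rng.randrange(len(SCALE))
    phrase = []
    for _ in range(length):
        # a random bounce, but only along paths the boundaries allow
        degree = min(max(degree + rng.randint(-MAX_LEAP, MAX_LEAP), 0),
                     len(SCALE) - 1)
        # durations are weighted choices, not fixed patterns
        duration = rng.choices(DURATIONS, weights=[4, 3, 1])[0]
        phrase.append((SCALE[degree], duration))
    return phrase

if __name__ == "__main__":
    for note, beats in generate_phrase(seed=7):
        print(f"note {note}, {beats} beats")
```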

We had to start somewhere, so we kicked off with a concept we called an “Environmental Management System”. It was meant to be a device that would act as a kind of sonic buffer to a world of external sensory “overload”, working along the lines of what was discussed above. However, to get anything working we needed a suitable music engine! After a few initial explorations it did not take long to realise the real key to any musical device or instrument would be in the software smarts that powered it, so we started in earnest developing what was to become the SSEYO Koan Music Engine, releasing the first beta in 1992. And, of course, that was the start of the journey we have been on since.

PalmSounds: What was the reaction like to the SSEYO Koan music engine, and how did you deal with what users wanted?

TC: Over the years we have been very lucky to have worked with many really helpful beta testers, early adopters and musicians, including the genius Brian Eno whose “Generative Music 1” with SSEYO Koan software (https://intermorphic.com/sseyo/koan/generativemusic1) definitely upped our profile and helped popularise the SSEYO Koan system. We love communicating with our users, and, as there are only two of us again, we reserve an especial place in our hearts for those that can also be constructively critical :).

The reaction back in 1992, when we got out the first betas of SSEYO Koan (https://intermorphic.com/sseyo/koan), was very encouraging so I guess we decided it was worth continuing. I seem to recall that people felt that the output could be interesting and felt organic; this was promising and indicated we might be doing something right.

We then just got on and did the things we wanted to do, listening to feedback along the way. Early development of a system like Koan was quite open-ended in scope so we were always short of time (still so very true today). We experimented with all sorts of ideas, including music rules which changed or became active according to the time of day. Our approach just meant we tended to home in on feedback concerning bugs and features that just didn’t work quite right or that could work better. Of course, over time we picked up a number of helpful suggestions concerning potential improvements to what we were doing, but as it is now so long ago I just cannot recall specifics.
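The kernel of that time-of-day idea is simple to sketch – what follows is a hypothetical illustration, not our actual Koan rule format, with the rule names and windows invented for the example:

```python
from datetime import datetime

# each rule carries an active window; the engine only composes with
# rules whose window contains the current hour
RULES = [
    {"name": "dawn pads",    "hours": range(5, 9),   "tempo": 60},
    {"name": "day rhythms",  "hours": range(9, 18),  "tempo": 100},
    {"name": "night drift",  "hours": range(18, 24), "tempo": 70},
]

def active_rules(now=None):
    hour = (now or datetime.now()).hour
    return [r for r in RULES if hour in r["hours"]]

print([r["name"] for r in active_rules()])
```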

We have always had a long, long list of related things *we* want/plan to do (time willing) and a pretty clear idea of where we want to go, so we don’t tend to look around much at what others are doing. This is no doubt a failing of ours, but, as people are now finding with Facebook, it is easy to get envious of what others are doing – we would rather just get on with what we want to do. That doesn’t mean we do not read, listen to and sometimes take on board feedback or keep a loose eye on the market and market developments; it just means that reported “crashers” and “bugs” tend to get the most attention.

By way of analogy, we tend to feel like we are in a small boat on a big sea. For us to have any chance at all of surviving the storms and the swell until reaching the next port we have to make sure we are taking account of prevailing conditions or those we can see coming. That means we are always re-checking the horizon and adjusting our jib. This in turn affects the order of things on the long list of things to do. You can imagine, therefore, that getting new stuff on to that list can be quite tricky, which is why we often find it easier to deal with feedback that matches up with something already on it!

PalmSounds: After SSEYO was acquired by the Tao Group things didn’t go too well. What happened, and how did that impact your work and how you felt about it?

TC: I tend to believe that there are always two sides to every story, that if you are trying to make the future then time is too short to dwell unduly on the past, and that nothing worth doing is ever going to be easy. It is the nature of life that it exists only now, so rather than looking back I prefer instead to try to find the good in any situation and then move forward from there. It is a positive approach because to build anything you need to stay positive.

In many ways things did actually go well for us at Tao Group and we got to work with many really nice, very clever people.

We were able to keep a smart audio team together and undertake the challenge of developing the Tao intent Sound System (iSS). The iSS was a collection of audio technologies that was deployed as required in the Tao intent software platform, a true multi-threaded operating system and virtual machine primarily for mobile devices. Tao Group had a stellar set of investors and Tao intent was licensed to a number of big-name manufacturers.

It was through what we did on the iSS and our employment at Tao Group that we got to experience the mobile and device world from the inside out. This was to our long term benefit as we got to understand more about the mobile ecosystem, its workings and relationships. Plus, we made some good friends and connections, too!

Whilst at Tao we also worked on two “products” that were released. The first was the Advanced Polyphonic Ringtone Engine, and the second was SSEYO miniMIXA. Launched in 2004, SSEYO miniMIXA was, for its time, a very advanced mobile music mixer app that ran on Symbian, Windows Mobile and intent (and wherever that could be deployed).

Although for financial reasons it was a dreadful day for all involved when Tao Group folded in 2007, it also finally set us free to start over with Intermorphic: a door was opened that let us set out once again on our generative journey, but from a new vantage point.

In 2008 Intermorphic managed to secure the rights to the iSS, SSEYO miniMIXA and all past SSEYO IP. Much of what we did at Tao is now in the Intermorphic Sound System. The ISS has been put to good use and underpins all our current apps, including Mixtikl (https://intermorphic.com/mixtikl) which we evolved from SSEYO miniMIXA.

So, just as without the SSEYO years there would not have been the Tao years, without the Tao years there would not have been the Intermorphic years – that is, in terms of how they have turned out, of course. There would still have been Intermorphic years, but just different years with different generative apps.

Of course, running a small, niche software business is always going to be hard work and require heaps of passion and bags of luck. We are truly grateful to and for the amazing people we have worked with along the way and appreciate how fortunate we are to still be standing. Onwards!

PalmSounds: So, that brings us up to the present day, or rather, up to the Intermorphic years. How have things been different for you now that you make the decisions on direction and new products? Has it been a creative release, or do you think that being part of a commercial environment with all its pressures actually helps the creative choices you make?

TC: When we first started SSEYO we were like a young new band; we had total creative control over what we did and bags of energy and time. We were in a niche area and as SSEYO developed we had to find a way to feed it – which at the time meant Venture Capital (VC). So, when we raised VC there rightly came with it some big, new and important VC-related factors to address. Then, after SSEYO was acquired by another VC-backed company there were yet more VC-related factors, and we were then of course working for someone else with their own requirements and direction.

When we took the decision to start Intermorphic it was to regain creative control over “something” and we knew we would, as before, have to live on our wits and organic growth. It was different with Intermorphic, though, as we were older, with families, and had less time and more pressures. As well as that there were some tight resource constraints. All of these continue to this day to bring their own challenges.

Creative endeavour, if it is to be sustained, doesn’t exist in a vacuum: it has to be fed and supported. A lot of our creative effort at Intermorphic, as at SSEYO, goes into trying to figure out how to survive. There is always a lot of tyre-kicking and iterative thinking around what we could and even should do in the context of our audience and our resources, which have always been very constrained. In the light of that, in the early days of Intermorphic the first app we built was a cut-up text generator desktop app called “Liptikl” (https://intermorphic.com/liptikl), a useful non-music tool which was spurred by my having trouble coming up with new word associations/lyrics for my songs. Once that was out of the way we decided we were ready to start work on a clean-room build of a generative music engine we called the Noatikl Music Engine (NME) (https://intermorphic.com/nme/3/guide), together with a generative music composer desktop app and interface to it that we called “Noatikl” (https://intermorphic.com/noatikl).

Those early days for Intermorphic feel like a long time ago; today there is way more competition and ever more user expectation as the app market is now global, and there are also many more business models to consider and factors to deal with – all generally set against diminishing returns. It is a perfect storm in many ways, and whilst things are whirling around us it can often seem as if we are moving very slowly. But, strangely, this situation is also a stimulus, in that it forces us to focus on what we do best, what our USPs are and what it is we can do that can bring real value to our customers and fans.

So, it is a bit of a dichotomy really. On one hand we have more creative freedom (no one now tells us what to do) but on the other we have less creative freedom (surviving in the current app marketplace). It requires a good deal of effort to counter inertia and to keep innovating, but we know that nothing stands still and it is in our DNA to innovate, whether that is through new features for existing apps or new apps altogether. In many ways, the biggest creative challenge of all is to discover and agree on interesting green pastures to farm and explore that fit our current resource profile. In our new developments we try to keep a sense of our history, identity, domain, audience and competitive environment and look to find creative ways to combine our existing code base with new ideas, novel approaches and simple interfaces. We are excited therefore about the potential for our Wotja Reflective Music System (https://intermorphic.com/wotja), first released in 2014, which ticks all the boxes for us, and it is an app that we want to do much more with. More on that in a bit.

PalmSounds: People find generative music difficult to get to grips with. Why do you think that is, where do you see tool makers like yourselves going in the future to help, and what is it about generative music that you think people struggle with?

TC: I think to get some perspective on this it helps to consider Computer Generated Music (CGM) in general. By this I mean music that a computer generates, composes or mixes itself, whether by rules, chance, algorithms, AI or whatever. It could be assembled from any size of building block, from a sample to a loop to a recording, whether made by human, animal, nature or computer. Unless a human has a clear hand in controlling, directing and imbuing it with meaning, to my way of thinking any of the foregoing is Computer Generated Music. I am sure it is not a perfect description, but it will suffice for now and in the context of my thinking outlined here.

Over the years I have enjoyed hearing CGM, principally the Generative Music that our systems have generated, and I have often pondered on the nature of it. My thoughts change and have kept changing, and I am not sure there are any hard and fast answers as it depends on the person and a number of factors. I think some of the (by no means exclusive) factors to consider relate to how a creator’s or listener’s tastes or needs vary for the following:  1) Compositional control (e.g. what control do I need to have over what happens?); 2) Intrinsic meaning (e.g. do I need it to have any emotional, personal meaning?); 3) Context (e.g. do I need the music somehow fit the context of where/when/how it is experienced?); 4) Musical style & structure (e.g. do I need it to have a feeling of style or for it to have a directional vector?); 5) Foreground/Background music (e.g. do I want to actively listen to it); 6) Interactivity (e.g. do I need to be created live, in the moment; as in “inmo” (https://intermorphic.com/inmo) ?); 7) Sounds used (e.g. do I want natural or synthetic sounds, created live or sampled?); 8) Ideas (e.g. am I wanting to use it to create ideas for me to later use?); 9) Ease of creation (e.g. how much time do I need to set aside to make it; how hard is it to do well?); 10) Sharing (e.g. as a creator, do I want to share what I have made?); 11) Turing test (e.g. can you be certain it was/was not created by computer?) etc.

Every creator/listener will have different needs and taste profiles for all of the above, and these are likely to change according to mood, context, day, season, etc. So, as you can imagine, asking why “people” find generative music difficult to get to grips with is a difficult one to answer; the answer is a very personal one.

As I said in a previous answer, we are primarily concerned with music that is created stochastically/aleatorically. It might be easy to think that this kind of music can have little emotional impact on a listener. However, I am reminded of a time, back I think in 1996, when I was in the zone listening to Timothy Didymus’ “Float” (https://www.intermorphic.com/sseyo/koan/float – Timothy is an amazing generative musician who we still have the honour of working with). I think I was listening to “Midheaven” at the time (https://www.intermorphic.com/sseyo/koan/float/#float-audio) and I distinctly recall entering some kind of quasi-state where I felt I could “taste” the music; it was a remarkable and moving experience, I think related to the subtle changes in the music. As far as I know it was something peculiar to me and only happened the once, and I have not experienced it with other non-generative music either, but it emphasised to me the personal impact that CGM could have in the right context.

In the context of the music our engines make I have been pondering quite a bit about “Intrinsic meaning”. It seems to me that everyone has a “music player” in their head. However, experiences and appreciation can be different as each person’s “music player” has associations specially keyed to their age, culture, demographic, personal/shared memories (e.g. a concert) etc. And, even then, to really appreciate music, repeated listenings are often required. I love playing my guitar and writing music and, at least to me as a creator and songwriter, there is a fair amount of meaning in what I make – my music is distilled from my thoughts and emotions and is played with passion. However, if I share a recording of my music for someone to listen to, then what meaning can they, as a listener, extract from my recording (listening to it at a distance as it were)? They do not have my memories or context to unlock or quickly interpret it, to hear it as I hear it. I find it interesting to then consider CGM in this kind of sharing context, and it raises all kinds of questions related to meaning.

One other factor that plays a part in all this deliberation is that over the years we came to understand that generative music played against an image seemed to elicit a deeper reflection on the image, unlocking thoughts and memories. As a result, way back in 2010 we had decided to run with the term “Reflective Music” as a descriptor for the effect that the output of our generative music system could engender, and secured the reflectivemusic.com domain. In hindsight it was a good move, as the descriptor was to become even more apt…

So, trying to make sense of it all, I stood right back and got to thinking about text. Everyone has both a text player and a text writer in their head (language aside), so people can both easily and quickly create it and assimilate/understand it. Text has meaning and a reader’s imagination puts flesh on the bones; in a musical analogy it is a bit like a visual MIDI score played through different MIDI synths. Unlike music, though, text is not particularly temporal and you can also quickly change it or respond to it. We figured it might be fun to play around with a kind of music messaging where a creator’s text could be used to convey any meaning required and the text itself could be used by our engines, as a seed, to generate melodies/music as an accompaniment to it. This is the general idea we are currently exploring with the Wotja Reflective Music System (https://wotja.com). It quickly became clear that “Reflective Music” was the perfect descriptor because the meaning of the text can be reflected upon and the generated melody is, in turn, a reflection of the text.
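The kernel of that seed idea is easy to sketch. What follows is a hypothetical illustration only – Wotja’s actual TTM mapping is more sophisticated – but it shows how a message’s characters might deterministically drive a melodic contour, so that the melody is literally a reflection of the text:

```python
SCALE = [60, 62, 64, 67, 69]  # C major pentatonic (MIDI note numbers)

def text_to_melody(text):
    """Map each letter to a fixed scale degree; spaces become rests."""
    melody = []
    for ch in text.lower():
        if ch.isalpha():
            # the same text always yields the same melodic contour
            melody.append(SCALE[(ord(ch) - ord('a')) % len(SCALE)])
        elif ch == ' ':
            melody.append(None)  # a rest between words
    return melody

print(text_to_melody("reflective music"))
```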

PalmSounds: Can you talk about how your products have evolved in the time you’ve worked with generative music?

TC: We started out by building content-open and content-extendable creativity tools that let others make and record things of their own, to use however or wherever they wish. Even after all this time we see no reason to change our direction and, besides, we really enjoy seeing and hearing what other people can create with our tools and apps!

In the intervening years many, many external factors have changed, none more important than the emerging importance and capability of mobile devices. We first started work in mobile in 1998, in “Mobile Music Making” as it were, so mobile thinking has been in our DNA for many, many years.

However, we really started thinking in earnest about “mobile first” back in the noughties, sometime before I started claiming in 2004 (with good reason, and prior to the release of SSEYO miniMIXA), that “the mobile phone is the next electric guitar”.

There have been lots of stops, starts and dead ends along the way for us, and it took us a long time to get there, but we now think mobile first for everything and have done so for some time now – be that apps, business models, websites or anything else. They all have to work and work well on mobile; everything then scales up from there.

Part of the issue with mobile is that it is mobile – screen size, performance, input mechanisms, app interoperability, UIs, usability, complexity, operating system, sharing, social, marketplace, business models etc. all bring constraints and opportunities. All of these have to feed into our app planning, design and thinking as we evolve our apps.

One of the big things we learnt very early on, and alluded to earlier, was that it was all very well having a powerful generative music engine, such as our Noatikl Music Engine (NME), but if you want someone else to hear and experience the generative music you have made in the same way, then the music has to be *portable*, i.e. there has to be a player for any desired listening device. That means you generally need to: A) restrict your compositions to rely on the use of audio samples (these are big and have issues if you want to share them, so are generally inbuilt), maybe with some pitch shifting or post-FX; B) build or license and include some kind of good quality, flexible, MIDI-like modular synth sound engine that allows real-time polyphonic sound generation with sound shaping; or C) use some combination of both.

We chose route C), so over the years we have spent a good deal of time evolving our integral Partikl Sound Engine (PSE) (https://intermorphic.com/pse/3/guide). This is now a powerful and customisable modular/SF2 MIDI synth with live FX that is included in all our music apps, and we recently updated it to allow stereo synth sound design. Other than trying as best we can to keep up with advances in sound generation, in so far as it is relevant to portable generative music of course, the main problem with something this flexible relates to the interfaces you choose to provide to it to allow complex sound design, especially on mobile, and how accessible they can be. This is an area we have made some progress in, but we still have a long way to go and are still mulling over exactly what to do next.
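For a feel of what route B) means at its very simplest, here is a minimal sketch – with all numbers and names my own assumptions, nothing like the real PSE – of why a synth engine keeps music portable: the notes travel as a few bytes of MIDI-like event data and the audio itself is synthesised on the listening device:

```python
import math

SAMPLE_RATE = 44100

def render_note(midi_note, seconds, amplitude=0.3):
    """Synthesise one note: a sine wave with a simple exponential decay."""
    freq = 440.0 * 2 ** ((midi_note - 69) / 12)   # MIDI note number -> Hz
    n = int(seconds * SAMPLE_RATE)
    return [amplitude * math.exp(-3.0 * i / n) *
            math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
            for i in range(n)]

def render_phrase(events):
    """events: (midi_note, seconds) pairs, e.g. from a generative engine."""
    samples = []
    for note, duration in events:
        samples.extend(render_note(note, duration))
    return samples

audio = render_phrase([(60, 0.5), (63, 0.5), (67, 1.0)])
print(f"{len(audio)} samples rendered from 3 tiny note events")
```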

Aside from the ongoing development of our essential core technologies such as the NME and PSE, we have found it is the mobile factors that have shaped our thinking the most with respect to our products over the last few years. So, although we started out in 2007 with desktop versions of Noatikl and Liptikl, we now have mobile versions of both. Mixtikl is the evolution of SSEYO miniMIXA, and at the time of Mixtikl’s first release (2008) we built it with a scalable XML front end so the same UI would work on mobile (Pocket PC) and desktop (Windows and Mac), meaning it started out as a hybrid. Apps that came after that, such as the Tiklbox Ambient Generative Music Player (https://intermorphic.com/tiklbox), have been totally mobile-centric designs, using native controls where we can, and we are looking to develop desktop versions of them as we adopt Swift where we can.

Business models really do have a major impact on what developers can do and how it is done, so we experimented with many different approaches trying to find the right balance for our offerings. After much trial and error over a few years we have now pretty well settled on a model that best allows us to move forwards. We have built Wotja to be not only a creativity tool, but also a generative music publishing system in its own right, in that totally custom Noatikl pieces can be imported into Wotja, saved or exported as wotjas, and then played, for free, in Wotja.

It is not easy these days surviving as a niche app developer, what with VC money continuing to pump into apps resulting in market consolidations, a global app market with ever more noise, shifting customer expectations and everyone suffering increasing pressures on their time. However, we love doing what we do, we love our customers, fans and friends and, for those interested in the areas we work in, we expect to continue trying to innovate in our own niches and even to find a few new ones!

That’s the end of the interview that Tim and I worked through earlier in the year. To finish off this post Tim has added a few words about what’s happening now at Intermorphic.

A lot has happened at Intermorphic since we started this interview. We have some exciting new developments in the works which are not far off now, hence this “end of year” addendum.

In the last year we focused hard on stability and released a number of related updates for Noatikl 3 and Mixtikl 7. We wanted to be in a good position for the long journey to our next major milestone in Reflective Music, namely Wotja 4. This has been a major undertaking for us as we are gradually consolidating the best of Noatikl 3, Mixtikl 7, Tiklbox 1 and Wotja 3 into one app, Wotja 4. We expect to release the first iOS version and macOS Safari App Extension in January 2017 followed by desktop versions sometime later in the year.

Wotja 4.0 is actually the start of a whole new journey for us, a new stage of evolution if you like, and there is a lot to be excited about. Being able to focus on just one reflective AND generative music app with full editing means we will be able to move it forward faster with improvements and extensions and it also means we are better able to evolve our music and sound engines. To that end the Noatikl Music Engine is evolving into the Intermorphic Music Engine (IME) and the Partikl Sound Engine is evolving into the Intermorphic Sound Engine (ISE). Both of these engines are at the heart of the new Wotja Reflective & Generative Music System. Of course we will also be extending the Intermorphic Text Engine, too, in due course.

Although we don’t feel it appropriate this close to release to pre-announce the details of Wotja 4 :), one thing we can say is that the IME in it will itself support multiple Text To Music (TTM) Pattern voices to allow the creation of richer reflective music tapestries that can sit on a gorgeous bed of generative music – with everything being deeply and totally customisable. TTM has been in Wotja since its first release, but prior to Wotja 4 it was not in the engine itself so only one was allowed in a wotja – no longer!

If you have read this far then many thanks for your interest in what we are doing and we hope that you might decide to get and try out Wotja 4.

Especial thanks to Ashley at PalmSounds who does more than anyone else we know – and we have no idea how he does it unless there are 10 of him.

Our season’s wishes to all! Intermorphic.

So that’s all for 2016! Have a lovely New Year and we’ll be back again in a day or so.

I don’t know what your favourite apps of the year were, but here are the ones that matter to me

I always think it’s difficult to tell you which were the best apps in any year, and 2016 is no different at all. What works for me as a great app won’t work for other people and vice-versa, so it all seems a little pointless. However, what I can tell you is which apps were important to me this year. I think that might be more interesting (or maybe not), and it’s certainly easier to do from my perspective.

So without further messing around, here are the apps that I used a lot, or found intriguing, or for whatever other reason, mattered.

1. Auxy

Without a doubt Auxy is an app that I can’t do without, at least not currently anyway. I really love it. It works for me and just fits with how I think and work right now. I’m not saying that this will always be the case, but for now me and Auxy, we’re good. I also really like the sound packs that they’ve been releasing. I got them both and love them.

2. Model 15

Moog’s Model 15 is on my list for a totally different reason than Auxy is. Model 15 is here because it’s one of those apps that I keep fiddling with and getting into and then leaving for a bit, then coming back to. I don’t know if you do that, but I certainly do. I like Model 15 and I’d really like to do something useful with it, but so far I haven’t. Who knows, maybe in 2017 I will.

3. NOIZ (and KRFT)

NOIZ you’ll know from Studio Amplify. It’s a great app for making stuff even if you’ve no idea how to make stuff, and I’m all for that. Of course the nice chaps from Studio Amplify now have KRFT in beta and I’ve been playing with that recently. It is going to be awesome. I mentioned it not so long ago here, and I’m hoping to be able to tell you lots more soon enough.

I think that these apps are going to have a really bright future and are going to help users to make things in ways that they hadn’t thought about before.

4. frekvens

I’m a fan of Mr HumbleTune’s apps, music, and design style. I think it’s great, and for good reason. His apps are amazing, and pretty much everywhere too. I really like two of them though: nils and frekvens. They really let you mangle sound, but in a good way, in a way that doesn’t hurt. I’m sure that other people find themselves coming back to the same FX apps over and over, and frekvens is one of those for me.

5. All things Korg

I can’t help myself but say that I do love Korg’s apps. They’ve done well this year. We’ve had good updates and new apps like ODYSSEi and iWAVESTATION. My personal favs are Gadget and iDS-10 though. Again I find myself coming back to these time and time again. I bet some of you do too.

6. AC Sabre

I think that AC Sabre has been a bit overlooked and that’s a shame. It’s an amazing gestural performance tool for the iPhone and hasn’t really had the attention it should have had. I’d like to do a bit more with it myself next year as I think I’ve only barely scratched the surface of what it can do for me.

7. ROTOR

I posted on ROTOR and the tangible controllers yesterday, but it also deserves a mention here. I like modular apps but ROTOR (and Reactable mobile before it) seem to provide a more accessible route into modular than a lot of other apps in that genre. Now that ROTOR has the tangible controllers with it I’m hoping to get a bit more time to devote to it soon.

8. Fluxpad

Unusual apps and alternative interfaces are very important to me. So Fluxpad is assured a place in my list. It gives you a different way to interface with sound and that in itself is important. I like that Fluxpad is playful and easy to use and yet at the same time a highly capable and flexible app for manipulating samples.

9. Cubasis

There had to be a DAW in the list and it’s Cubasis 2.0. It’s been a big help to me on a project that I’m working on so it’s in my list. However, there was stiff competition from n-Track Studio 8 which arrived quite recently. It will be interesting to see how some of the big, and one or two little, DAWs survive in 2017.

10. Patterning

I love drum apps. Patterning is another app that just fits with my workflow. It’s just intuitive and fluid and it makes perfect sense to me. I can’t say that about all drum apps I’m afraid, but Patterning is probably one of the few go-to drum apps that stays on my iPad. I’d love there to be an iPhone version too.

11. Wotja

You might find this one a little strange, but more will become apparent soon. For now I’ll tell you that I love Wotja’s ability to create an ambient soundscape from a few words. It’s simple to tailor and tweak to do exactly what you want too.

I’ve also found myself coming back to Mixtikl recently and really getting into that app again. I think that these generative technologies are so deep that it can be easy to get lost. However, I think it’s worth it to dive in and explore and I’d like to do more of that in 2017 with all of Intermorphic’s tools.

12. Skram

Last and by no means least is Skram from Liine. I’m a fan of apps that make the process of creating music simpler and more immediate. To me that’s really important. I thought Skram was great when it first came out and the latest update has made it even more usable. I hope that it keeps going and brings more and more people into making music, and I’d really like to see an iPhone version of it too.

So that’s 12 apps (more if I’m honest) that mattered to me and continue to do so. I hope you found that interesting. Feel free to ask any questions in the comments.

A first impression … ROTOR’s tangible controllers


Reactable have a long history in creating innovative musical instruments, starting out with their original Reactable, moving to Reactable Mobile, and now with ROTOR and their accompanying controllers.

Of course tangible controllers for an iPad aren’t actually a new thing. In fact, two years ago Tuna DJ brought out their control knobs (you can see them here in this post). Enough of those for the moment.

When Reactable brought out their first mobile app it was a very different beast to the other modular apps around at the time. When they recently followed up with their new ROTOR app it was another big step, but not just a software one: they aimed to provide users with an experience that is somewhere in between using the full hardware version of the Reactable and an iPad app.

So the real question is, have they succeeded?

I’d say yes. In many ways. However, I’d also say that this is not a perfect solution, and if that’s what you’re seeking then you’re almost certainly looking in the wrong place. Before deciding whether the ROTOR tangible controllers are for you or not it’s worth understanding what to compare them against. A brand new Reactable will currently cost you 5900€ (that’s with an 800€ discount). A set of ROTOR controllers will set you back 39.90€, which is about 0.7% of the cost of a full Reactable. In my mind that’s a pretty good deal.

Personally, whilst I’d love to spend some time playing with a full Reactable, I’m more than satisfied with the new ROTOR controllers. I think that they represent excellent value for money.

Let’s move on to how they work and what you can do with them

I’ll start by saying that I think that the presentation of these is lovely. They come in a nice little round tin and are cushioned in foam. In my view presentation is important, and even though you’ve only paid less than 1% of the cost of a Reactable I still think that the whole experience is important.

When you get the controllers out they’re simple things, which initially made me wonder if they’d work at all. However, placing them on the ROTOR app, they work immediately. They will control any on screen ROTOR object.

One thing that quickly became apparent was that to use these controllers you absolutely need a flat surface to work from. Whilst I’ve not tried using these in a mobile environment (and by that I mean on a bus or a train), I’m fairly sure that they’re not going to perform at their best. Having said that, for indoor, flat surface use, they work better than you might expect.

But they are not perfect. And I think that it would be wrong to think that these little devices could be. They will slip, and can change from controlling an object on screen to moving it around. I think that with practice I could limit a lot of that slippage on the screen and end up being quite deft with these, but that would take a little time, and would be time well spent.

A quick try with the old Tuna DJ knobs

As I had mentioned them earlier I thought I’d give these older knobs a try out on the ROTOR app. Sadly they didn’t work at all, which reminded me that I’d had trouble getting them to work originally. I can’t remember how much they cost so I can’t compare them to the ROTOR controllers.

My verdict …

If you’re looking for an inexpensive way to get an experience a little more like the full scale Reactable then the ROTOR controllers are worth it in my view as they cost less than 1% of the full device, and with a little time and practice I think they’ll be really useful.

If you think you’re going to get that full experience for 39€ then that’s a bit unrealistic and you probably shouldn’t bother.

Reactable’s controllers are on sale here, as is the ROTOR app itself:

Quantum VJ HD arrives from Mr NightRadio

I’m a long time fan of all things from Mr NightRadio, from apps to hardware and on an esoteric variety of platforms too. Now he’s brought something from his hardware creations into the world of apps, which is an interesting move.

Quantum VJ HD is a simple glitch-style audio visualizer (video generator). It can receive sound from the microphone or from the Line-in port (depending on the system settings). Sound is converted to graphic elements byte by byte. The final video can be mixed with the camera stream in real time.

iTunes File Sharing and Wi-Fi can be used if you want to upload your own codeset images to the app.

  • Multitouch control – pair of parameters for each new touch
    • 1st Touch – changing the Mode (horizontally) and the Power (vertically) parameters.
    • 2nd Touch – changing the Color (horizontally) and the Noise (vertically) parameters.
    • 3rd Touch – changing the Camera (horizontally) and the Resolution (vertically) parameters.
    • 4th Touch – changing the Brightness (horizontally) and the Speed (vertically) parameters.
  • Press on the top left corner to hide/show the control panel (fullscreen mode ON/OFF).

Video export is temporarily not available in the iOS version.

Quantum VJ HD costs $1.99 (£1.49) on the app store now:

LP-5 – Loop-based Music Sequencer v3.0 arrives with loop recording and more

A great update for this loop based sequencer, making it much more powerful and useful. Here’s what’s new:

  • Loop recording from hardware and other apps
  • Time stretching by Superpowered
  • 40 scenes per set
  • iPad Slide Over and Split View Support
  • Audiobus I/O
  • LinkKit 2.0
  • Bug fixes and improvements

SpaceVibe 2.20 brings MIDI and more

I like apps like SpaceVibe as they’re just a bit different, so it’s good to see SpaceVibe get updates, especially like this one bringing a wealth of new MIDI features.

Here’s what’s new:

  • MIDI !!!
  • Added support for MIDI Hardware and Software devices.
  • Play SpaceVibe with a bluetooth controller, Virtual MIDI or use a MIDI interface to connect 5-Pin MIDI equipment.
  • Map your hardware controller knobs and buttons directly to SpaceVibe.
  • Mappings are saved and loaded automatically for every hardware device.
  • Added an option in Settings for hearing Noise only when touching the Noise Pad.

A free MIDI utility arrives to help with monitoring

This looks quite useful, and is a universal app too so should come in handy in all environments. I haven’t checked to see if it’s ad-supported or has IAPs, so if you check it out please let me know.

Here’s the app description:

MIDI Scope allows you to monitor MIDI activity using your iOS device. Features include the ability to filter events by individual type or by group as well as MIDI Channel. A comprehensive interpretation of each event packet is displayed. Raw packet data is shown in both decimal and hexadecimal formats. Please send feature requests and trouble reports to the support email listed on our web site.
