
PixiTracker 1.5 is here (iOS will follow soon)

On the ‘Voices of the Sun’ blog there’s news of a new version of PixiTracker. Version 1.5 arrives for Raspberry Pi, PocketCHIP and many more platforms. iOS will come soon.

Here’s what’s new and what platforms are covered:

  • export to XM (FastTracker 2’s eXtended Module format) – this file can be loaded by any modern music tracker/player (for example, SunVox); a quick header check is sketched after this list;
  • sound editor: “reduce size / 2 (lossy)” function has been added;
  • Linux: ARM64 architecture support (tested on PINE64 64-bit Single Board Computer);
  • Linux: Raspberry Pi, PocketCHIP and other ARM(armhf)-compatible devices support;
  • Linux and Windows: multitouch support;
  • Android (4.0 and higher): System Settings -> Interface: new option “Hide system bars” for true fullscreen mode;
  • new sound packs (16bit): pack10_fm, pack11_percussion_2, pack12_orchestra;
  • new song (16bit) – Example12;
  • bugs fixed.
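For anyone curious about what that XM export actually produces: the XM header layout is well documented, so a quick sanity check of an exported file can be sketched in a few lines of Python. This is just an illustration of the published header fields; the filename is hypothetical.

```python
import struct

def check_xm_header(path):
    """Sanity-check the fixed 60-byte header of a FastTracker 2 XM file."""
    with open(path, "rb") as f:
        header = f.read(60)
    # Per the XM spec: 17-byte ID text, 20-byte module name, a 0x1A byte,
    # 20-byte tracker name, then a 2-byte little-endian version number.
    if len(header) < 60 or header[:17] != b"Extended Module: ":
        raise ValueError("not an XM file")
    name = header[17:37].rstrip(b"\x00 ").decode("ascii", "replace")
    tracker = header[38:58].rstrip(b"\x00 ").decode("ascii", "replace")
    (version,) = struct.unpack("<H", header[58:60])
    return {"name": name, "tracker": tracker,
            "version": f"{version >> 8}.{version & 0xFF:02d}"}

# "song.xm" is a hypothetical filename for illustration.
print(check_xm_header("song.xm"))
```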

More information can be found here and you can check out the Voices of the Sun blog here.


Let’s finish off 2016 in generative fashion … An interview with Tim from Intermorphic

I have known Intermorphic and what they do for a very long time now. In fact, almost as long as I’ve been running PalmSounds. I’ve known them in a variety of iterations (more about that later), but what’s really interesting is the journey they’ve come through and where they’re going.

Tim and I actually started this interview quite a long time ago, so it’s good to finally get it out. Especially as I think we’re going to be seeing a lot more about generative music in the coming weeks.

With that, let’s get going …

PalmSounds: Tim, you’ve been involved in Generative music for just over 25 years now and that’s an amazing achievement in itself. Can we start off by talking about what it was that started you off on that journey? Can you tell us what made you want to be involved in generative, what sparked your imagination?

TC: On my 14th birthday my parents gave me a classical guitar and I’ve played and created music ever since – that guitar was what kick-started my love affair with music making that has lasted to this day. My personal site (http://colartz.com) is now home to some of my musical creations.

Looking back, I guess the original inspiration for our approach to generative music engines came from the classic Foundation series by Isaac Asimov, which I probably started to read some time in the 70s. In it, Asimov introduced the Visi-Sonor, a device which could be manipulated by a skilled operator to create amazing multimedia entertainments combining both music and visuals. Today I imagine it might be called a kind of hyper-instrument. I thought no more of it until 1986 when, for some reason, I started deliberating about a sphere-like composition and performance instrument and madly figured it would be fun to try to create one. The slight problem was that I had no idea where to start and I certainly knew I could not do it on my own!

In 1989 I found the courage to take myself off to business school where I hoped I would learn how to start and grow a business and maybe meet some like-minded people, too. It turned out it was exactly the right thing to do and SSEYO (https://intermorphic.com/sseyo) was born soon after. Lots of ideas were swirling around at that time and it is a bit tricky to unpick what happened when and how we ended up where we have.

One of the ideas came from noticing that a ticking clock seemed to provide a focus for the mind, somehow helping it to filter out extraneous sound and foster the establishment of calm and stillness. In this situation the brain is obviously doing an incredible amount of real-time audio processing work, totally automatically. This led me to start thinking of a listener’s brain as the ultimate creative instrument. The brain is, after all, what takes music input and interprets it in the context of a unique set of historical and emotional occurrences to create an experience that is unique for every person.

Another idea came through wondering if a chance-based engine with sufficient boundaries and rules might be able to assemble something music-like and interesting enough to engage the brain and stimulate it to fill in the gaps and make interesting and serendipitous connections. I imagined a set of ball-bearings traveling down a chute, in a way akin to a Bagatelle or Pachinko game. Each time the ball bearings would make their journey, they would combine to travel a different set of paths, bouncing off each other as they would go, but the available paths would be constrained by the internal physical design and boundaries of the game – an envelope of musical possibilities as it were.
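As a toy illustration of that idea (and emphatically not SSEYO’s actual engine), here is a minimal Python sketch of a chance-based melody whose random choices are bounded by a scale and a maximum step size – the “envelope of musical possibilities” in miniature:

```python
import random

def constrained_walk(scale, steps=16, max_jump=2, seed=None):
    """Chance-based melody generation: each note is picked at random,
    but only from scale degrees within max_jump of the previous one --
    the internal 'boundaries of the game'."""
    rng = random.Random(seed)
    idx = rng.randrange(len(scale))
    melody = [scale[idx]]
    for _ in range(steps - 1):
        # Only nearby scale degrees are reachable from here.
        lo, hi = max(0, idx - max_jump), min(len(scale) - 1, idx + max_jump)
        idx = rng.randint(lo, hi)
        melody.append(scale[idx])
    return melody

# C major pentatonic as MIDI note numbers (an arbitrary choice for the demo).
print(constrained_walk([60, 62, 64, 67, 69, 72], seed=42))
```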

At some point we did consider algorithmic composition, and even tried neural nets, etc., but we slowly realised that style-based music systems were not what we wanted to build – we would leave that to others to explore. The path that just felt right for us was in designing engines that used stochastic/aleatoric techniques to compose within some overall boundaries and then letting the listener’s brain do the rest. We did not yet understand the pivotal role that the sounds themselves would have to play, but that would come later.

We had to start somewhere, so we kicked off with a concept we called an “Environmental Management System”. It was meant to be a device that would act as a kind of sonic buffer to a world of external sensory “overload”, working along the lines of what was discussed above. However, to get anything working we needed a suitable music engine! After a few initial explorations it did not take long to realise the real key to any musical device or instrument would be in the software smarts that powered it, so we started in earnest developing what was to become the SSEYO Koan Music Engine, releasing the first beta in 1992. And, of course, that was the start of the journey we have been on since.

PalmSounds: What was the reaction like to the SSEYO Koan music engine, and how did you deal with what users wanted?

TC: Over the years we have been very lucky to have worked with many really helpful beta testers, early adopters and musicians, including the genius Brian Eno whose “Generative Music 1” with SSEYO Koan software (https://intermorphic.com/sseyo/koan/generativemusic1) definitely upped our profile and helped popularise the SSEYO Koan system. We love communicating with our users, and, as there are only two of us again, we reserve an especial place in our hearts for those that can also be constructively critical :).

The reaction back in 1992, when we got out the first betas of SSEYO Koan (https://intermorphic.com/sseyo/koan), was very encouraging so I guess we decided it was worth continuing. I seem to recall that people felt that the output could be interesting and felt organic; this was promising and indicated we might be doing something right.

We then just got on and did the things we wanted to do, listening to feedback along the way. Early development of a system like Koan was quite open-ended in scope so we were always short of time (still so very true today). We experimented with all sorts of ideas, including music rules which changed or became active according to the time of day. Our approach just meant we tended to home in on feedback concerning bugs and features that just didn’t work quite right or that could work better. Of course, over time we picked up a number of helpful suggestions concerning potential improvements to what we were doing, but as it is now so long ago I just cannot recall specifics.

We have always had a long, long list of related things *we* want/plan to do (time willing) and a pretty clear idea of where we want to go, so we don’t tend to look around much at what others are doing. This is no doubt a failing of ours, but, as people are now finding with Facebook, it is easy to get envious of what others are doing – we would rather just get on with what we want to do. That doesn’t mean we do not read, listen to and sometimes take on board feedback or keep a loose eye on the market and market developments; it just means that reported “crashers” and “bugs” tend to get the most attention.

By way of analogy, we tend to feel like we are in a small boat on a big sea. For us to have any chance at all of surviving the storms and the swell until reaching the next port we have to make sure we are taking account of prevailing conditions or those we can see coming. That means we are always re-checking the horizon and adjusting our jib. This in turn affects the order of things on the long list of things to do. You can imagine, therefore, that getting new stuff on to that list can be quite tricky, which is why we often find it easier to deal with feedback that matches up with something already on it!

PalmSounds: After SSEYO was acquired by the Tao Group things didn’t go too well. What happened, and how did that impact your work and how you felt about it?

TC: I tend to believe that there are always two sides to every story, that if you are trying to make the future then time is too short to dwell unduly on the past, and that nothing worth doing is ever going to be easy. It is the nature of life that it exists only now, so rather than looking back I prefer instead to try to find the good in any situation and then move forward from there. It is a positive approach because to build anything you need to stay positive.

In many ways things did actually go well for us at Tao Group and we got to work with many really nice, very clever people.

We were able to keep a smart audio team together and undertake the challenge of developing the Tao intent Sound System (iSS). The iSS was a collection of audio technologies that was deployed as required in the Tao intent software platform, a true multi-threaded operating system and virtual machine primarily for mobile devices. Tao Group had a stellar set of investors and Tao intent was licensed to a number of big name manufacturers.

It was through what we did on the iSS and our employment at Tao Group that we got to experience the mobile and device world from the inside out. This was to our long term benefit as we got to understand more about the mobile ecosystem, its workings and relationships. Plus, we made some good friends and connections, too!

Whilst at Tao we also worked on two “products” that were released. The first was the Advanced Polyphonic Ringtone Engine, and the second was SSEYO miniMIXA. Launched in 2004, SSEYO miniMIXA was, for its time, a very advanced mobile music mixer app that ran on Symbian, Windows Mobile and intent (and wherever that could be deployed).

Although for financial reasons it was a dreadful day for all involved when Tao Group folded in 2007, it also finally set us free to start over with Intermorphic: a door was opened that let us set out once again on our generative journey, but from a new vantage point.

In 2008 Intermorphic managed to secure the rights to the iSS, SSEYO miniMIXA and all past SSEYO IP. Much of what we did at Tao is now in the Intermorphic Sound System. The ISS has been put to good use and underpins all our current apps, including Mixtikl (https://intermorphic.com/mixtikl) which we evolved from SSEYO miniMIXA.

So, just as without the SSEYO years there would not have been the Tao years, without the Tao years there would not have been the Intermorphic years – that is, in terms of how they have turned out, of course. There would still have been Intermorphic years, but just different years with different generative apps.

Of course, running a small, niche software business is always going to be hard work and require heaps of passion and bags of luck. We are truly grateful to and for the amazing people we have worked with along the way and appreciate how fortunate we are to still be standing. Onwards!

PalmSounds: So, that brings us up to the present day, or rather, up to the Intermorphic years. How have things been different for you now that you make the decisions on direction and new products? Has it been a creative release, or, do you think that being part of a commercial environment with all its pressures actually helps the creative choices you make?

TC: When we first started SSEYO we were like a young new band; we had total creative control over what we did and bags of energy and time. We were in a niche area and as SSEYO developed we had to find a way to feed it – which at the time meant Venture Capital. So, when we raised Venture Capital (VC) there rightly came with it some big, new and important VC-related factors to address. Then, after SSEYO was acquired by another VC-backed company there were yet more VC-related factors and we were then of course working for someone else with their own requirements and direction.

When we took the decision to start Intermorphic it was to regain creative control over “something” and we knew we would, as before, have to live on our wits and organic growth. It was different with Intermorphic, though, as we were older, with families, and had less time and more pressures. As well as that there were some tight resource constraints. All of these continue to this day to bring their own challenges.

Creative endeavour, if it is to be sustained, doesn’t exist in a vacuum: it has to be fed and supported. A lot of our creative effort at Intermorphic, as it did at SSEYO, goes into trying to figure out how to survive. There is always a lot of tyre kicking and iterative thinking around what we could and even should do in the context of our audience and our resources, which have always been very constrained. In the light of that and in the early days of Intermorphic, the first app we built was a cut-up text generator desktop app called “Liptikl” (https://intermorphic.com/liptikl), a useful non-music tool which was spurred by me having trouble coming up with new word associations/lyrics for my songs. Once that was out of the way we decided we were ready to start work on a clean room build of a generative music engine we called the Noatikl Music Engine (NME) (https://intermorphic.com/nme/3/guide), together with a generative music composer desktop app and interface to it that we called “Noatikl” (https://intermorphic.com/noatikl).

Those early days for Intermorphic feel like a long time ago; today there is way more competition and ever more user expectation as the app market is now global, and there are also many more business models to consider and factors to deal with – all generally set against diminishing returns. It is a perfect storm in many ways, and whilst things are whirling around us it can often seem as if we are moving very slowly. But, strangely, this situation is also a stimulus, in that it forces us to focus on what we do best, what our USPs are and what it is we can do that can bring real value to our customers and fans.

So, it is a bit of a dichotomy really. On one hand we have more creative freedom (no one now tells us what to do) but on the other we have less creative freedom (surviving in the current app marketplace). It requires a good deal of effort to counter inertia and to keep innovating, but we know that nothing stands still and it is in our DNA to innovate, whether that is through new features for existing apps or new apps altogether. In many ways, the biggest creative challenge of all is to discover and agree on interesting green pastures to farm and explore that fit our current resource profile. In our new developments we try to keep a sense of our history, identity, domain, audience and competitive environment and look to find creative ways to combine our existing code base with new ideas, novel approaches and simple interfaces. We are excited, therefore, about the potential for our Wotja Reflective Music System (https://intermorphic.com/wotja), first released in 2014, which ticks all the boxes for us, and it is an app that we want to do much more with. More on that in a bit.

PalmSounds: People find generative music difficult to get to grips with, why do you think that is, and where do you see tool makers like yourselves going in the future to help this, and what is it about generative that you think people struggle with?

TC: I think to get some perspective on this it helps to consider Computer Generated Music (CGM) in general. By this I mean music that a computer generates, composes or mixes itself, whether by rules, chance, algorithms, AI or whatever. It could be assembled from any size of building block, from a sample to a loop to a recording, whether made by human, animal, nature or computer. Unless a human has a clear hand in controlling, directing and imbuing it with meaning, to my way of thinking any of the foregoing is Computer Generated Music. I am sure it is not a perfect description, but it will suffice for now and in the context of my thinking outlined here.

Over the years I have enjoyed hearing CGM, principally the Generative Music that our systems have generated, and I have often pondered on the nature of it. My thoughts change and have kept changing, and I am not sure there are any hard and fast answers as it depends on the person and a number of factors. I think some of the (by no means exclusive) factors to consider relate to how a creator’s or listener’s tastes or needs vary for the following:

  1) Compositional control (e.g. what control do I need to have over what happens?)
  2) Intrinsic meaning (e.g. do I need it to have any emotional, personal meaning?)
  3) Context (e.g. do I need the music to somehow fit the context of where/when/how it is experienced?)
  4) Musical style & structure (e.g. do I need it to have a feeling of style or for it to have a directional vector?)
  5) Foreground/background music (e.g. do I want to actively listen to it?)
  6) Interactivity (e.g. do I need it to be created live, in the moment, as in “inmo” (https://intermorphic.com/inmo)?)
  7) Sounds used (e.g. do I want natural or synthetic sounds, created live or sampled?)
  8) Ideas (e.g. am I wanting to use it to create ideas for me to later use?)
  9) Ease of creation (e.g. how much time do I need to set aside to make it; how hard is it to do well?)
  10) Sharing (e.g. as a creator, do I want to share what I have made?)
  11) Turing test (e.g. can you be certain it was/was not created by computer?)

Every creator/listener will have different needs and taste profiles for all of the above, and these are likely to change according to mood, context, day, season, etc. So, as you can imagine, asking why “people” find generative music difficult to get to grips with is a difficult one to answer; the answer is a very personal one.

As I said in a previous answer, we are primarily concerned with music that is created stochastically/aleatorically. It might be easy to think that this kind of music can have little emotional impact on a listener. However, I am reminded of a time back, I think in 1996, when I was in the zone listening to Timothy Didymus’ “Float” (https://www.intermorphic.com/sseyo/koan/float – Timothy is an amazing generative musician who we still have the honour of working with). I think I was listening to “Midheaven” at the time (https://www.intermorphic.com/sseyo/koan/float/#float-audio) and I distinctly recall entering some kind of quasi-state where I felt I could “taste” the music; it was a remarkable and moving experience, I think related to the subtle changes in the music. As far as I know it was something peculiar to me and only happened the once, and I have not experienced it with other non-generative music either, but it emphasised to me the personal impact that CGM could have in the right context.

In the context of the music our engines make I have been pondering quite a bit about “Intrinsic meaning”. It seems to me that everyone has a “music player” in their head. However, experiences and appreciation can be different as each person’s “music player” has associations specially keyed with their age, culture, demographic, personal/shared memories (e.g. a concert) etc. And, even then, to really appreciate music, repeated listenings are often required. I love playing my guitar and writing music and, at least to me being a creator and songwriter, there is a fair amount of meaning in what I make – my music is distilled from my thoughts and emotions and is played with passion. However, if I share a recording of my music for someone to listen to, then what meaning can they, as a listener, extract from my recording (listening to it at a distance as it were)? They do not have my memories or context to unlock or quickly interpret it, to hear it as I hear it. I find it interesting to then consider CGM in this kind of sharing context, and it raises all kinds of questions related to meaning.

One other factor that plays a part in all this deliberation is that over the years we came to understand that generative music played against an image seemed to elicit a deeper reflection on the image, unlocking thoughts and memories. As a result, way back in 2010 we had decided to run with the term “Reflective Music” as a descriptor for the effect that the output of our generative music system could engender, and secured the reflectivemusic.com domain. In hindsight it was a good move, as the descriptor was to become even more apt…

So, trying to make sense of it all, I stood right back and got to thinking about text. Everyone has both a text player and writer in their head (language aside) so people can both easily and quickly create it and assimilate/understand it. Text has meaning and a reader’s imagination puts flesh on the bones; in a musical analogy it is a bit like a visual MIDI score played through different MIDI synths. Unlike music, though, text is not particularly temporal and you can also quickly change it or respond to it. We figured it might be fun to play around with a kind of music messaging where a creator’s text could be used to convey any meaning required and the text itself could be used by our engines, as a seed, to generate melodies / music as an accompaniment to it. This is the general idea we are currently exploring with the Wotja Reflective Music System (https://wotja.com). It quickly became clear that “Reflective Music” was the perfect descriptor because the meaning of the text can be reflected upon and the generated melody is, in turn, a reflection of the text.
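Intermorphic haven’t published how their Text-to-Music (TTM) system works internally, but the general “text as a seed” idea can be sketched in a few lines of Python: derive a deterministic value from the text, then map it onto scale degrees so the same words always yield the same tune. A minimal sketch, not their actual algorithm:

```python
import hashlib

def text_to_melody(text, scale=(60, 62, 64, 65, 67, 69, 71), length=8):
    """Map text deterministically onto scale degrees (MIDI notes), so the
    same words always 'reflect' into the same melody. Illustrative only --
    not Intermorphic's actual Text-to-Music algorithm."""
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    return [scale[b % len(scale)] for b in digest[:length]]

print(text_to_melody("Reflective Music"))  # same text, same tune, every time
```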

PalmSounds: Can you talk about how your products have evolved in the time you’ve worked with generative music?

TC: We started out by building content-open and content-extendable creativity tools that let others make and record things of their own, to use however or wherever they wish. Even after all this time we see no reason to change our direction and, besides, we really enjoy seeing and hearing what other people can create with our tools and apps!

In the intervening years many, many external factors have changed, none more significant than the growing importance and capability of mobile devices. We first started work in mobile in 1998, in “Mobile Music Making” as it were, so mobile thinking has been in our DNA for many, many years.

However, we really started thinking in earnest about “mobile first” back in the noughties, sometime before I started claiming in 2004 (with good reason, and prior to the release of SSEYO miniMIXA), that “the mobile phone is the next electric guitar”.

There have been lots of stops, starts and dead ends along the way for us, and it took us a long time to get there, but we now think mobile first for everything and have done so for some time now – be that apps, business models, websites or anything else. They all have to work and work well on mobile; everything then scales up from there.

Part of the issue with mobile is that it is mobile – screen size, performance, input mechanisms, app interoperability, UIs, usability, complexity, operating system, sharing, social, marketplace, business models etc. all bring constraints and opportunities. All these have to feed into our app planning, design and thinking as we evolve our apps.

One of the big things we learnt very early on, and alluded to earlier, was that it was all very well having a powerful generative music engine, such as our Noatikl Music Engine (NME), but if you want someone else to hear, and experience in the same way, the generative music you have made, then the music has to be *portable*, i.e. there has to be a player for any desired listening device. That means you generally either need to A) restrict your compositions to rely on the use of audio samples (these are big and have issues if you want to share them so are generally inbuilt), maybe with some pitch shifting or post-FX; or B) build or license and include some kind of good quality, flexible, MIDI-like modular synth sound engine that allows real-time polyphonic sound generation with sound shaping; or C) use some combination of both of those. We chose the last route, C), so over the years we have spent a good deal of time evolving our integral Partikl Sound Engine (PSE) (https://intermorphic.com/pse/3/guide). This is now a powerful and customisable modular/SF2 MIDI synth with live FX that is included in all our music apps, and we recently updated it to allow stereo synth sound design. Other than trying as best we can to keep up with advances in sound generation, in so far as it is relevant to portable generative music of course, the main problem with something this flexible relates to the interfaces you choose to provide to it to allow complex sound design, especially on mobile, and how accessible they can be. This is an area we have made some progress in, but we still have a long way to go and are still mulling over exactly what to do next.
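To make the portability argument concrete, here is a deliberately crude Python stand-in for option B (nothing like the real PSE): a note is just a few numbers, and the synth turns those numbers into audio on whatever device is doing the listening, so nothing heavy needs to be shipped with the piece.

```python
import math, struct, wave

RATE = 44100

def render_note(midi_note, dur=1.0, amp=0.3):
    """Synthesize one note (a plain sine with a linear decay envelope).
    Option B in miniature: audio is generated on demand from compact
    note data rather than shipped as large samples."""
    freq = 440.0 * 2.0 ** ((midi_note - 69) / 12.0)
    n = int(RATE * dur)
    return [amp * (1.0 - i / n) * math.sin(2.0 * math.pi * freq * i / RATE)
            for i in range(n)]

# Two overlapping voices -- polyphony by simple mixing.
mix = [a + b for a, b in zip(render_note(60), render_note(64))]

with wave.open("demo.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(RATE)
    w.writeframes(b"".join(struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
                           for s in mix))
```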

Aside from the ongoing development of our essential core technologies such as the NME and PSE, we have found it is the mobile factors that have shaped our thinking the most with respect to our products over the last few years. So, although we started out in 2007 with desktop versions of Noatikl and Liptikl we now have mobile versions of both of those. Mixtikl is the evolution of SSEYO miniMIXA and at the time of Mixtikl’s first release (2008) we built it with a scalable XML front end so the same UI would work on mobile (Pocket PC) and desktop (Windows and Mac), meaning it started out as a hybrid. Apps that came after that, such as the Tiklbox Ambient Generative Music Player (https://intermorphic.com/tiklbox), have been totally mobile-centric designs, using native controls where we can, and we are looking to develop desktop versions of them as we adopt Swift where we can.

Business models really do have a major impact on what developers can do and how it is done, so we experimented with many different approaches trying to find the right balance for our offerings. After much trial and error over a few years we have now pretty well settled on a model that best allows us to move forwards. We have built Wotja to be not only a creativity tool, but also a generative music publishing system in its own right, in that totally custom Noatikl pieces can be imported into Wotja and saved or exported as wotjas and then played, for free, in Wotja.

It is not easy these days surviving as a niche app developer, what with VC money continuing to pump into apps resulting in market consolidations, a global app market with ever more noise, shifting customer expectations and everyone suffering increasing pressures on their time. However, we love doing what we do, we love our customers, fans and friends and, for those interested in the areas we work in, we expect to continue trying to innovate in our own niches and even to find a few new ones!

That’s the end of the interview that Tim and I worked through earlier in the year. To finish off this post Tim has added a few words about what’s happening now at Intermorphic.

A lot has happened at Intermorphic since we started this interview. We have some exciting new developments in the works which are not far off now, hence this “end of year” addendum.

In the last year we focused hard on stability and released a number of related updates for Noatikl 3 and Mixtikl 7. We wanted to be in a good position for the long journey to our next major milestone in Reflective Music, namely Wotja 4. This has been a major undertaking for us as we are gradually consolidating the best of Noatikl 3, Mixtikl 7, Tiklbox 1 and Wotja 3 into one app, Wotja 4. We expect to release the first iOS version and macOS Safari App Extension in January 2017 followed by desktop versions sometime later in the year.

Wotja 4.0 is actually the start of a whole new journey for us, a new stage of evolution if you like, and there is a lot to be excited about. Being able to focus on just one reflective AND generative music app with full editing means we will be able to move it forward faster with improvements and extensions and it also means we are better able to evolve our music and sound engines. To that end the Noatikl Music Engine is evolving into the Intermorphic Music Engine (IME) and the Partikl Sound Engine is evolving into the Intermorphic Sound Engine (ISE). Both of these engines are at the heart of the new Wotja Reflective & Generative Music System. Of course we will also be extending the Intermorphic Text Engine, too, in due course.

Although we don’t feel it appropriate this close to release to pre-announce the details of Wotja 4 :), one thing we can say is that the IME in it will itself support multiple Text To Music (TTM) Pattern voices to allow the creation of richer reflective music tapestries that can sit on a gorgeous bed of generative music – with everything being deeply and totally customisable. TTM has been in Wotja since its first release, but prior to Wotja 4 it was not in the engine itself so only one was allowed in a wotja – no longer!

If you have read this far then many thanks for your interest in what we are doing and we hope that you might decide to get and try out Wotja 4.

Especial thanks to Ashley at Palmsounds who does more than anyone else we know – and we have no idea how he does it unless there are 10 of him.

Our season’s wishes to all! Intermorphic.

So that’s all for 2016! Have a lovely New Year and we’ll be back again in a day or so.


Why I’m really getting into Studio Amplify’s KRFT

I saw this post from Discchord and thought I’d add a few comments and thoughts into the mix.

I only started playing with this yesterday, and only briefly, but I can echo the comments that Discchord quotes: “You start with a blank Surface and add your own elements to create up your jammable track or instrument. There’s no social, login etc; it’s purely for music making.” Or, as Klaatu puts it, “This is what they were talking about in the first place and it’s like lemur with synths and samplers built in. With the keyboards as well as piano roll and pads for the drums it’s getting there.”

KRFT is in beta right now, but as Klaatu puts it above, it’s like Lemur but without the complexity. It’s like building not only your music, but the way you want to make it at the same time.

I love the idea, and so far I’m loving the execution too. Well done Studio Amplify: from Junglator, to NOIZ, to KRFT. It’s good stuff.


IceGear Instruments release Redshrike Synthesizer just for your iPhone

IceGear Instruments have been around a long time and have consistently supported the iOS music making world. Their first app, Argon (a personal favourite), was released back in December of 2009, and ever since they’ve brought us updates and new apps that have been exceptionally popular with the iOS community. I’m sure that their newest offering will be no exception.

Redshrike Synthesizer is a polyphonic subtractive synthesizer for iPhone. Here are the main features:

  • Oscillator
    • Waveform Morphing: Saw – Triangle – Pulse
    • Frequency Modulation: Type, Frequency, Depth
    • ENV(AD)/LFO
    • Detune (3xOSC)
    • Sub Oscillator
  • Pitch
    • Octave
    • ENV(AD)/LFO
    • Drift
  • Noise
    • Type: White, Pink, Blue, Pitched, Digital, Glitch
    • ENV(AD)
  • Resonator
    • Pitch/Feedback/Routing
    • ENV(AD)/LFO
  • Filter
    • Type: 24dB/Oct, 12dB/Oct
    • Cutoff Frequency
    • Resonance
    • Drive
    • Low
    • ENV(ADSR)/LFO
    • Velocity
  • AMP
    • Level
    • ENV(ADSR)/LFO
    • Velocity
  • Effects: Chorus, Delay, Reverb

Programmable Arpeggiator
You can easily create your own pattern.
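How Redshrike implements this isn’t documented here, but the basic mechanics of a programmable arpeggiator are easy to sketch: a user-defined pattern of chord indices (and rests) is stepped against whatever notes are held. A hedged Python illustration:

```python
def arpeggiate(held_notes, pattern, steps=16):
    """Step a user-programmed pattern over whatever notes are held:
    each pattern entry is an index into the sorted chord, or None for
    a rest. Illustrative mechanics only -- not Redshrike's code."""
    chord = sorted(held_notes)
    return [None if i is None else chord[i % len(chord)]
            for i in (pattern[s % len(pattern)] for s in range(steps))]

# A C minor triad with an up-down pattern that includes one rest.
print(arpeggiate([60, 63, 67], [0, 1, 2, None, 2, 1]))
```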

Inter-App Audio
You can stream live audio directly to other Inter-App Audio host applications.

Audiobus
You can stream live audio directly to other Audiobus-compatible apps. See http://audiob.us for more information.

Ableton Link
Ableton Link is a new technology that synchronizes beat, phase and tempo of Ableton Live and Link-enabled iOS apps over a wireless network.
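Link’s real protocol (and its C++ API) is considerably more involved, but the core idea is that peers converge on a shared timeline: once a tempo and a reference time are agreed, each device can compute the current beat and phase entirely locally. A rough Python sketch of that idea only, not Ableton Link’s actual algorithm:

```python
import time

def beat_and_phase(session_start, bpm, quantum=4.0, now=None):
    """Every peer that agrees on a session start time and tempo can
    compute the same beat count and bar phase locally -- the shared-
    timeline idea behind tempo sync. (Illustrative only; not Ableton
    Link's actual algorithm or API.)"""
    now = time.time() if now is None else now
    beats = (now - session_start) * bpm / 60.0
    return beats, beats % quantum

start = time.time() - 10.0  # pretend the session began ten seconds ago
beat, phase = beat_and_phase(start, bpm=120.0)
print(f"beat {beat:.2f}, phase {phase:.2f} of a 4-beat quantum")
```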

MIDI
  • CoreMIDI / Virtual MIDI Input
  • MIDI Controller Mapping with MIDI Learn
  • External MIDI sync

Resizable Keyboard
You can change which octaves are shown by dragging on the bottom of the keyboard.

Redshrike Synthesizer is on launch sale right now with 40% off for a limited time only.


Apple have turned a corner, Android is still lagging behind, and why is there no third option?

iphone7-press-01-970-80

If you don’t know already then you should be aware that Apple has started the process of killing off the headphone jack in their latest iPhone. Aside from that it’s a lovely new iPhone. More RAM, better speakers, and stereo ones at that. But I can’t go there without the jack. All of this adapter stuff doesn’t work for me, and as for the AirPods, they just don’t make sense to me. They last for 5 hours before needing a charge, which is great, but I’ve never had to charge my headphones, ever. And, more importantly, there’s no mention of the audio latency anywhere, which I don’t think bodes well.


What’s more, I doubt that Apple will stop there. This is the beginning. The iPad will be next, then the Mac too. Jacks will vanish and others will do the same.

So if you’re not going to the iPhone 7, where else is there to go? Well there are of course lots of other devices available. Personally I may go to the 6s next, it seems a reasonable compromise for now. There is Android of course, but for mobile musicians this might not be a palatable move from iOS, as, let’s face it, as a platform it doesn’t rival the range and diversity of iOS music creation. That’s only fair to say I think, and in itself a real shame. Android always had promise, but it doesn’t seem to have delivered so far.

And what’s more, the jack removal movement is there already, with the Moto Z already going jackless! It was in fact a device I was looking at with some interest due to its modular nature, but with no jack it really lacks appeal. It won’t be the only jack-free device soon either, that’s my bet.


So where else is there to go? Well that’s really the point of this piece. There isn’t anywhere else to go that really works as an iOS alternative. Apple have really done what they set out to in making it an ecosystem that you can’t get out of. If you like your iOS music apps you’re pretty stuck right now, and that seems like a real shame. There’s little chance that Apple will licence iOS to another handset manufacturer, so there’ll be no device that really comes close.

You could view this as a real market opportunity, but in all honesty, who will take it on? It’s a gap that no one is likely to fill at all, and that’s so disappointing.

Personally I could attempt a return to Palm OS, or even Windows Mobile (the really old one), but I know that it wouldn’t last. What I’m really after is a real alternative, but who’s got pockets big enough for that?

Any thoughts? I’m stuck!

4

Today is iOS10 day, remember not to do anything!

Today is the big day when every mobile musician knows that they must resist the temptation to go ahead and download Apple’s latest version of iOS, version 10. I personally would’ve liked them to call it iOSX, but after they changed the name of OS X to macOS that was off the cards for sure. I’d still like them to give iOS versions actual names and not numbers. I think insects would be good, but that’s a whole other story.

For today you need to resist all the goodies that Apple’s promised in iOS10 as you can be sure that it’ll break something in your musical workflow and it’ll be hard if not impossible to go back.

So sit tight, I’m sure that soon enough we’ll see a flood of app updates for iOS10. There have been a few already but just a trickle.


ZONT looks like a modular Pocket Operator, but we won’t see it for more than a year


I originally saw this over at Matrixsynth and ever since I’ve been meaning to take a closer look.

My first impression is that I love the idea of it, especially the modular design using interchangeable sound cartridges. I’m sure that I’m not the only person that’s going to appeal to. Bluetooth pairing to a mobile device is also a smart move.

Also the dock with RCA, MIDI, USB-C and 3.5mm jack outputs is a nice touch for using the thing when you’re not on the move. A very smart move indeed.

But ZONT is a long way off for now. We won’t see it until late 2017, which is a long time to wait. Also there’s no mention of a price point as yet, which is to be expected, and whilst this might have a similar form factor to a Pocket Operator I’d expect the price to be significantly higher.

So for now you can check out the pictures on the ZONT site and the tech specs below.

Tech Specs:

  • Universal input interface
  • Stereo speakers
  • AMOLED display
  • Bluetooth connection
  • Wi-Fi cloud sync
  • Built-in rechargeable battery
  • Interchangeable sound cartridges
  • 3.5mm headphone jack
  • USB-C
  • iOS and Android app

As soon as I know more about this device I’ll be sure to share it with you.