Sounding off: The future of effects processing

What's the next step for plug-in developers? Jono Buchanan investigates.
I've been thinking this week about the future of effects processing and what kinds of tools might be available to us a few years down the road. Several signposts point the way: emerging technologies, the unrelenting power of ever-faster computer processors and the desires of consumers, but perhaps most importantly, the way we look to effects processing to inspire us.

Soon enough, software companies devoted to recreating classic sounds will have the processing power to model the EQ and dynamics channels of yesteryear convincingly. Hopefully, new companies entering the effects processing marketplace will be looking further afield. Aside from tone and dynamics processors, the future of effects processing surely offers a much greater level of interaction with the tools our computers and tablet devices will host.

One of the main advantages of the touch-screen technology offered by the iPad, and the array of other tablets due to be rolled out this year, is that multiple parameters can be manipulated simultaneously. Because these devices work over wi-fi for both studio and live musicians, the majority of us will surely eschew the humble computer mouse as a means of inputting control changes to effect parameters. Any Omnisphere user, for instance, will love the interaction offered by v1.5 and its Omni-TR iPad controller, and many other devices will surely follow suit.

The greater question, however, is what we'll want effects processing to do that isn't already covered by the hundreds of commercial (or freeware) plug-ins out there. To answer this, we need to think about what effects actually do and how some of these functions might combine to produce new processors. Broadly speaking, effects processors manipulate pitch, tone, volume, space (reverb), echo, pan position (including surround) or sound quality (bit crushers), or combinations of the above; in other words, precisely the parameters we can control with acoustic or electronic instruments.

Newer technologies such as Live's Beat Repeat, or third-party plug-ins like Glitch, LiveCut and iZotope's new Stutter Edit, offer a number of these options within one host and then allow for a degree of randomization based on probability; the first sketch below shows the basic mechanism. The desire for tools which randomize parameters is unlikely to dissipate any time soon, because it's so inspiring: a symbiotic relationship in which we users remain sufficiently in control while the computer still offers surprises. Similarly, technology such as The Mouth, developed by Tim Exile for Native Instruments, offers the capacity to turn the sound of one musical instrument, broadly speaking, into that of another, and I suspect we'll see other processors adopt a similar strategy.

Another technology I can see emerging is microscopic randomisation. The weakness of some of the larger drum, guitar or orchestral sample libraries is that, despite huge databases and programming options designed to replicate the unpredictability of real live performance, the programming skills required to operate them effectively are often fairly advanced. Imagine a plug-in that takes an ensemble string sample, breaks it down into a number of individual strands (perhaps using advanced granular synthesis) and applies minute randomization to pitch (say, up to 10 cents), tone and volume, as well as extremely subtle vibrato to some of the detected material but not all. Even the most basically stocked sample libraries would come alive; the second sketch below outlines the approach.
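None of these plug-ins publishes its internals, but the core mechanism, a probability gate that decides whether each rhythmic slice is replaced by repeats of its own first subdivision, is simple to illustrate. Here's a minimal Python/NumPy sketch; the function name, slice length and probability values are illustrative assumptions, not a description of any shipping product.

```python
# A minimal sketch (not any product's actual algorithm) of probability-driven
# stutter/repeat processing in the spirit of Beat Repeat or Stutter Edit.
# Assumes mono audio as a NumPy float array and slice lengths in samples.
import numpy as np

def stutter(audio, slice_len, repeat_prob=0.3, subdivisions=4, seed=None):
    """With probability `repeat_prob`, replace a slice with repeats of its
    first 1/`subdivisions` portion; otherwise pass the slice through."""
    rng = np.random.default_rng(seed)
    out = audio.copy()
    for start in range(0, len(audio) - slice_len + 1, slice_len):
        if rng.random() < repeat_prob:                     # the probability gate
            sub = audio[start:start + slice_len // subdivisions]
            reps = np.tile(sub, subdivisions)[:slice_len]  # repeat to fill the slice
            out[start:start + len(reps)] = reps
    return out

# Example: one second of noise at 44.1kHz, stuttered in eighth-note slices at 120bpm
sr = 44100
eighth = int(sr * 60 / 120 / 2)  # samples per eighth note
processed = stutter(np.random.randn(sr).astype(np.float32), eighth,
                    repeat_prob=0.5, seed=42)
```

The appeal lies in the probability control: the same settings produce endless variations, yet the user stays in charge of how often the effect intrudes.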
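And here, again as a speculative sketch rather than a real product, is roughly how such a "humanizer" might look: a handful of slightly detuned, level-varied copies ("strands") of one sample are layered, with subtle vibrato applied to a random subset. All names and ranges (the ±10-cent detune, ±0.5dB level drift and 4-6Hz vibrato) are my assumptions; tone variation is omitted for brevity, and a real plug-in would use proper granular resynthesis rather than simple resampling.

```python
# A speculative sketch of "microscopic randomisation": layer a few detuned,
# level-varied strands of one ensemble sample, with vibrato on some strands
# but not all, so the same file never plays back identically twice.
import numpy as np

def humanize(sample, sr, strands=6, max_cents=10.0, vibrato_frac=0.5, seed=None):
    rng = np.random.default_rng(seed)
    n = len(sample)
    t = np.arange(n, dtype=np.float64)
    mix = np.zeros(n)
    for _ in range(strands):
        detune = rng.uniform(-max_cents, max_cents)       # per-strand pitch offset
        gain = 10.0 ** (rng.uniform(-0.5, 0.5) / 20.0)    # +/-0.5dB level drift
        rate = np.full(n, 2.0 ** (detune / 1200.0))       # instantaneous playback ratio
        if rng.random() < vibrato_frac:                   # vibrato on some strands only
            hz = rng.uniform(4.0, 6.0)
            depth = rng.uniform(1.0, 3.0)                 # depth in cents
            rate *= 2.0 ** (depth * np.sin(2 * np.pi * hz * t / sr) / 1200.0)
        pos = np.cumsum(rate) - rate[0]                   # integrate ratio -> read position
        mix += gain * np.interp(np.clip(pos, 0, n - 1), t, sample)
    return (mix / strands).astype(np.float32)
```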
Instantly, the need for a large database of samples would reduce considerably: exactly the same individual sample could be varied infinitely, in psychoacoustically subtle ways. What's clear is that, for those with the money to fill their hard drives, we plug-in users have never had it so good, and it's only going to get better. What's also clear, however, is that we've chosen to make our musical work dependent on the technology we use. So we either need to be proactive in floating suggestions for new plug-ins to software developers, or accept that those developers, and the tools they build, will help shape the work we do.