A singular voice in electronic music reveals the techniques and processes behind his hotly anticipated new album.
Hopkins is a perfect subject for this type of conversation. For one, he takes sound design to absurd levels, and can offer specifics on how and why his music feels a certain way. He does some extremely complicated stuff, but he's no pedant about technique. Singularity is full of plus-size impacts and grandiose moments that'll make both production nerds and casual listeners stand back in awe. But there's great textural and spatial sensitivity on display—not for its own sake, but because this dynamic sound world is key to Hopkins' formula. His music also happens to be staggeringly popular. We spoke for only an hour, but the conversation was filled with nuggets of information that should have many producers running to the computer with new ideas.
I read that you didn't want to use much piano or field recordings on the new record.
The last album, which came out around five years ago now, placed field recordings front and centre. Diamond Mine had quite a lot as well. So I thought they shouldn't be an obvious feature on the new record. They're still in there, but they're more embedded in the instrumentation. There's quite a big recording of thunder on the first track, "Singularity." There's an owl somewhere else, too. Other bits and pieces are littered throughout but they're basically hidden. They're supposed to function as instruments instead of suggesting the outside world.
I was going to ask about the bass frequencies on "Singularity," actually. I thought it must have been some sort of filtered, overdriven convolution.
Yeah, it's thunder. I recorded it from my house. Then I put a resonator on it, which ended up being pitched around E-flat. Turns out that's the same pitch as the drone running three octaves higher, which is why it sounds a bit like feedback. The track is supposed to have a dangerous, foreboding sound to it. When we hear thunder in our everyday lives, whatever you're doing, it has an emotional impact. Even subliminally, it has a looming weirdness to it.
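Hopkins doesn't name the resonator he used, but the basic idea—a filter that rings at a fixed pitch when excited by any input—reduces to a two-pole recursive filter. A minimal Python sketch; the E-flat octave and the decay constant are chosen arbitrarily for illustration, not taken from his session:

```python
import math

def resonator(x, freq_hz, sample_rate, r=0.999):
    """Two-pole resonator: rings at freq_hz when fed any input.
    r (0 < r < 1) sets how long the resonance sustains; note the
    resonance can boost level well above the input."""
    w = 2 * math.pi * freq_hz / sample_rate
    b1 = 2 * r * math.cos(w)  # feedback coefficients
    b2 = -r * r
    y, y1, y2 = [], 0.0, 0.0
    for s in x:
        out = s + b1 * y1 + b2 * y2
        y.append(out)
        y1, y2 = out, y1
    return y

# Excite it with a single impulse (a stand-in for a thunder
# recording) and it rings at E-flat (Eb2 assumed here).
EFLAT2 = 77.78  # Hz — the actual octave isn't specified in the interview
ring = resonator([1.0] + [0.0] * 4409, EFLAT2, 44100)
```

Feeding a broadband source like thunder through a resonator makes it sing at the tuned pitch while keeping the source's dynamics, which is why the result can read as feedback against a drone sharing the same pitch class.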
With the piano, it was important to me to use it differently than the last record. As you said, at one point I thought it would be nice to have none at all. But then if piano naturally suggests itself at some point in the process, you can't think, "Well, I said I wasn't going to do that so I'll leave it out." None of the lead tracks were written on piano—that's the important thing.
I love the simplicity of the piano. It has a nostalgic quality for me. The piano you hear on "Recovery" is the same one I used on every record to date. But I also worked on this amazing, full-size, nine-foot Steinway. The low piano notes on "Echo Dissolve" are from the grand but the treble is from my old upright.
You can hear how the timbre of the strings on the upright catches the reverb in a different way. It also seems like you deliberately recorded the upright with a super high noise floor. You can really hear the felt on the hammers and the internal mechanisms of the piano. Are you just driving a pre-amp really hard?
I love that felt hammer sound. It's much easier to get from the upright than a grand. I hang two Russian Oktava mics from the ceiling directly into the body of the piano so that they're very, very close to the hammers. Usually you'd have the mics farther away and play the piano louder. I love having it mic'd up sensitively and then playing incredibly quietly. And yeah, I turn up the gain a lot. The noise brings life into it.
It screams intimacy. It's like someone breathing down your neck.
Yeah and that's important because that section of the record, right towards the end, is all about contrasts. There are quite a lot of grandiose moments earlier on but it gradually gets more purified and simplified. So it's very important that when we do hit this instrumental ending, it's as intimate and wooden and real-sounding as possible.
It sounds like DJing has had a big influence on the first half of the record.
The first half, for sure. There are specific things I would try out in sets to see if they worked. I've played "Everything Connected" in almost every set for the last two years, but it was in a constant state of flux. It was meant to be on Immunity but it didn't fit. The beats back then weren't up to scratch and it was five BPM slower, so it sounded really plodding and boring compared to the other tracks on that record. I kept the melodies and completely reproduced the rhythmic side. That one especially is designed to move a room full of people.
DJing with unfinished tracks can be interesting because playing it over the top of other people's music changes how you perceive the mix balance.
Yeah, it can highlight things that are wrong with the mix. Often it's led me to simplify my mixes. I tend to use a lot of layers, it's very much a part of the sound. But in a DJ set, you notice that most other tracks don't really have that much going on. That's a quality I was interested in pursuing. The finished product is still quite sonically complex, but I really wanted to boil it down to the sounds that are absolutely necessary.
I'm very interested in how more traditional techno producers make such minimalistic stuff without things getting boring. I find that really, really hard to do. I didn't try and emulate that approach but there's a lot you can learn from that style of doing things.
The more pumping tracks on the record have some fairly outrageous breakdowns and drops. How do you construct these big moments without it feeling rote?
There's a reason why those words and ideas sound cliché: they're great techniques that work really well. I'll never get tired of teasing a build. Generally, I'll make the breaks run for an irregular number of bars. Other times I'll make them go on for too long. Or when it finally hits, I'll transition to something totally different than what you expected.
In "Everything Connected," there's no hint that an extreme Korg MS-20 riff is going to enter because it doesn't appear at any point before the drop. So you might set up the direction of a track with a certain sound or idea, and the listener expects that idea to come back in a heavier form after a breakdown. Instead of that, I'll use a completely new riff and another set of drum sounds.
"Neon Pattern Drum" is sort of my take on trance, although it's not really trancey. It has a 5/4 rhythm over a 4/4 beat and it's gated in this strange way. There isn't a drop in it, but there's a bit where it holds back and whooshes back in. It all happens on irregular bar numbers and odd beats, so while you can feel an impact coming, you don't know when it's going to hit.
You mentioned there that after a drop you sometimes change the instrumentation. I noticed a few times on the record that the overall mix balance and tone colour shifts quite drastically in these moments without sounding too out of place. I take it this is extremely detailed automation—there's no convenient way of doing it, right?
Yep, just lots of infinitesimal tweaking across loads of parameters. I mean, the idea is that hopefully it sounds sexy, but the process behind it certainly is not. Unfortunately, it always comes down to minutiae on a screen. It's just an accepted fact. A lot of effort goes into making things sound like they just happened naturally, but there's always a lot of work behind it. But I kind of love that.
The mix in particular was a very creative thing with this record, which is kind of what you're talking about I think—changing the mix between sections, holding back on certain sounds between bars. Also being very careful to mark out frequency zones for each sound, particularly with three-dimensional plug-ins. I use a lot of room and phase effects, not to make things sound wide or narrow, but for head-fuck sounds where you can't tell where they're coming from. There are a lot of great plug-ins for that these days.
Are you talking about plug-ins that allow you to physically place and move the listener around a multi-dimensional field? Like that GRM Tools plug-in, for instance?
I don't have anything that allows you to actually place the sound. I still use a lot of Audio Ease Altiverb, which has hundreds of impulse responses of actual spaces. I use the stereo imager to set where the sound sits, and then it goes into a room or a hall or whatever it is. If you take the bigger impulse responses and reduce the decay time dramatically you can create very strange effects where things just sound really set-back, and you don't know why. Then it's fun to resample it, mono it and see what it's doing to the transients. It generally just makes things sound odd and physical.
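Altiverb's algorithm is proprietary, but the effect Hopkins describes—a real space's impulse response with its decay cut dramatically—can be sketched as plain convolution with a faded IR. A toy Python version with a made-up impulse response (a real IR would be tens of thousands of samples, and convolution would be done via FFT):

```python
def convolve(dry, ir):
    """Naive convolution reverb: every IR sample echoes the dry signal."""
    out = [0.0] * (len(dry) + len(ir) - 1)
    for i, d in enumerate(dry):
        for j, h in enumerate(ir):
            out[i + j] += d * h
    return out

def shorten_decay(ir, factor):
    """Impose a faster exponential fade on an impulse response,
    keeping the early reflections but collapsing the tail."""
    return [h * (factor ** n) for n, h in enumerate(ir)]

# Made-up 10-tap "hall" IR — purely illustrative values.
hall_ir = [1.0, 0.0, 0.5, 0.3, 0.2, 0.15, 0.1, 0.08, 0.05, 0.03]
tight_ir = shorten_decay(hall_ir, 0.5)
wet = convolve([1.0, 0.0, 0.0, -1.0], tight_ir)
```

Because the early reflections survive while the tail collapses, the result carries the fingerprint of a large space without its decay—which is roughly why things sound "set-back, and you don't know why."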
Altiverb really is the core. Anything that sounds like a real space is probably Altiverb. Then again, I use loads of different reverbs and delays to achieve different effects. You can create reverb-like sounds with Soundtoys EchoBoy because you've got control over diffusion as well as the delays. Ableton's own reverb is useful for a particular clean type of echo. Waves' TrueVerb and Renaissance Reverb sound a little bit rubbish, but in a useful way. They don't sound super glossy, they sound a bit older, like a Lexicon or something. Occasionally frequencies bounce out that you wouldn't get in an expensive new reverb, and sometimes I really want that. Renaissance Reverb in particular with a very long decay can sound amazing on a piano, very wooden and real.
Obviously spatial processing is a huge part of your sound. Are you routing loads of returns through each other or is it a new patch for every sound?
It's one for every sound, unfortunately. This is why I spend a long time writing. I always think it would be nice to have my send-return system set up so I could route things to different effects chains, but they always need to be tailored precisely for the sound.
I do everything in Ableton from start to finish these days. With the chaining system within racks, you can create duplicate channels and process them differently. You could EQ the dry signal on one chain and run a duplicate through reverb on another, without the reverb ever touching the dry signal, for instance. I didn't know about this when I started using Ableton, but it's such a useful feature. It means someone like me, who wants to have dedicated effects on every single track without sharing those effects with other sounds, can do that without having to resample and reprocess all those effects.
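The rack-chain behaviour Hopkins relies on is easy to model: each chain receives its own copy of the input, and the chain outputs are summed. A schematic Python sketch in which the "EQ" and "reverb" are stand-in gain functions, not real DSP:

```python
def chain(signal, *processors):
    """Run a signal through processors in series — one rack chain."""
    for p in processors:
        signal = p(signal)
    return signal

def rack(signal, *chains):
    """Duplicate the signal into parallel chains and sum the results,
    like the chain list inside an Ableton Audio Effect Rack."""
    outs = [chain(list(signal), *c) for c in chains]
    return [sum(vals) for vals in zip(*outs)]

# Toy stand-ins: a gain cut for the "EQ'd dry" chain and a quiet
# copy for the "100% wet reverb" chain.
cut_gain = lambda s: [v * 0.5 for v in s]
wet_verb = lambda s: [v * 0.2 for v in s]
mixed = rack([1.0, -1.0, 0.5], (cut_gain,), (wet_verb,))
```

The key property is that each chain processes its own copy, so the reverb chain never sees what the EQ did to the dry chain—exactly the separation he describes.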
What got you onto Ableton?
It was always the plan to learn it because I've been using it in my live shows for years. I used to write in Logic, which sent me mad, really. I'm sure it's great for songwriting and producing, but for electronic music it's not what I needed. That linear left-to-right format on its own, it's just not enough. It doesn't provide enough freedom and the feeling of creativity wasn't there for me. So I just did one album with Logic and before then it was Cubase. Now it's Ableton.
I finally got into Ableton because pretty much every musician friend I have has been using it for ages. They'd all say, "No, trust me, you just have to learn it and get into it." I was struggling because I'd been addicted to using Sound Forge in addition to Logic for many years. I was always arguing that you couldn't get that level of sonic editing and offline processing in Ableton. Which you kind of can't, but if you're happy to go very far down the route of tweaking one sound at one time, you can do literally anything you can imagine. I use so many plug-ins that they could never all run at the same time. The sessions become pretty chaotic, so often I'll resample to get rid of 20 plug-ins and just have a new sound. Ableton's very satisfying for that.
I read that you liked using Sound Forge with Logic because it separated sound editing from arrangement.
That was a good thing, for sure. But I think with all the benefits Ableton brings, if you can get used to the fact that there's no separation between sound editing and arrangement, then you have everything. Ableton is so flexible that if you can think of an idea you can generally achieve it. Your brain starts to figure out new ways of working within the program. I'd run Sound Forge in Direct Mode, which means any change you make is instantly reflected in Logic. It would be nice to be able to do that with Ableton but I never wanted to use two programs at once to achieve it.
What does your initial idea generation process look like?
It's the part of the album-writing process that I like the least. Thinking about the amount of layers and processing that end up in the final versions, and all the things that have gone into making it sound just so—there's really a huge amount of time and work. So when you've got nothing at all it's daunting. Sometimes I'll encounter a track from a previous album and think, "How am I ever going to get to that point?" It seems so overwhelming. Of course, all you have to do is start.
What does this stage look like?
Filling up loads of clips in Ableton with audio, MIDI or whatever. I remember the first sound I made where I thought, "OK, this is now officially interesting." Because I made a load of shit stuff first as I was learning the program. In order to learn you just have to write any old crap—I recall making something that sounded like a shit version of The Field. Those early ideas were mostly derivative and it wasn't until I got into resampling and the effects chains that things started becoming interesting.
The first sound I made on the record that survived to the final version is at the beginning of "Neon Pattern Drum." There's this pad sound that gradually gets eroded into a rhythm, that 5/4 pattern I mentioned earlier. That's actually a plug-in that Tim Exile was kind enough to make for me. Basically it allows you to apply the exact volume envelope of one sound to another.
There's this invisible gating going on a lot across the record.
It's usually that plug-in. It has a very particular sound but I'm sure there are other ways of achieving the same effect. Still, Tim's plug-in has a beautiful attack to it.
Your control over the envelopes when you're making these sucking, gated effects is quite expressive, almost like you've mapped it to a controller and performed it.
I always mapped those parameters to faders and knobs on a controller. Tim put in attack, release, threshold sensitivity, the usual stuff you'd expect. But at the same time, if you're using a drum machine as a modulator, you've got control over how that's triggering the plug-in. Basically this means you can alter the timbre of what's creating the envelope, which is really interesting.
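Tim Exile's plug-in isn't publicly available, but the core technique—an envelope follower with separate attack and release smoothing whose output scales another signal—is straightforward to sketch. The coefficients below are illustrative defaults, not the plug-in's actual parameters:

```python
def envelope(signal, attack, release):
    """Follow a signal's amplitude with separate attack/release
    smoothing coefficients (0..1; higher = slower response)."""
    env, out = 0.0, []
    for s in signal:
        level = abs(s)
        coeff = attack if level > env else release
        env = coeff * env + (1 - coeff) * level
        out.append(env)
    return out

def apply_envelope(carrier, modulator, attack=0.2, release=0.9):
    """Impose the modulator's volume envelope on the carrier —
    the basic idea behind the envelope-transfer plug-in."""
    env = envelope(modulator, attack, release)
    return [c * e for c, e in zip(carrier, env)]

# A steady pad (carrier) gated by a short percussive burst (modulator):
pad = [1.0] * 8
hit = [1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
gated = apply_envelope(pad, hit)
```

Swapping the modulator—a drum machine, a tapped rhythm—changes the shape of the imposed envelope, which is why altering the timbre of the triggering source matters so much.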
I take it you could route any number of sources to trigger the envelope, too. Sounds like Tim should release it.
We did talk about it but he's busy with something else right now. On the other hand, there's a grim kind of satisfaction in knowing no one else has the plug-in. But I think it probably should be available.
I love the fact that you can have an idea for a plug-in and someone can make it, whether it's with Reaktor, Max or whatever. I don't use any of these programming platforms. I just use stuff that exists. But I do have ideas. I've got another one that Cherif—he's a brilliant engineer at this studio—built me that allows you to trigger an envelope with MIDI, so you could have, say, an odd-shaped LFO generating weird rhythms that control another track. Again, I'm sure there's stuff like this out there, but it's great being able to specifically develop what I need with micro-tweaks and controllability.
To return to this early stage for a moment, what are the initial sound sources that populate these clips?
We could carry on using "Neon Pattern Drum" as an example. The sound at the beginning of the process was the Korg Trinity, but it became quite complicated. There's a very sharp attack sound routed into a fully wet Renaissance Reverb, which turns it into a pad. That's the top layer. The mid-layer is an organ sound from the Trinity, while the bass comes from the Moog Sub 37. They're all routed to one output, which is then modulated by the drum machine.
I think one important rule I have is that the sounds have to be recorded well. If you're really going to mangle sounds and bring out details that maybe you didn't even intend to be there, it pays to have it nicely recorded. So I'll always try to use good pre-amps and proper gain staging.
Then there are situations where noise is welcome. I have one of the original MS-20s, which hasn't been cleaned up and ruined like the new ones have. If you have the filter really low but boost the input gain this amazing noise seeps out over the filter, which just sounds so great. I did that a lot on Immunity.
So you're a stickler for pre-amps and convertors?
Not convertors. I'm lucky that the people who helped set up my studio are experts in that stuff. I use the Focusrite Clarett interface but I don't know much about it.
There's an endless rabbit hole of distractions, jargon and conspiracy theories once you get to that level of detail in recording.
It's funny. Despite all the levels of processing and detail in my music, I don't consider myself a technical person. It's all on instinct and what sounds good to me at a certain point. It's the same with mixing. My ears tell me whether there's too much of this or that. Sometimes I don't know where a sound is coming from, so I'll resample it and check where the bumps are on the waveform.
So at the end of the process you're nudging peaks up and down by a decibel at a time?
It gets pretty precise. I'm often trying to clean up evidence of the sound morphing techniques that I use. Sometimes I'll hear a noise and then it stops dead, so I go in and find a sound I worked over a hundred times and realise I didn't quite filter the end out properly.
Some productions have well over 120 tracks of audio. Not playing simultaneously, just bits all over the place. It's absolute chaos to look at. So obviously there are going to be things you miss. The last couple of weeks with the album were based on really precise tweaking that I'm sure no one will ever notice. It's funny, you forget about it all once it's been mastered and you haven't heard it for a couple of weeks. You think about all the work you did two weeks before and it sounds basically the same.
There are a lot of high-impact moments on the record that really pop out. Would you say you pursue loudness, or is it more knowing how to create the illusion of volume?
It's all relative. The world of streaming is affecting how people are mastering now. I think if you try and go for the loudest master nowadays it'll end up sounding worse. This record isn't mastered as loud as the last one. It's quieter and there's more room for sounds to breathe. The mix is less claustrophobic. This is all stylistically intentional because emotionally it's a much more open record.
Still, I think it's very important that things sound really solid and physical and real. It helps to find the loudest point on the record and make sure everything else is quieter. It sounds simple, and it is. On this record, the loudest parts are the end of the first track and the second half of the fourth track. If you get those right, then you just need to make sure that you don't undermine them by making other sections too loud.
A lot of your breakdowns and transitions have loads of subtle parameter changes going on, which almost subliminally set the listener up for a change. You're probably automating tens of parameters, but could you give us a sense of what this involves?
It's about gradually making things sound worse so they can become better again. There's a track called "C O S M" where the stereo field is narrowed to about 60%. It gradually gets even narrower and the frequency bands reduce as well. It all happens so gradually that you don't think the sound is getting crap or weak. But when it opens up, there's a lot of impact.
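Width automation of this kind is usually done in the mid/side domain: the side (difference) signal is scaled while the mid (sum) is left alone. A minimal Python sketch of the roughly 60% narrowing Hopkins describes; the exact processing on "C O S M" isn't specified:

```python
def set_width(left, right, width):
    """Mid/side stereo width control: width=1.0 leaves the image
    untouched, width=0.0 collapses to mono, width=0.6 gives the
    ~60% narrowing described in the interview."""
    out_l, out_r = [], []
    for l, r in zip(left, right):
        mid = (l + r) / 2
        side = (l - r) / 2 * width
        out_l.append(mid + side)
        out_r.append(mid - side)
    return out_l, out_r
```

Automating `width` slowly downward and then snapping it back open is one cheap way to get the "opens up" impact without touching level or EQ at all.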
Narrowing it down is an interesting one. Maybe people's ears aren't as attuned to changes in width as they are to changes in frequency or volume.
Yeah it's weird, I don't hear it often in other tracks but it's quite an obvious thing to play with. I also like making things mono and panning that mono sound around, or making that mono signal go in phase behind you and then open out from behind. There's a lot of stuff like that going on in the background, particularly with wet reverbs.
How important is speed in your writing process? Are you working with templates for instance?
That's more or less what I do with the Trinity. I've had it since the late '90s and I've got hundreds of my sounds in there. These days, they don't often make it to the finished record but they're ready to play straight away. I haven't gotten around to building a bank of things in Ableton—I really need to invest some time in getting those starting points in there.
For the sake of immediacy, I might drum a beat on the desk with a mic and split that out onto the Push. Then maybe I'll resequence and EQ it into the semblance of a beat, then quickly play chords over the top on the Trinity. The main sound in "Emerald Rush" ended up being made out of five different synths but the very first time I played it, the rhythm you're hearing is from the original improvisation. So that spark of the idea is still in there.
You can then retrospectively get surgical in how you build the sound up without the Trinity being involved. So the final sound is an Ableton vocoder mixed with the Elektron Analog Keys, then the treble comes from the reFX Nexus, which is a software synth. Then the bass comes from the MS-20. It's all cut rigidly together over the original rhythm that I tapped in on the table.
There's something to be said for working with placeholders so you can move on with the overall composition.
What's interesting is that sometimes you might find that some element of the placeholder, like its reverb for instance, can be the genesis of another angle or through-line of inspiration for how the sound progresses. There may be a tiny bit remaining of the very first sound. You get addicted to what you start out with. If you remove every trace of that original spark, sometimes the track can suffer.
Just to backtrack a second, you'll have a mic on your desk for picking up rhythms you drum out with your fingers?
I'll use the on-board mic on the Mac or I'll drum on something near the mics over the piano.
Then you'll chop those up and sub out individual hits for other sounds?
The great thing is to separate them out onto the pads on the Push and move them around a bit. Of course, you can drag new sounds onto a pad. The kicks are very multi-layered and complex on this record. There's often five elements in a single kick drum. To return to "Emerald Rush" again, there's an element for the knocking sound, one for the tap, another for a distorted texture, one for the mid-range and finally the sub. I'm sure everyone is doing similar stuff, I just haven't worked with many people that do it.
A lot of people do it but maybe not with quite so many layers. Five is quite a lot.
It just has to end up sounding like one kick drum. The sounds all come from different sources so they shouldn't phase. Sometimes they're from drum machines but by this point I can't remember where any of them have come from. I made that bass drum, like, a year and a half ago. But I'm drawing from a big library of drum machine samples that a friend gave me. They make great starting points. I also have to mention that the pitch envelope in Ableton's Simpler is great for sculpting kicks. Any kind of sine wave with a pitch envelope and distortion—it's a great combination for getting strong attack out of a sound.
Sounds like the architecture of Roland's kicks.
Yeah, I think the 909 is a triangle with a very sharp pitch envelope. I think I stumbled upon this by accident in Sound Forge a long time ago when I was trying to work out how to get attack out of a sound. This works on anything, not just drum sounds. The pitch envelope is a great way of getting a very precise transient.
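The recipe described here—a simple waveform with a sharp downward pitch envelope, then distortion—can be sketched in a few lines. All the numbers below (start and end pitch, decay time, drive amount) are illustrative, not taken from Hopkins' sessions:

```python
import math

def kick(length, sample_rate=44100, start_hz=400.0, end_hz=50.0,
         pitch_decay=0.002, drive=3.0):
    """Sine oscillator with an exponential pitch envelope, then
    soft-clipped. The fast pitch drop creates the transient 'click';
    tanh distortion thickens and flattens the body."""
    out, phase = [], 0.0
    for n in range(length):
        t = n / sample_rate
        # Pitch falls exponentially from start_hz toward end_hz.
        freq = end_hz + (start_hz - end_hz) * math.exp(-t / pitch_decay)
        phase += 2 * math.pi * freq / sample_rate
        out.append(math.tanh(drive * math.sin(phase)))
    return out

drum = kick(4410)  # a 100 ms kick at 44.1 kHz
```

The same pitch-envelope trick works on non-drum material too, as he notes: a fast downward sweep at the onset of any sound manufactures a precise transient where there wasn't one.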
Jon Hopkins is playing at this year's Lovebox festival in London on the weekend of July 13th.