AskAudio: With consumers now able to experience spatial audio on many popular streaming services, in movie theaters and live venues, why is it a particularly exciting time for music creators and mixers?
Guillaume LeNost (Managing Director, L-Acoustics Ltd): Until recently, the creation tools, streaming services and consumer devices were “channel-based”, with the most basic example being stereo: the left channel, the right channel. It was a pain to handle spatial audio or multichannel mixing from a production standpoint. You had to do a stereo mix, a 5.1 mix, a 7.1 mix, etc. In other words, your mixes were “speaker-dependent”. With today’s technologies such as L-ISA, the mixes are “object-based”. This means that a single spatial audio mix can be rendered for stereo, 5.1, 7.1, 7.1.4, or headphones, and it means that music creators can focus on the quality of the mix and the best way to use “space”, rather than managing the number of speakers people will use.
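The channel-based vs object-based distinction can be sketched in a few lines of code: an object carries a position, and the renderer derives per-speaker gains for whatever layout is present. The pan laws below are a simplified illustration (constant-power panning between adjacent speakers), not L-ISA's actual rendering algorithm.

```python
import math

def render_object_stereo(azimuth_deg, gain=1.0):
    """Render one audio object to stereo with a constant-power pan law.
    azimuth_deg runs from -90 (hard left) to +90 (hard right).
    Illustrative sketch only, not L-ISA's actual renderer."""
    theta = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2.0)
    return gain * math.cos(theta), gain * math.sin(theta)  # (L, R)

def render_object_ring(azimuth_deg, speakers_deg, gain=1.0):
    """Render the same object to an arbitrary horizontal ring of
    speakers by constant-power panning between the two speakers that
    bracket its azimuth. speakers_deg must be sorted ascending."""
    n = len(speakers_deg)
    gains = [0.0] * n
    for i in range(n):
        a = speakers_deg[i]
        b = speakers_deg[(i + 1) % n]
        span = (b - a) % 360 or 360          # arc covered by this pair
        offset = (azimuth_deg - a) % 360     # where the object sits on that arc
        if offset <= span:
            theta = (offset / span) * (math.pi / 2.0)
            gains[i] += gain * math.cos(theta)
            gains[(i + 1) % n] += gain * math.sin(theta)
            break
    return gains
```

The point is that one stored object position serves every target: `render_object_stereo(30)` folds the object down to two channels, while `render_object_ring(30, [-110, -30, 0, 30, 110])` spreads the same object across a 5.1-style ring, with no separate speaker-dependent mix.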
Consumer devices are now smart enough to figure out the best way to render the content. Professional devices used in cinemas and live venues, including our L-ISA technology, have applied these concepts for years. With L-ISA Studio, music creators and mixers can now create for a streaming platform, and also prepare their live spatial performance at the same time. This is quite unique! The concept of 3D space is really interesting from a creative standpoint, as it can influence the composition itself, much more than an EQ or a compressor.
AA: How accessible is this technology - including L-Acoustics’ L-ISA Studio technology - to home and project studio creators?
GL: Since L-ISA Studio is a pure software solution, you can run it anywhere — you do not need access to sophisticated hardware. Also, it works seamlessly with all the major DAWs. With a MacBook, a pair of headphones and a head tracker device, you could be on the beach, on a tour bus, in a hotel — wherever you need to be — creating a spatial mix. Back in your home studio, you can drive up to 12 speakers in any 3D configuration. You can then encode your work into a consumer format (such as Apple Spatial Audio or Sony 360) or bring it to the stage, into a full-blown L-ISA production.
AA: I know L-Acoustics comes from a live heritage, but what are some of the signposts that drove the company to move towards the studio / music creation space?
GL: We designed our L-ISA software tools for live performances. They are super easy to use, although very powerful, with advanced 3D reverberations for example. Our live sound users wanted to use the technology beyond the time they had in the venue: before a show, after a show. Creatives preparing an immersive installation wanted to share ideas with their teams before the project happened. Moving towards studio / music creation was a natural extension of the workflow and a consequence of the success of immersive audio for live applications.
AA: How can people who haven’t worked in spatial audio before begin to think about it from a technical setup perspective? What is needed to begin the journey?
GL: The easiest way to begin working in spatial audio is to download the L-ISA Studio software, grab a pair of headphones and follow our quick start videos. In a few minutes, you will have transformed each of your DAW tracks or stems into an “object” that you can position all around you in three dimensions, including “close” or “far” with an integrated 3D room engine. It can create great results even when remixing as few as four stems!
AA: What are some of the primary considerations on instrument placement and reverb placement when thinking about a spatial / immersive mix?
GL: It is quite easy to explain. When doing a stereo mix, you have three main sets of tools: EQ, dynamics, and depth (reverb). Some of these decisions are hard to make and are almost “workarounds” to compensate for the fact that you are putting many instruments into two channels (an L/R mix). You would scoop a guitar EQ to give space for a vocal, for example. With spatial audio, you have a very powerful fourth tool at your disposal: space. By simply separating instruments in space, you can alleviate a lot of time-consuming EQ / compression decisions. This is because this fourth tool uses our natural ability to localize sounds from different directions as different “objects”.
The additional benefit of having a 3D reverb in L-ISA Studio is that it collapses three operations into one: you no longer need to ride faders, pan the reverb, and manage reverb sends separately. All of that is handled via a single parameter called “distance”, which is actually a very powerful mixing paradigm, related to the traditional notion of “depth” within the mix.
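As a rough sketch of that paradigm: in a simple room model the dry signal falls off roughly with 1/distance while the diffuse reverberant field stays roughly constant, so one distance control can drive both the direct level and the effective wet/dry balance. The curve and the `room_level` constant below are illustrative assumptions, not the actual L-ISA room engine.

```python
def distance_to_mix(distance, ref_dist=1.0, room_level=0.3):
    """One 'distance' macro instead of separate fader rides and
    reverb-send moves: the dry gain follows an inverse-distance law
    while the diffuse reverb level stays constant, so an object pushed
    further away automatically gets both quieter and wetter.
    All constants here are illustrative, not L-ISA's actual model."""
    d = max(distance, ref_dist)     # clamp inside the reference radius
    direct_gain = ref_dist / d      # dry level falls off as ~1/d
    reverb_gain = room_level        # diffuse field is roughly constant
    return direct_gain, reverb_gain
```

Doubling the distance halves the dry level but leaves the room level alone, so the wet/dry ratio, and with it the perceived depth, doubles from a single parameter move.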
AA: There are a few immersive mix tools on the market. Can you talk about how L-ISA Studio is positioned and what it is particularly useful for?
GL: L-ISA Studio can be used to create content, prepare a performance, and post-produce. That sets it apart from most other tools on the market, which mostly target post-production and are designed to be applied after the music or content has been created. The end result can be a bit underwhelming when the content wasn't originally developed as spatial audio. When artists think about spatial attributes from the very beginning, it can influence how they create their mixes and stems and how they choose to compose within the space. For instance, we recently collaborated with Brian Eno on a piece for the Serpentine Gallery in London. He prepared some stems and then spent a few days in our studio preparing a spatial mix. As soon as he started listening to the mix with the spatial rendering, he began adding more stems to the composition. It was quite interesting to see and hear the correlation between spatial hearing and composition.
Two other major benefits of L-ISA Studio are that you can design any speaker configuration (not only the standard 5.1 or 7.1.2), and it includes a powerful room engine which saves a lot of time in creating immersive content — even in post.
AA: How important is the visual interface in L-ISA Studio and what kind of attention was this aspect given in the development phase?
GL: We spent a lot of time perfecting the UI in L-ISA Studio so our users could visualize exactly what is happening within the soundscape. We also follow a design rule of “less is more”, and kept only a few meaningful controls per object, on the main page. The onboarding is super quick, so users are not overwhelmed. More advanced features (such as OSC control, or room engine advanced controls) can be discovered at your own pace. UI topics always trigger heated discussions in our R&D meetings because we take pride in it!
AA: Many music creators have to translate their playback mixes from studio to live environments. Is this easily done in L-ISA Studio and is it easy to make this transition if you’ve spent time perfecting the ‘in studio’ component?
GL: In the answer to the first question, I introduced the concept of object-based mixing. The power of L-ISA Studio means that you can prepare a 3D mix on your headphones, in your mobile studio, and then play it on a massive multichannel club or festival system, simply by connecting the user interface and audio stems to our hardware audio renderer, the L-ISA Processor II. This is usually done in three clicks: selecting a processing device in the L-ISA UI, selecting a soundcard in your DAW, and pressing play! For creators prepping large shows, there is even the possibility of simulating different listening positions within the target space.
AA: Can you talk about how our users should be thinking about monitoring and interoperability with DAWs?
GL: L-ISA Studio is compatible with any DAW, as it interfaces via two means: an audio bridge (to route each track to L-ISA), and control plugins (to optionally use automation for object positions). In the DAW, you can select the audio bridge as your soundcard, and insert the control plugins (VST/AU/AAX) if you like. You can then monitor your mix on headphones (with optional certified headtrackers) or on speakers (up to 12).
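Since OSC control is mentioned among the advanced features, here is a rough idea of what driving an object position over OSC looks like. The encoder below follows OSC 1.0 framing (null-padded address, type-tag string, big-endian float32 arguments); the address pattern and port are hypothetical placeholders, not L-ISA's documented OSC namespace.

```python
import socket
import struct

def _pad(b: bytes) -> bytes:
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *args: float) -> bytes:
    """Build a minimal OSC 1.0 message whose arguments are all float32."""
    packet = _pad(address.encode("ascii"))
    packet += _pad(("," + "f" * len(args)).encode("ascii"))  # type tags
    for value in args:
        packet += struct.pack(">f", value)  # big-endian float32
    return packet

if __name__ == "__main__":
    # Hypothetical address and port, for illustration only.
    msg = osc_message("/object/1/position", 0.5, -0.2, 1.0)  # x, y, z
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(msg, ("127.0.0.1", 9000))
```

Any OSC-capable controller, tablet app, or show-control system can generate messages of this shape, which is what makes OSC a convenient bridge between a DAW session and external spatial control.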
AA: Now that L-ISA Studio is out there, what are you focused on?
GL: Today, our challenge is no longer to convince pro audio professionals that immersive audio is the future, since we believe this has already been established. Now, our challenge is to encourage the creation of immersive content right from the outset of a project. When the “spatial audio” dimension is taken into account at the earliest stages of the creative process, the results can be significantly better. We also have some secret ideas in the pipeline that will be very appealing to the electronic music / clubbing scene. Stay tuned!