Google develops touchscreen synthesiser based on artificial intelligence

  • Published
    Wed, Mar 14, 2018, 12:55
NSynth was built by the tech giant's Magenta research group.
Google's Magenta research project has developed an open-source instrument called NSynth. The experimental synthesiser uses an algorithm built on Google's neural networks to learn and reproduce the timbral qualities of other sounds and instruments, which can then be combined into something new. Another Google team, Creative Lab, then built hardware to control the algorithm, called NSynth Super.

The NSynth Super features a touchscreen X-Y pad that lets users morph between four different sounds, one assigned to each corner of the pad. The hardware can be controlled via MIDI, meaning it can be synced to various DAWs and controllers. Google isn't selling the device, but you can build your own using a Raspberry Pi and a set of directions posted on GitHub.

Google isn't the only tech giant exploring the combination of artificial intelligence and music. Earlier this week, Spotify began testing an auto-mixing function in its playlists.

Watch a video about NSynth.
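The four-corner X-Y morphing described above can be thought of as bilinear interpolation: the pad position determines a weight for each corner sound, and the weights blend the four sounds together. The sketch below is purely illustrative, assuming each corner sound is reduced to a numeric feature vector; the real NSynth Super interpolates in the neural network's learned latent space, and the function names here are hypothetical.

```python
def morph_weights(x, y):
    """Bilinear weights for a pad position (x, y) in the unit square.

    Corner order: bottom-left, bottom-right, top-left, top-right.
    The weights are non-negative and always sum to 1.
    """
    return [
        (1 - x) * (1 - y),  # bottom-left
        x * (1 - y),        # bottom-right
        (1 - x) * y,        # top-left
        x * y,              # top-right
    ]

def morph(corner_vectors, x, y):
    """Blend four corner feature vectors according to the pad position.

    corner_vectors: list of four equal-length lists of floats
    (hypothetical stand-ins for per-sound embeddings).
    """
    w = morph_weights(x, y)
    dim = len(corner_vectors[0])
    return [sum(wi * v[i] for wi, v in zip(w, corner_vectors))
            for i in range(dim)]

# Example: four hypothetical 2-D "sound embeddings" at the corners.
corners = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
center = morph(corners, 0.5, 0.5)  # the pad centre blends all four equally
```

Dragging a finger across the pad would simply re-evaluate the weights, so each sound's contribution fades smoothly as the touch point moves away from its corner.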
RA