  {"id":9930,"date":"2019-11-25T20:45:26","date_gmt":"2019-11-26T01:45:26","guid":{"rendered":"https:\/\/digital.hbs.edu\/platform-digit\/submission\/deezers-spleeter-deconstructing-music-with-ai\/"},"modified":"2019-11-25T20:45:26","modified_gmt":"2019-11-26T01:45:26","slug":"deezers-spleeter-deconstructing-music-with-ai","status":"publish","type":"hck-submission","link":"https:\/\/d3.harvard.edu\/platform-digit\/submission\/deezers-spleeter-deconstructing-music-with-ai\/","title":{"rendered":"Deezer\u2019s Spleeter: Deconstructing music with AI"},"content":{"rendered":"<h3><strong>About Deezer<\/strong><\/h3>\n<p>Founded in August 2007, Deezer is an online music streaming service based in Paris, France. While Spotify has gained market leadership in the online music streaming space, Deezer continues to hold its own with 14 million monthly active users (MAU) in over 180 countries as of January 2019 [1]. Per Similarweb.com estimates, 35% of Deezer\u2019s users are in France, 11% in Brazil, and the remainder are spread out across the world.<\/p>\n<h3><strong>What is Spleeter?<\/strong><\/h3>\n<figure id=\"attachment_9935\" aria-describedby=\"caption-attachment-9935\" style=\"width: 597px\" class=\"wp-caption alignnone\"><a href=\"https:\/\/d3.harvard.edu\/platform-digit\/wp-content\/uploads\/sites\/2\/2019\/11\/XSM2TzXhvvZgZKv8rPaTn6-970-80.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\" wp-image-9935\" src=\"https:\/\/d3.harvard.edu\/platform-digit\/wp-content\/uploads\/sites\/2\/2019\/11\/XSM2TzXhvvZgZKv8rPaTn6-970-80.jpg\" alt=\"\" width=\"597\" height=\"336\" srcset=\"https:\/\/d3.harvard.edu\/platform-digit\/wp-content\/uploads\/sites\/2\/2019\/11\/XSM2TzXhvvZgZKv8rPaTn6-970-80.jpg 970w, https:\/\/d3.harvard.edu\/platform-digit\/wp-content\/uploads\/sites\/2\/2019\/11\/XSM2TzXhvvZgZKv8rPaTn6-970-80-300x169.jpg 300w, https:\/\/d3.harvard.edu\/platform-digit\/wp-content\/uploads\/sites\/2\/2019\/11\/XSM2TzXhvvZgZKv8rPaTn6-970-80-768x432.jpg 768w, 
https:\/\/d3.harvard.edu\/platform-digit\/wp-content\/uploads\/sites\/2\/2019\/11\/XSM2TzXhvvZgZKv8rPaTn6-970-80-600x337.jpg 600w\" sizes=\"auto, (max-width: 597px) 100vw, 597px\" \/><\/a><figcaption id=\"caption-attachment-9935\" class=\"wp-caption-text\">Credit: Deezer.com<\/figcaption><\/figure>\n<p>On November 4<sup>th<\/sup> 2019, Deezer released Spleeter, a machine learning tool for source separation. Spleeter is a project from Deezer\u2019s research division and is made available online as a Python library based on TensorFlow. Although source separation remains a relatively obscure topic, its applications in music information retrieval (MIR) have the potential to make a far-reaching impact on the way we produce and consume music.<\/p>\n<h3><strong>Source separation 101<\/strong><\/h3>\n<p>At its core, source separation is the separation of a desired signal from a set of mixed signals. In music, this means deconstructing a recorded musical piece into its components by isolating each instrumental layer on the track. For example, 4-stem source separation on Coldplay\u2019s hit single, \u201cYellow\u201d, would yield the following layers (aka \u201cstems\u201d) in isolation:<\/p>\n<ol>\n<li>Vocals by Chris Martin, singing about the stars and such<\/li>\n<li>Drums\/percussion by Will Champion<\/li>\n<li>Bass guitar by Guy Berryman<\/li>\n<li>Electric guitar by Jonny Buckland<\/li>\n<\/ol>\n<p><span style=\"font-size: 16px\">While it may seem like a relatively straightforward process, accurate source separation is difficult to accomplish. Today, most professionally recorded music is made by recording each instrument on a separate channel, and the final combined track is then produced in a step called the \u201cmixdown\u201d. In this final step, all the individual tracks are blended together for mastering and then digitally compressed for delivery. 
All the sound waveforms are meshed together, in a process akin to an irreversible chemical reaction that is impossible to undo.<\/span> Nonetheless, Spleeter has made the notoriously difficult task of source separation a lot easier using machine learning.<\/p>\n<h3><strong>How does Spleeter work?<\/strong><\/h3>\n<p>A common technique used in source separation is time-frequency (TF) masking. Different types of sounds in a musical track correspond to different frequencies. For instance, the lead vocals occupy different frequency bands than the drums. Using TF masking, the mixture of frequencies that makes up a piece of music is filtered, allowing us to pick and choose which frequencies to keep. What remains after this process is the separated stem of the instrument that we want to isolate.<\/p>\n<p><figure id=\"attachment_9934\" aria-describedby=\"caption-attachment-9934\" style=\"width: 1008px\" class=\"wp-caption alignnone\"><a href=\"https:\/\/d3.harvard.edu\/platform-digit\/wp-content\/uploads\/sites\/2\/2019\/11\/source-sep.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9934\" src=\"https:\/\/d3.harvard.edu\/platform-digit\/wp-content\/uploads\/sites\/2\/2019\/11\/source-sep.jpg\" alt=\"\" width=\"1008\" height=\"258\" srcset=\"https:\/\/d3.harvard.edu\/platform-digit\/wp-content\/uploads\/sites\/2\/2019\/11\/source-sep.jpg 1008w, https:\/\/d3.harvard.edu\/platform-digit\/wp-content\/uploads\/sites\/2\/2019\/11\/source-sep-300x77.jpg 300w, https:\/\/d3.harvard.edu\/platform-digit\/wp-content\/uploads\/sites\/2\/2019\/11\/source-sep-768x197.jpg 768w, https:\/\/d3.harvard.edu\/platform-digit\/wp-content\/uploads\/sites\/2\/2019\/11\/source-sep-600x154.jpg 600w\" sizes=\"auto, (max-width: 1008px) 100vw, 1008px\" \/><\/a><figcaption id=\"caption-attachment-9934\" class=\"wp-caption-text\">Spectrogram showing separation (a) &amp; (b) and mixture (c) of stems. 
Credit in footnote [2].<\/figcaption><\/figure>The tricky part of this process is approximating which frequencies correspond to which instruments. Given that the audible range of frequencies for humans is 20 to 20,000 Hz, a lot of processing is needed to accurately classify the broad range of frequencies contained in a musical track. Traditionally, this step was done manually by using snippets of isolated vocals (which are hard to find) to approximate the frequencies that should be left unmasked, thereby making a \u201cminus one\u201d track commonly used for bootleg karaoke.<\/p>\n<p>Today, Spleeter does the heavy lifting, as it comes with pretrained models for standard 2-, 4- and 5-stem separation. Using Spleeter is as simple as installing the package and running the separator function on a command line interface, which then creates a .wav file for each stem. In addition, Spleeter allows users to train custom source separation models and evaluate them against a benchmark dataset (MUSDB18, available on SigSep, another open source project).<\/p>\n<h3><strong>Applications and challenges<\/strong><\/h3>\n<p>Deezer is in a unique position to build a generalized source separation engine because it has access to a large catalog of music that few other organizations can match. However, a major challenge is that Deezer is not legally or ethically able to release the vast library of stems made from its own catalog due to copyright restrictions. So how does Deezer capture value?<\/p>\n<p>Deezer can use its musical data assets to train its own machine learning models, which can then be deployed to improve its user value proposition. In addition, Spleeter enables Deezer to perform source separation at scale. On the GPU version, one can expect separation 100x faster than real-time [3], which allows for rapid deployment on its growing catalog of music. As such, it is conceivable that Deezer can accomplish the following:<\/p>\n<p><strong>Make novel music recommendations<\/strong>. 
Deezer can use the finer data granularity to cluster music by similarity for a given instrument. For example, Deezer could identify bands whose lead singers have a vocal timbre similar to that of Coldplay\u2019s lead singer, Chris Martin. Coldplay fans might then get a recommendation to give Blue Merle a listen.<\/p>\n<p><iframe loading=\"lazy\" title=\"Coldplay - Yellow (Official Video)\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube.com\/embed\/yKNxeF4KMsY?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe> <iframe loading=\"lazy\" title=\"Blue Merle - Burning In The Sun\" width=\"500\" height=\"375\" src=\"https:\/\/www.youtube.com\/embed\/YoWA_Lr6aoQ?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\n<p><strong>Machine-learning enabled classification of music by genre. <\/strong>Most music genres have unique instrumental characteristics. For instance, music in the drum and bass genre has a distinctive style of complex percussive syncopation, whereas the house music genre almost always has a steady rhythm in 4\/4 time. Using Spleeter, Deezer can isolate the drum stems of the music in its catalog and classify them along the axes of style and speed. As a result, Deezer can use machine learning to automatically classify music by genre, as it is able to deconstruct a mixdown and find similarities across different songs for any given instrument or combination of instruments. 
Automatic classification would allow users to browse music by genre without requiring Deezer to manually label each track in its catalog.<\/p>\n<h3><strong>Recommendations<\/strong><\/h3>\n<p>Deezer should maintain Spleeter as an open source project that is kept autonomous from the consumer-facing business unit. In my opinion, the real value of a generalized source separation engine lies in applications in music production. Having access to high-quality stems allows music producers to experiment, stretch the boundaries of musical genres, and create new affective experiences for listeners. By becoming a leader in the open source music community, Deezer would cement its position as the center of gravity for digital innovation in music. In doing so, it will capture a fair portion of the value created as it gains brand recognition over its famous rival, Spotify.<\/p>\n<p>&nbsp;<\/p>\n<p>References<\/p>\n<p>[1] &#8220;About Us&#8221;. 2019.\u00a0<em>Deezer<\/em>. https:\/\/www.deezer.com\/us\/company.<\/p>\n<p>[2] Rafii, Zafar, Antoine Liutkus, Fabian-Robert St\u00f6ter, Stylianos Mimilakis, Derry FitzGerald, and Bryan Pardo. 2019. &#8220;An Overview of Lead and Accompaniment Separation in Music&#8221;.\u00a0<em>IEEE\/ACM Transactions on Audio, Speech, and Language Processing<\/em>, no. 201: 3.<\/p>\n<p>[3] &#8220;Releasing Spleeter: Deezer R&amp;D Source Separation Engine&#8221;. 2019.\u00a0<em>Medium<\/em>. https:\/\/deezer.io\/releasing-spleeter-deezer-r-d-source-separation-engine-2b88985e797e.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Deezer, an online music streaming service, recently released Spleeter, an ML tool used to deconstruct music into its constituent instrumental tracks. 
<\/p>\n","protected":false},"author":11638,"featured_media":9942,"comment_status":"open","ping_status":"closed","template":"","categories":[2525,2523,229,2524],"class_list":["post-9930","hck-submission","type-hck-submission","status-publish","has-post-thumbnail","hentry","category-coldplay","category-deezer","category-music","category-spleeter","hck-taxonomy-organization-deezer","hck-taxonomy-industry-music","hck-taxonomy-country-france"],"connected_submission_link":"https:\/\/d3.harvard.edu\/platform-digit\/assignment\/value-creation-with-ai\/","yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Deezer\u2019s Spleeter: Deconstructing music with AI - Digital Innovation and Transformation<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/d3.harvard.edu\/platform-digit\/submission\/deezers-spleeter-deconstructing-music-with-ai\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Deezer\u2019s Spleeter: Deconstructing music with AI - Digital Innovation and Transformation\" \/>\n<meta property=\"og:description\" content=\"Deezer, an online music streaming service recently released Spleeter, an ML tool used to deconstruct music into its constituent instrumental tracks.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/d3.harvard.edu\/platform-digit\/submission\/deezers-spleeter-deconstructing-music-with-ai\/\" \/>\n<meta property=\"og:site_name\" content=\"Digital Innovation and Transformation\" \/>\n<meta property=\"og:image\" content=\"https:\/\/d3.harvard.edu\/platform-digit\/wp-content\/uploads\/sites\/2\/2019\/11\/Crowd-at-concert6.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1600\" \/>\n\t<meta property=\"og:image:height\" content=\"1067\" \/>\n\t<meta 
property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-digit\\\/submission\\\/deezers-spleeter-deconstructing-music-with-ai\\\/\",\"url\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-digit\\\/submission\\\/deezers-spleeter-deconstructing-music-with-ai\\\/\",\"name\":\"Deezer\u2019s Spleeter: Deconstructing music with AI - Digital Innovation and Transformation\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-digit\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-digit\\\/submission\\\/deezers-spleeter-deconstructing-music-with-ai\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-digit\\\/submission\\\/deezers-spleeter-deconstructing-music-with-ai\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-digit\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2019\\\/11\\\/Crowd-at-concert6.jpg\",\"datePublished\":\"2019-11-26T01:45:26+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-digit\\\/submission\\\/deezers-spleeter-deconstructing-music-with-ai\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/d3.harvard.edu\\\/platform-digit\\\/submission\\\/deezers-spleeter-deconstructing-music-with-ai\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-digit\\\/submission\\\/deezers-spleeter-deconstructing-music-with-ai\\\/#primaryimage\",\"url\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-digit\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2019\\\/
11\\\/Crowd-at-concert6.jpg\",\"contentUrl\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-digit\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2019\\\/11\\\/Crowd-at-concert6.jpg\",\"width\":1600,\"height\":1067},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-digit\\\/submission\\\/deezers-spleeter-deconstructing-music-with-ai\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-digit\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Submissions\",\"item\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-digit\\\/submission\\\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"Deezer\u2019s Spleeter: Deconstructing music with AI\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-digit\\\/#website\",\"url\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-digit\\\/\",\"name\":\"Digital Innovation and Transformation\",\"description\":\"MBA Student Perspectives\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-digit\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Deezer\u2019s Spleeter: Deconstructing music with AI - Digital Innovation and Transformation","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/d3.harvard.edu\/platform-digit\/submission\/deezers-spleeter-deconstructing-music-with-ai\/","og_locale":"en_US","og_type":"article","og_title":"Deezer\u2019s Spleeter: Deconstructing music with AI - Digital Innovation and Transformation","og_description":"Deezer, an online music streaming service recently released Spleeter, an ML tool used to deconstruct music into its constituent instrumental tracks.","og_url":"https:\/\/d3.harvard.edu\/platform-digit\/submission\/deezers-spleeter-deconstructing-music-with-ai\/","og_site_name":"Digital Innovation and Transformation","og_image":[{"width":1600,"height":1067,"url":"https:\/\/d3.harvard.edu\/platform-digit\/wp-content\/uploads\/sites\/2\/2019\/11\/Crowd-at-concert6.jpg","type":"image\/jpeg"}],"twitter_card":"summary_large_image","twitter_misc":{"Est. 
reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/d3.harvard.edu\/platform-digit\/submission\/deezers-spleeter-deconstructing-music-with-ai\/","url":"https:\/\/d3.harvard.edu\/platform-digit\/submission\/deezers-spleeter-deconstructing-music-with-ai\/","name":"Deezer\u2019s Spleeter: Deconstructing music with AI - Digital Innovation and Transformation","isPartOf":{"@id":"https:\/\/d3.harvard.edu\/platform-digit\/#website"},"primaryImageOfPage":{"@id":"https:\/\/d3.harvard.edu\/platform-digit\/submission\/deezers-spleeter-deconstructing-music-with-ai\/#primaryimage"},"image":{"@id":"https:\/\/d3.harvard.edu\/platform-digit\/submission\/deezers-spleeter-deconstructing-music-with-ai\/#primaryimage"},"thumbnailUrl":"https:\/\/d3.harvard.edu\/platform-digit\/wp-content\/uploads\/sites\/2\/2019\/11\/Crowd-at-concert6.jpg","datePublished":"2019-11-26T01:45:26+00:00","breadcrumb":{"@id":"https:\/\/d3.harvard.edu\/platform-digit\/submission\/deezers-spleeter-deconstructing-music-with-ai\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/d3.harvard.edu\/platform-digit\/submission\/deezers-spleeter-deconstructing-music-with-ai\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/d3.harvard.edu\/platform-digit\/submission\/deezers-spleeter-deconstructing-music-with-ai\/#primaryimage","url":"https:\/\/d3.harvard.edu\/platform-digit\/wp-content\/uploads\/sites\/2\/2019\/11\/Crowd-at-concert6.jpg","contentUrl":"https:\/\/d3.harvard.edu\/platform-digit\/wp-content\/uploads\/sites\/2\/2019\/11\/Crowd-at-concert6.jpg","width":1600,"height":1067},{"@type":"BreadcrumbList","@id":"https:\/\/d3.harvard.edu\/platform-digit\/submission\/deezers-spleeter-deconstructing-music-with-ai\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/d3.harvard.edu\/platform-digit\/"},{"@type":"ListItem","position":2,"name":"Submission
s","item":"https:\/\/d3.harvard.edu\/platform-digit\/submission\/"},{"@type":"ListItem","position":3,"name":"Deezer\u2019s Spleeter: Deconstructing music with AI"}]},{"@type":"WebSite","@id":"https:\/\/d3.harvard.edu\/platform-digit\/#website","url":"https:\/\/d3.harvard.edu\/platform-digit\/","name":"Digital Innovation and Transformation","description":"MBA Student Perspectives","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/d3.harvard.edu\/platform-digit\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"}]}},"_links":{"self":[{"href":"https:\/\/d3.harvard.edu\/platform-digit\/wp-json\/wp\/v2\/hck-submission\/9930","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/d3.harvard.edu\/platform-digit\/wp-json\/wp\/v2\/hck-submission"}],"about":[{"href":"https:\/\/d3.harvard.edu\/platform-digit\/wp-json\/wp\/v2\/types\/hck-submission"}],"author":[{"embeddable":true,"href":"https:\/\/d3.harvard.edu\/platform-digit\/wp-json\/wp\/v2\/users\/11638"}],"replies":[{"embeddable":true,"href":"https:\/\/d3.harvard.edu\/platform-digit\/wp-json\/wp\/v2\/comments?post=9930"}],"version-history":[{"count":0,"href":"https:\/\/d3.harvard.edu\/platform-digit\/wp-json\/wp\/v2\/hck-submission\/9930\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/d3.harvard.edu\/platform-digit\/wp-json\/wp\/v2\/media\/9942"}],"wp:attachment":[{"href":"https:\/\/d3.harvard.edu\/platform-digit\/wp-json\/wp\/v2\/media?parent=9930"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/d3.harvard.edu\/platform-digit\/wp-json\/wp\/v2\/categories?post=9930"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}