  {"id":36541,"date":"2018-11-13T19:59:42","date_gmt":"2018-11-14T00:59:42","guid":{"rendered":"https:\/\/digital.hbs.edu\/platform-rctom\/submission\/from-auto-tune-to-auto-compose\/"},"modified":"2018-11-13T20:05:55","modified_gmt":"2018-11-14T01:05:55","slug":"from-auto-tune-to-auto-compose","status":"publish","type":"hck-submission","link":"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/from-auto-tune-to-auto-compose\/","title":{"rendered":"From Auto- tune to Auto- compose"},"content":{"rendered":"<p><strong>The sound of music<\/strong><\/p>\n<p>In a September 2018 paper, Japanese data scientists Eita Nakamura and Kunihiko Kaneko at Kyoyo University and the University of Tokyo respectively found that western classical music over the past several centuries has followed laws of evolution. This large- scale study has implications for the understanding of other cultural phenomena such as the evolution of language, fashion, and science. Evolution is an algorithmic process applied to populations in which certain traits are passed on to the next generation and others are culled. In music, as in art in general, this involves passing on past traditions while incorporating new features. Indeed, algorithmic music composition has a long standing history as in Chinese windchimes, Greek wind- powered Aeolian harps of the Japanese <em>suikinkutsu. <\/em>Iannis Xenakis used Markov chains \u2013 which use existing fragments of music in equal probability- for his 1958 compositions, Analogique.<\/p>\n<p><strong>I see in Magenta<\/strong><\/p>\n<p>Google Magenta\u2019s NSynth takes this a step further and uses deep learning to aid music makers in their composition. NSynth specifically aims to use deep learning and deep neural networks to provide artists with a vast array of instrument sounds (according to instrument class, genre and complexity). 
A curious musician could then answer a question such as \u201cWhat do you get when you cross a piano with a flute?\u201d Being able to do so pushes the evolution of music further and provides artists with a wide range of tools through which to expand the existing global discography. Algorithmic composition has also been shown to help address composer\u2019s block, arguably every artist\u2019s worst nightmare.<\/p>\n<p>The main challenge the team at Magenta faces is that available datasets are small, biased and not freely available, largely due to copyright restrictions. To address this in the short term, the team is building on the Free Music Archive (FMA), a web-based repository, and using the existing AudioSet concept ontology to classify specific sounds. This ontology (or knowledge-management structure) was originally motivated by the lack of large-scale annotated audio data for scientific research and is derived from YouTube videos, with the goal of providing a testbed for identifying acoustic events. Lastly, the team has conducted crowd-sourced annotation on the <em>CrowdFlower<\/em> platform to correct and verify the labels of clips identified as likely positives. The use of control examples at this stage is especially important in weeding out contributors whose accuracy falls below a certain threshold. Contributors who consistently answer obvious questions wrongly (such as labelling a piano sample as a trumpet) are not eligible to take part in the exercise.<\/p>\n<p>However, given that the AudioSet collection is derived from YouTube videos, there are no guarantees on the legality of licensing, sharing, and archiving the content. As such, use of the content is currently extremely limited. Moreover, most of the content comes from solo performances, making it difficult to model and evaluate ensemble performances. Nevertheless, this work will serve as a baseline model that the team can build on. 
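The control-example gate described above could be sketched as follows. The function name, data shapes and the 0.8 threshold are assumptions for illustration, not CrowdFlower's actual mechanism: a contributor's answers on clips with known labels are graded, and contributors whose accuracy falls below the threshold are excluded.

```python
def passes_quality_gate(answers, controls, threshold=0.8):
    """
    answers:  {clip_id: label} submitted by one contributor
    controls: {clip_id: known_label} for obvious control clips (e.g. a clear piano sample)

    Returns True only if the contributor's accuracy on the control clips
    meets the threshold; otherwise they are dropped from the annotation task.
    """
    graded = [answers.get(clip_id) == label for clip_id, label in controls.items()]
    accuracy = sum(graded) / len(graded)
    return accuracy >= threshold
```

A contributor who labels a piano control clip as a trumpet fails the gate, and their non-control annotations can then be discarded or re-queued for other annotators.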
In particular, they aim to focus on identifying the classes that correspond to musical instruments, resulting in a set of more than 70 relevant classes. For the sake of coverage, related classes are merged into single instruments: \u201cAcoustic Guitar\u201d, \u201cElectric Guitar\u201d, and \u201cTapping (guitar technique)\u201d become \u201cGuitar\u201d, while \u201cCello\u201d and \u201cViolin\u201d remain distinct. The medium- to long-term goal of the team is to iteratively refine the concepts as the dataset grows and the acoustic model improves. Going forward, they also aim to take a novel approach to designing the instrument dataset: considering instruments outside the current \u2018vocabulary\u2019 through additional crowd-sourcing, semi-supervised learning and incremental evaluation.<\/p>\n<p><strong>50 shades of audio<\/strong><\/p>\n<p>I would suggest that management also look, in the medium term, into different deep learning techniques that can be used for broader ethnomusicology. A risk of the current approach is that the dataset and instruments used are limited to those dominant in the West. As Magenta thinks about crowdsourcing, it could look at other models such as Vocaloids, software voicebanks that have grown over the past few years through collaborative content creation. Hatsune Miku, one of the Vocaloid personas, has performed to sold-out audiences. In the same vein of thinking out of the box, I believe products such as NSynth can be extended into other cultural fields. It has been argued that music could be older than language. Training a deep learning network on a dying language, by way of example (half of the approximately 6,000 languages spoken in the world today are expected to go extinct), could be incredibly helpful in preserving the rich cultural diversity we currently enjoy.<\/p>\n<p>Interesting questions remain for Magenta and artists in general. 
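The label merging described earlier amounts to a lookup from fine-grained ontology labels to merged instrument classes. A minimal sketch follows; the label strings are illustrative rather than the exact AudioSet names, and a real pipeline would cover all 70+ classes.

```python
# Hypothetical mapping from fine-grained ontology labels to merged classes.
# Guitar subclasses collapse into one class; Cello and Violin stay distinct.
MERGE_MAP = {
    "Acoustic Guitar": "Guitar",
    "Electric Guitar": "Guitar",
    "Tapping (guitar technique)": "Guitar",
    "Cello": "Cello",
    "Violin": "Violin",
}

def merge_label(label):
    """Map a raw ontology label to its merged instrument class, or None if unmapped."""
    return MERGE_MAP.get(label)
```

Keeping the mapping as data rather than code makes the iterative refinement the team describes cheap: as the dataset grows, labels can be re-binned without retraining the annotation pipeline.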
To what degree can \u201cart\u201d be drawn from algorithmic composition based on computing power? What would the role of human composers look like in the future? How does the industry approach copyright law and proprietary rights in this context? (791 words)<\/p>\n<p>&nbsp;<\/p>\n<p>[1] MIT Technology Review, \u201cData mining reveals the hidden laws of evolution behind classical music,\u201d Sept. 28, 2018, at <a href=\"https:\/\/www.technologyreview.com\/s\/612194\/data-mining-reveals-the-hidden-laws-of-evolution-behind-classical-music\">https:\/\/www.technologyreview.com\/s\/612194\/data-mining-reveals-the-hidden-laws-of-evolution-behind-classical-music<\/a><\/p>\n<p>[2] Nakamura, E. and Kaneko, K. (2018), \u201cStatistical Evolutionary Laws in Music Styles,\u201d available at <a href=\"https:\/\/arxiv.org\/abs\/1809.05832\">https:\/\/arxiv.org\/abs\/1809.05832<\/a>.<\/p>\n<p>[3] Schulkin, J., and Raglan, G. (2014), \u201cThe Evolution of Music and Human Social Capability,\u201d available at <a href=\"https:\/\/www.frontiersin.org\/articles\/10.3389\/fnins.2014.00292\/full\">https:\/\/www.frontiersin.org\/articles\/10.3389\/fnins.2014.00292\/full<\/a>.<\/p>\n<p>[4] Giuseppe Bandiera, Oriol Romani Picas, Hiroshi Tokuda, Wataru Hariya, Koji Oishi, and Xavier Serra. Good-sounds.org: A framework to explore goodness in instrumental sounds. In Proceedings of the 17<sup>th<\/sup> International Society for Music Information Retrieval Conference, pages 414\u2013419, 2016.<\/p>\n<p>[5] Thierry Bertin-Mahieux, Daniel P. W. Ellis, Brian Whitman, and Paul Lamere. The Million Song Dataset. In Proceedings of the 12th International Society for Music Information Retrieval Conference, pages 591\u2013596, 2011.<\/p>\n<p>[6] Rachel M Bittner, Justin Salamon, Mike Tierney, Matthias Mauch, Chris Cannam, and Juan Pablo Bello. MedleyDB: A multitrack dataset for annotation-intensive MIR research. 
In ISMIR, volume 14, pages 155\u2013160, 2014.<\/p>\n<p>[7] Juan J Bosch, Jordi Janer, Ferdinand Fuhrmann, and Perfecto Herrera. A comparison of sound segregation techniques for predominant instrument recognition in musical audio signals. In ISMIR, pages 559\u2013564, 2012.<\/p>\n<p>[8] Mark Cartwright, Ayanna Seals, Justin Salamon, Alex Williams, Stefanie Mikloska, Duncan MacConnell, E Law, J Bello, and O Nov. Seeing sound: Investigating the effects of visualizations and complexity on crowdsourced audio annotations. Proceedings of the ACM on Human-Computer Interaction, 1(1), 2017.<\/p>\n<p>[9] Olivier Chapelle, Bernhard Sch\u00f6lkopf, and Alexander Zien. Semi-Supervised Learning. The MIT Press, 1<sup>st<\/sup> edition, 2010.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Can composers see themselves outdone by data scientists and computer programmers?<\/p>\n","protected":false},"author":11164,"featured_media":36604,"comment_status":"open","ping_status":"closed","template":"","categories":[346,249,4239],"class_list":["post-36541","hck-submission","type-hck-submission","status-publish","has-post-thumbnail","hentry","category-machine-learning","category-music","category-open-innovation","hck-taxonomy-organization-google","hck-taxonomy-industry-music","hck-taxonomy-country-united-states"],"connected_submission_link":"https:\/\/d3.harvard.edu\/platform-rctom\/assignment\/rc-tom-challenge-2018\/","yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>From Auto- tune to Auto- compose - Technology and Operations Management<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/from-auto-tune-to-auto-compose\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta 
property=\"og:title\" content=\"From Auto- tune to Auto- compose - Technology and Operations Management\" \/>\n<meta property=\"og:description\" content=\"Can composers see themselves outdone by data scientists and computer programmers?\" \/>\n<meta property=\"og:url\" content=\"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/from-auto-tune-to-auto-compose\/\" \/>\n<meta property=\"og:site_name\" content=\"Technology and Operations Management\" \/>\n<meta property=\"article:modified_time\" content=\"2018-11-14T01:05:55+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2018\/11\/thQTFYHSQ0-1.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"373\" \/>\n\t<meta property=\"og:image:height\" content=\"165\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"5 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/submission\\\/from-auto-tune-to-auto-compose\\\/\",\"url\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/submission\\\/from-auto-tune-to-auto-compose\\\/\",\"name\":\"From Auto- tune to Auto- compose - Technology and Operations 
Management\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/submission\\\/from-auto-tune-to-auto-compose\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/submission\\\/from-auto-tune-to-auto-compose\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/wp-content\\\/uploads\\\/sites\\\/4\\\/2018\\\/11\\\/thQTFYHSQ0-1.jpg\",\"datePublished\":\"2018-11-14T00:59:42+00:00\",\"dateModified\":\"2018-11-14T01:05:55+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/submission\\\/from-auto-tune-to-auto-compose\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/submission\\\/from-auto-tune-to-auto-compose\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/submission\\\/from-auto-tune-to-auto-compose\\\/#primaryimage\",\"url\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/wp-content\\\/uploads\\\/sites\\\/4\\\/2018\\\/11\\\/thQTFYHSQ0-1.jpg\",\"contentUrl\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/wp-content\\\/uploads\\\/sites\\\/4\\\/2018\\\/11\\\/thQTFYHSQ0-1.jpg\",\"width\":373,\"height\":165},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/submission\\\/from-auto-tune-to-auto-compose\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Submissions\",\"item\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/submission\\\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"From Auto- tune to Auto- 
compose\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/#website\",\"url\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/\",\"name\":\"Technology and Operations Management\",\"description\":\"MBA Student Perspectives\",\"potentialAction\":[{\"@type\":\"性视界Action\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"From Auto- tune to Auto- compose - Technology and Operations Management","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/from-auto-tune-to-auto-compose\/","og_locale":"en_US","og_type":"article","og_title":"From Auto- tune to Auto- compose - Technology and Operations Management","og_description":"Can composers see themselves outdone by data scientists and computer programmers?","og_url":"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/from-auto-tune-to-auto-compose\/","og_site_name":"Technology and Operations Management","article_modified_time":"2018-11-14T01:05:55+00:00","og_image":[{"width":373,"height":165,"url":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2018\/11\/thQTFYHSQ0-1.jpg","type":"image\/jpeg"}],"twitter_card":"summary_large_image","twitter_misc":{"Est. 
reading time":"5 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/from-auto-tune-to-auto-compose\/","url":"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/from-auto-tune-to-auto-compose\/","name":"From Auto- tune to Auto- compose - Technology and Operations Management","isPartOf":{"@id":"https:\/\/d3.harvard.edu\/platform-rctom\/#website"},"primaryImageOfPage":{"@id":"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/from-auto-tune-to-auto-compose\/#primaryimage"},"image":{"@id":"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/from-auto-tune-to-auto-compose\/#primaryimage"},"thumbnailUrl":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2018\/11\/thQTFYHSQ0-1.jpg","datePublished":"2018-11-14T00:59:42+00:00","dateModified":"2018-11-14T01:05:55+00:00","breadcrumb":{"@id":"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/from-auto-tune-to-auto-compose\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/d3.harvard.edu\/platform-rctom\/submission\/from-auto-tune-to-auto-compose\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/from-auto-tune-to-auto-compose\/#primaryimage","url":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2018\/11\/thQTFYHSQ0-1.jpg","contentUrl":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2018\/11\/thQTFYHSQ0-1.jpg","width":373,"height":165},{"@type":"BreadcrumbList","@id":"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/from-auto-tune-to-auto-compose\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/d3.harvard.edu\/platform-rctom\/"},{"@type":"ListItem","position":2,"name":"Submissions","item":"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/"},{"@type":"ListItem","position":3,"name":"From 
Auto- tune to Auto- compose"}]},{"@type":"WebSite","@id":"https:\/\/d3.harvard.edu\/platform-rctom\/#website","url":"https:\/\/d3.harvard.edu\/platform-rctom\/","name":"Technology and Operations Management","description":"MBA Student Perspectives","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/d3.harvard.edu\/platform-rctom\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"}]}},"_links":{"self":[{"href":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-json\/wp\/v2\/hck-submission\/36541","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-json\/wp\/v2\/hck-submission"}],"about":[{"href":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-json\/wp\/v2\/types\/hck-submission"}],"author":[{"embeddable":true,"href":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-json\/wp\/v2\/users\/11164"}],"replies":[{"embeddable":true,"href":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-json\/wp\/v2\/comments?post=36541"}],"version-history":[{"count":0,"href":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-json\/wp\/v2\/hck-submission\/36541\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-json\/wp\/v2\/media\/36604"}],"wp:attachment":[{"href":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-json\/wp\/v2\/media?parent=36541"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-json\/wp\/v2\/categories?post=36541"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}