{"id":16392,"date":"2016-11-17T20:23:47","date_gmt":"2016-11-18T01:23:47","guid":{"rendered":"https:\/\/digital.hbs.edu\/platform-rctom\/submission\/the-ethics-of-ai-robotic-cars-licensed-to-kill\/"},"modified":"2016-11-17T20:24:57","modified_gmt":"2016-11-18T01:24:57","slug":"the-ethics-of-ai-robotic-cars-licensed-to-kill","status":"publish","type":"hck-submission","link":"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/the-ethics-of-ai-robotic-cars-licensed-to-kill\/","title":{"rendered":"The Ethics of AI: Robotic Cars, Licensed to Kill"},"content":{"rendered":"<p><b>Artificial intelligence is the driving force behind the next wave of computing innovation. <\/b><span style=\"font-weight: 400\">It\u2019s powering the Big Tech race to build the best smart assistant: Apple\u2019s Siri, Amazon\u2019s Alexa, IBM\u2019s Watson, Google\u2019s Assistant, Microsoft\u2019s Cortana, Facebook\u2019s M, to name a few. It\u2019s pivotal to the United States\u2019 defense strategy; the Pentagon has pledged $18 billion to fund development of autonomous weapons over the next three years [1]. And it\u2019s spurring competition in the automobile industry, as AI will (literally) drive autonomous vehicles.<\/span><\/p>\n<p><b>AI has huge potential benefits for society. But AI needs to be trained by humans, and that comes with immense risk. <\/b><span style=\"font-weight: 400\">Take Microsoft\u2019s experiment with a chatbot it called Tay, an AI that spoke like a millennial and would learn from its interactions with humans on Twitter. \u201cThe more you talk the smarter Tay gets\u201d [2]. Within 24 hours, humans had manipulated Tay into becoming a racist, homophobic, offensive chatbot. 
<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><b>Exhibit 1 &#8211; How Tay\u2019s Tweets evolved through interaction with humans [3] <\/b><\/p>\n<p><a href=\"https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2016\/11\/Tay2.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-16416\" src=\"https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2016\/11\/Tay2.png\" alt=\"tay2\" width=\"577\" height=\"489\" srcset=\"https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2016\/11\/Tay2.png 1290w, https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2016\/11\/Tay2-300x254.png 300w, https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2016\/11\/Tay2-768x651.png 768w, https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2016\/11\/Tay2-1024x868.png 1024w, https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2016\/11\/Tay2-600x509.png 600w\" sizes=\"auto, (max-width: 577px) 100vw, 577px\" \/><\/a><\/p>\n<p>&nbsp;<\/p>\n<p><b>More urgently, the challenge of teaching AI sound judgment extends to autonomous vehicles: specifically, the ethical decisions that we\u2019ll have to program into the robots&#8217; algorithms.<\/b><span style=\"font-weight: 400\"> Given its leadership in the self-driving car space, Google will play a major role in shaping how AIs drive. Google\u2019s business model is to design a car that can transport people safely at the push of a button. This would deliver undeniable value creation, as over 1.2 million people die worldwide each year in vehicular accidents, and in the US 94% of these are caused by human error [4]. 
However, the operating model through which Google executes on this presents difficult moral dilemmas.<\/span><\/p>\n<p><b>Google will have to take a stance on how the car should make decisions regarding loss of human life.<\/b><span style=\"font-weight: 400\"> The big question is: how should we program the cars to behave when faced with an unavoidable accident? <\/span><b>In a situation where someone has to die, how do we train the autonomous vehicle to make the decision of whom to kill?<\/b><span style=\"font-weight: 400\"> As stated in the <\/span><i><span style=\"font-weight: 400\">MIT Technology Review<\/span><\/i><span style=\"font-weight: 400\">, \u201cShould it minimize the loss of life, even if it means sacrificing the occupants, or should it protect the occupants at all costs? Should it choose between these extremes at random?&#8230;Who would buy a car programmed to sacrifice the owner?\u201d [5] <\/span><\/p>\n<p><span style=\"font-weight: 400\">Consider the \u201cTrolley Problem,\u201d a thought exercise in ethics, shown in Exhibit 2. Do you favor intervention or nonintervention? Then consider MIT\u2019s<\/span><i><span style=\"font-weight: 400\"> Moral Machine Project<\/span><\/i><span style=\"font-weight: 400\">, a website that presents moral dilemmas (example in Exhibit 3) involving driverless cars and forces you to <\/span><b>pick your perceived lesser of two evils<\/b><span style=\"font-weight: 400\">. In browsing through the scenarios on <\/span><a href=\"http:\/\/moralmachine.mit.edu\/\"><span style=\"font-weight: 400\">http:\/\/moralmachine.mit.edu\/<\/span><\/a><span style=\"font-weight: 400\">, I personally find that there\u2019s no clear answer. It\u2019s hugely uncomfortable to decide which parties should be killed.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><b>Exhibit 2: The Trolley Problem: A runaway trolley is speeding down a track towards 5 people. If you pull the lever, you can divert the trolley to another track where it would kill only 1 person. 
Do you pull the lever? [6]<\/b><\/p>\n<p><a href=\"https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2016\/11\/trolley.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-16425\" src=\"https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2016\/11\/trolley.jpg\" alt=\"trolley\" width=\"544\" height=\"362\" srcset=\"https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2016\/11\/trolley.jpg 645w, https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2016\/11\/trolley-300x200.jpg 300w, https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2016\/11\/trolley-600x399.jpg 600w\" sizes=\"auto, (max-width: 544px) 100vw, 544px\" \/><\/a><\/p>\n<p>&nbsp;<\/p>\n<p><strong>Exhibit 3: MIT Moral Machine: Do nothing and kill the pedestrians who are violating the crosswalk signal &#8212; 1 grandma and 4 children? Or swerve and kill the car\u2019s passengers &#8212; 4 adult kidnappers and the child they\u2019re holding hostage? 
[7]<\/strong><\/p>\n<p><a href=\"https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2016\/11\/mitmoraldilem.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-16427\" src=\"https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2016\/11\/mitmoraldilem.png\" alt=\"mitmoraldilem\" width=\"601\" height=\"476\" srcset=\"https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2016\/11\/mitmoraldilem.png 736w, https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2016\/11\/mitmoraldilem-300x238.png 300w, https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2016\/11\/mitmoraldilem-600x475.png 600w\" sizes=\"auto, (max-width: 601px) 100vw, 601px\" \/><\/a><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400\">In a study on how people view these dilemmas, a group of computer science and psychology researchers discovered a <\/span><b>consistent paradox<\/b><span style=\"font-weight: 400\">: most people want to live in a world of utilitarian autonomous vehicles &#8212; vehicles that will minimize casualties, even if that means sacrificing their passengers for the greater good. However, respondents themselves would prefer to sit in an autonomous vehicle that protects their own life as a passenger at all costs [8]. <\/span><b>It\u2019s clear that there\u2019s no objective algorithm to decide who should die. So it\u2019s on us humans, the programmers at Google and other automakers, to design the system ourselves. <\/b><\/p>\n<p><span style=\"font-weight: 400\">In addition to algorithmic design, we\u2019ll have to redefine vehicular laws. Currently our adjudication relies on the \u201creasonable person\u201d standard of driver negligence. But when an AI sits behind the steering wheel, does the \u201creasonable person\u201d standard still apply? Are the programmers who designed the accident algorithm now liable? 
<\/span><\/p>\n<p><span style=\"font-weight: 400\">There\u2019s no question about the value of Google\u2019s business model. The real question is how Google will operationalize it. Last month, Microsoft, Google, Amazon, IBM, and Facebook announced the <\/span><b>Partnership on Artificial Intelligence to Benefit People and Society (PAIBPS) <\/b><span style=\"font-weight: 400\">to support research and standard-setting [9]. But these companies are all competing in the same race to be the leader in AI. Can we trust that they\u2019ll take the time to carefully think through the ethical dilemmas rather than accelerate to win the race? To allow autonomous vehicles to <\/span><span style=\"text-decoration: underline\"><i><span style=\"font-weight: 400\">save<\/span><\/i><\/span><span style=\"font-weight: 400\"><span style=\"text-decoration: underline\"> lives<\/span>, Silicon Valley will have to grapple with the ethical dilemma of how cars <\/span><span style=\"text-decoration: underline\"><i><span style=\"font-weight: 400\">take<\/span><\/i><\/span><span style=\"font-weight: 400\"><span style=\"text-decoration: underline\"> lives<\/span>.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><em>(Word Count: 792)<\/em><\/p>\n<p>_________________<\/p>\n<p><b>Citations:<\/b><\/p>\n<ol>\n<li style=\"font-weight: 400\"><a href=\"http:\/\/www.nytimes.com\/2016\/10\/26\/us\/pentagon-artificial-intelligence-terminator.html\"><span style=\"font-weight: 400\">http:\/\/www.nytimes.com\/2016\/10\/26\/us\/pentagon-artificial-intelligence-terminator.html<\/span><\/a><\/li>\n<li style=\"font-weight: 400\"><a href=\"http:\/\/qz.com\/653084\/microsofts-disastrous-tay-experiment-shows-the-hidden-dangers-of-ai\/\"><span style=\"font-weight: 400\">http:\/\/qz.com\/653084\/microsofts-disastrous-tay-experiment-shows-the-hidden-dangers-of-ai\/<\/span><\/a><\/li>\n<li style=\"font-weight: 400\"><a href=\"http:\/\/www.theverge.com\/2016\/3\/24\/11297050\/tay-microsoft-chatbot-racist\"><span style=\"font-weight: 
400\">http:\/\/www.theverge.com\/2016\/3\/24\/11297050\/tay-microsoft-chatbot-racist<\/span><\/a><\/li>\n<li style=\"font-weight: 400\"><a href=\"https:\/\/www.google.com\/selfdrivingcar\/\"><span style=\"font-weight: 400\">https:\/\/www.google.com\/selfdrivingcar\/<\/span><\/a><\/li>\n<li style=\"font-weight: 400\"><a href=\"https:\/\/www.technologyreview.com\/s\/542626\/why-self-driving-cars-must-be-programmed-to-kill\/\"><span style=\"font-weight: 400\">https:\/\/www.technologyreview.com\/s\/542626\/why-self-driving-cars-must-be-programmed-to-kill\/<\/span><\/a><\/li>\n<li style=\"font-weight: 400\"><a href=\"http:\/\/nymag.com\/selectall\/2016\/08\/trolley-problem-meme-tumblr-philosophy.html\"><span style=\"font-weight: 400\">http:\/\/nymag.com\/selectall\/2016\/08\/trolley-problem-meme-tumblr-philosophy.html<\/span><\/a><\/li>\n<li style=\"font-weight: 400\"><a href=\"http:\/\/moralmachine.mit.edu\/\"><span style=\"font-weight: 400\">http:\/\/moralmachine.mit.edu\/<\/span><\/a><\/li>\n<li style=\"font-weight: 400\"><a href=\"http:\/\/www.popularmechanics.com\/cars\/a21492\/the-self-driving-dilemma\/\"><span style=\"font-weight: 400\">http:\/\/www.popularmechanics.com\/cars\/a21492\/the-self-driving-dilemma\/<\/span><\/a><span style=\"font-weight: 400\">; <\/span><a href=\"http:\/\/science.sciencemag.org\/content\/352\/6293\/1573\"><span style=\"font-weight: 400\">http:\/\/science.sciencemag.org\/content\/352\/6293\/1573<\/span><\/a><\/li>\n<li style=\"font-weight: 400\"><a href=\"https:\/\/www.ft.com\/content\/dd320ca6-8a84-11e6-8cb7-e7ada1d123b1\"><span style=\"font-weight: 400\">https:\/\/www.ft.com\/content\/dd320ca6-8a84-11e6-8cb7-e7ada1d123b1<\/span><\/a><\/li>\n<\/ol>\n<p><b>Cover image: <\/b><a href=\"http:\/\/www.techradar.com\/news\/car-tech\/google-self-driving-car-everything-you-need-to-know-1321548\"><span style=\"font-weight: 
400\">http:\/\/www.techradar.com\/news\/car-tech\/google-self-driving-car-everything-you-need-to-know-1321548<\/span><\/a><\/p>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Google\u2019s self-driving cars will make an immense difference in reducing road-related deaths. But in the case of an unavoidable accident, how do we train AI to make a judgment call on who should die?<\/p>\n","protected":false},"author":2406,"featured_media":16440,"comment_status":"open","ping_status":"closed","template":"","categories":[],"class_list":["post-16392","hck-submission","type-hck-submission","status-publish","has-post-thumbnail","hentry"],"connected_submission_link":"https:\/\/d3.harvard.edu\/platform-rctom\/assignment\/digitization-challenge-2016\/"}