  {"id":33616,"date":"2018-11-13T18:24:10","date_gmt":"2018-11-13T23:24:10","guid":{"rendered":"https:\/\/digital.hbs.edu\/platform-rctom\/submission\/orcam-a-new-vision-for-machine-learning\/"},"modified":"2018-11-13T18:24:10","modified_gmt":"2018-11-13T23:24:10","slug":"orcam-a-new-vision-for-machine-learning","status":"publish","type":"hck-submission","link":"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/orcam-a-new-vision-for-machine-learning\/","title":{"rendered":"OrCam: A New Vision for Machine Learning"},"content":{"rendered":"<p>In late 2013, Israeli computer scientist Amnon Shashua and entrepreneur Ziv Aviram introduced the OrCam <em>MyEye<\/em>, a wearable visual assistant \u201csystem\u201d \u2013 part computer, digital sensor, speaker and machine learning algorithm(s) \u2013 aimed at improving the lives of the visually impaired, a population numbering more than 20 million in the U.S.<a href=\"#_ftn1\" name=\"_ftnref1\">[1]<\/a> and over 285 million worldwide<a href=\"#_ftn2\" name=\"_ftnref2\">[2]<\/a>.\u00a0These include people afflicted by medical conditions such as macular degeneration, cataract and diabetic retinopathy as well as others who have suffered vision loss in military combat. 
The device is about the size of your finger, weighs just under an ounce and costs $2,500, roughly the price of a good hearing aid.<a href=\"#_ftn3\" name=\"_ftnref3\">[3]<\/a><\/p>\n<figure id=\"attachment_33933\" aria-describedby=\"caption-attachment-33933\" style=\"width: 300px\" class=\"wp-caption alignright\"><a href=\"https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2018\/11\/WHO-World-Visual-Impairment.png\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-33933 size-medium\" src=\"https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2018\/11\/WHO-World-Visual-Impairment-300x199.png\" alt=\"\" width=\"300\" height=\"199\" srcset=\"https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2018\/11\/WHO-World-Visual-Impairment-300x199.png 300w, https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2018\/11\/WHO-World-Visual-Impairment-600x398.png 600w, https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2018\/11\/WHO-World-Visual-Impairment.png 726w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a><figcaption id=\"caption-attachment-33933\" class=\"wp-caption-text\">An estimated 285 million people are visually impaired globally (~245 million with low vision and ~40 million blind).<\/figcaption><\/figure>\n<p><strong>Design Spec(tacle)s:<\/strong><\/p>\n<p>While wearing the <em>MyEye<\/em> attached to one\u2019s glasses, a user points at whatever s\/he wants the device to read and the device\u2019s camera, upon recognizing the human\u2019s outstretched hand, takes a picture of the text \u2013 be it a billboard, food package label or newspaper \u2013 and, after running the image through its algorithms, reads the text aloud.\u00a0 Leveraging supervised learning technology, the product is trained on millions of images of text, products, and languages so that it can identify and interpret the proper image when it comes into view.<\/p>\n<p><iframe 
loading=\"lazy\" title=\"Assistive Technology Redefined: OrCam MyEye 2&#039;s Life-Changing Advanced Capabilities\" width=\"640\" height=\"360\" src=\"https:\/\/www.youtube.com\/embed\/TxjSFpLMxUs?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\n<div class=\"mceTemp\"><\/div>\n<p>While OrCam began as an \u201cAI-native\u201d technology, advancements in machine learning have pushed the boundaries of what the <em>MyEye<\/em> is able to do \u2013 and created challenges for Shashua and Aviram as they seek to further develop their product.<\/p>\n<p><strong>Seeing Around Corners:<\/strong><\/p>\n<p><u>Existing product improvements<\/u>: OrCam has capitalized on ML advancements to improve its core product. <em>MyEye<\/em> benefits from greater image capture capabilities as the underlying software is continuously trained on new formats of text such as new fonts, sizes, surfaces and lighting conditions.\u00a0 It can now identify whether there is sufficient natural light to capture an image and can recognize when an image is upside down and cue the user to flip the item.\u00a0 In addition, <em>MyEye<\/em> can announce color patterns (useful for dressing oneself in the morning), recognize millions of products and store additional objects like credit cards or grocery items.<\/p>\n<figure id=\"attachment_34115\" aria-describedby=\"caption-attachment-34115\" style=\"width: 150px\" class=\"wp-caption alignright\"><a href=\"https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2018\/11\/Prince-William-OrCam.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-34115 size-thumbnail\" src=\"https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2018\/11\/Prince-William-OrCam-150x150.jpg\" alt=\"\" width=\"150\" height=\"150\" \/><\/a><figcaption
id=\"caption-attachment-34115\" class=\"wp-caption-text\">Prince William testing the OrCam MyEye 2.0 alongside Israeli Prime Minister Bibi Netanyahu.<\/figcaption><\/figure>\n<p><u>New product features:<\/u> Beyond text, OrCam has focused on adding facial recognition capabilities, which similarly harness the power of machine learning. \u00a0Through supervised learning, users can record family members\u2019 faces in &lt;30 seconds<a href=\"#_ftn4\" name=\"_ftnref4\">[4]<\/a> and the device will cycle through its programmed dataset to identify this person the next time s\/he comes into view.\u00a0 Further, the device can parse unstructured inputs (e.g., new faces) to give the user clues about bystanders it doesn\u2019t recognize (\u201cit\u2019s a young woman in front of you\u201d).<a href=\"#_ftn5\" name=\"_ftnref5\">[5]<\/a><\/p>\n<p><strong>Challenges for 20\/20 and Beyond:<\/strong><\/p>\n<p>One of the classic tensions with wearable technology surrounds how much intelligence is stored on the device versus in the cloud. \u00a0On the one hand, the device should be aesthetically light and easy to use (the initial product was burdened by a clunky cable and base unit), but on the other, it should be &#8220;big&#8221; enough to process, store and compute a lot of data. 
One option to overcome the constraint of limited built-in memory is using real-time cloud storage, but doing so consumes a large amount of power and drains the battery (today\u2019s battery life is ~2 hours<a href=\"#_ftn6\" name=\"_ftnref6\">[6]<\/a>), not to mention raising a host of user privacy issues if the device is synced with other personal, cloud-based apps.<\/p>\n<p>What better way to solve this problem than by turning ML capabilities <em>inward <\/em>to make the product itself function more efficiently?\u00a0 For example, the <em>MyEye <\/em>can harness machine learning to decide when to process new images (defer processing when in \u201clow battery\u201d mode), which ones to process (prioritize those with sufficient light exposure) and how to regulate its overall database (delete redundant images or re-capture a low-resolution image).<a href=\"#_ftn7\" name=\"_ftnref7\">[7]<\/a>\u00a0 In addition, by training its image sensors to re-aim when an object is only in partial view, the <em>MyEye<\/em> will be able to process information more accurately and, importantly, efficiently, as it no longer wastes time matching incomplete data.<a href=\"#_ftn8\" name=\"_ftnref8\">[8]<\/a><\/p>\n<p><strong>Future Optics:<\/strong><\/p>\n<p>One key question for OrCam surrounds how far it should expand its target market. The basic product design \u2013 which required a user to point to an object to trigger the reading system \u2013 was geared for the \u201clow vision\u201d market; in fact, it precluded \u201cfully blind\u201d individuals who wouldn\u2019t know where to point. \u00a0However, much more potential exists to embed the product into daily life for the visually impaired and beyond.\u00a0 What if OrCam could leverage its accumulated dataset to send corresponding information about an individual \u2013 such as name, birthdate, or last time of meeting \u2013 once it recognizes that person\u2019s face?<a href=\"#_ftn9\" name=\"_ftnref9\">[9]<\/a> 
\u00a0Or, if the device reads McDonald\u2019s labels every Sunday, can it make statistical inferences to recommend coupons associated with these user preferences? Overall, how should OrCam weigh increasing the <em>MyEye\u2019s<\/em> functionality and mass market appeal against protecting user privacy and maintaining a consumer-friendly, wearable design?<\/p>\n<p>(Word count: 798).<\/p>\n<p><strong>Citations:<\/strong><\/p>\n<p><a href=\"#_ftnref1\" name=\"_ftn1\">[1]<\/a> Erik Brynjolfsson and Andrew McAfee. \u201cThe Dawn of the Age of Artificial Intelligence.\u201d\u00a0<em>The Atlantic<\/em> (February 2014).<\/p>\n<p><a href=\"#_ftnref2\" name=\"_ftn2\">[2]<\/a> \u201cGlobal Data on Visual Impairments.\u201d\u00a0<em>World Health Organization<\/em> (2012).<\/p>\n<p><a href=\"#_ftnref3\" name=\"_ftn3\">[3]<\/a> Katharine Schwab. \u201cThe $1 Billion Company That\u2019s Building Wearable AI for Blind People.\u201d\u00a0<em>Fast Company<\/em> (May 2018).<\/p>\n<p><a href=\"#_ftnref4\" name=\"_ftn4\">[4]<\/a> Alex Lee. \u201cMy Eye 2.0 uses AI to help visually impaired people explore the world.\u201d\u00a0<em>Alphr<\/em> (February 2018).<\/p>\n<p><a href=\"#_ftnref5\" name=\"_ftn5\">[5]<\/a> Romain Dillet. \u201cThe OrCam MyEye helps visually impaired people read and identify things.\u201d\u00a0<em>TechCrunch<\/em> (November 2017).<\/p>\n<p><a href=\"#_ftnref6\" name=\"_ftn6\">[6]<\/a> Katharine Schwab. \u201cThe $1 Billion Company That\u2019s Building Wearable AI for Blind People.\u201d\u00a0<em>Fast Company<\/em> (May 2018).<\/p>\n<p><a href=\"#_ftnref7\" name=\"_ftn7\">[7]<\/a> Yonatan Wexler and Amnon Shashua. \u201cApparatus for Adjusting Image Capture Settings.\u201d Patent No. US 9,060,127 B2. <em>United States Patent Office<\/em> (June 2015).<\/p>\n<p><a href=\"#_ftnref8\" name=\"_ftn8\">[8]<\/a> Yonatan Wexler and Amnon Shashua. \u201cApparatus and Method for Analyzing Images.\u201d Patent No. US 9,911,361 B2. 
<em>United States Patent Office<\/em> (March 2018).<\/p>\n<p><a href=\"#_ftnref9\" name=\"_ftn9\">[9]<\/a> For HBO\u2019s <em>Veep<\/em> fans, think of this as a real-life Gary Walsh to our Selina Meyer!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Can machine learning help people see again? OrCam says yes.<\/p>\n","protected":false},"author":11783,"featured_media":34754,"comment_status":"open","ping_status":"closed","template":"","categories":[4864,346,4865],"class_list":["post-33616","hck-submission","type-hck-submission","status-publish","has-post-thumbnail","hentry","category-consumer-wearbables","category-machine-learning","category-vision-restoration","hck-taxonomy-organization-orcam","hck-taxonomy-industry-technology","hck-taxonomy-country-israel"],"connected_submission_link":"https:\/\/d3.harvard.edu\/platform-rctom\/assignment\/rc-tom-challenge-2018\/","yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>OrCam: A New Vision for Machine Learning - Technology and Operations Management<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/orcam-a-new-vision-for-machine-learning\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"OrCam: A New Vision for Machine Learning - Technology and Operations Management\" \/>\n<meta property=\"og:description\" content=\"Can machine learning help people see again? 
OrCam says yes.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/orcam-a-new-vision-for-machine-learning\/\" \/>\n<meta property=\"og:site_name\" content=\"Technology and Operations Management\" \/>\n<meta property=\"og:image\" content=\"https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2018\/11\/OrCam-Product-Spec-1.png\" \/>\n\t<meta property=\"og:image:width\" content=\"564\" \/>\n\t<meta property=\"og:image:height\" content=\"479\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"5 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/submission\\\/orcam-a-new-vision-for-machine-learning\\\/\",\"url\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/submission\\\/orcam-a-new-vision-for-machine-learning\\\/\",\"name\":\"OrCam: A New Vision for Machine Learning - Technology and Operations 
Management\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/submission\\\/orcam-a-new-vision-for-machine-learning\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/submission\\\/orcam-a-new-vision-for-machine-learning\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/wp-content\\\/uploads\\\/sites\\\/4\\\/2018\\\/11\\\/OrCam-Product-Spec-1.png\",\"datePublished\":\"2018-11-13T23:24:10+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/submission\\\/orcam-a-new-vision-for-machine-learning\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/submission\\\/orcam-a-new-vision-for-machine-learning\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/submission\\\/orcam-a-new-vision-for-machine-learning\\\/#primaryimage\",\"url\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/wp-content\\\/uploads\\\/sites\\\/4\\\/2018\\\/11\\\/OrCam-Product-Spec-1.png\",\"contentUrl\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/wp-content\\\/uploads\\\/sites\\\/4\\\/2018\\\/11\\\/OrCam-Product-Spec-1.png\",\"width\":564,\"height\":479},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/submission\\\/orcam-a-new-vision-for-machine-learning\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Submissions\",\"item\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/submission\\\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"OrCam: A New Vision for Machine 
Learning\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/#website\",\"url\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/\",\"name\":\"Technology and Operations Management\",\"description\":\"MBA Student Perspectives\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"OrCam: A New Vision for Machine Learning - Technology and Operations Management","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/orcam-a-new-vision-for-machine-learning\/","og_locale":"en_US","og_type":"article","og_title":"OrCam: A New Vision for Machine Learning - Technology and Operations Management","og_description":"Can machine learning help people see again? OrCam says yes.","og_url":"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/orcam-a-new-vision-for-machine-learning\/","og_site_name":"Technology and Operations Management","og_image":[{"width":564,"height":479,"url":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2018\/11\/OrCam-Product-Spec-1.png","type":"image\/png"}],"twitter_card":"summary_large_image","twitter_misc":{"Est. 
reading time":"5 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/orcam-a-new-vision-for-machine-learning\/","url":"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/orcam-a-new-vision-for-machine-learning\/","name":"OrCam: A New Vision for Machine Learning - Technology and Operations Management","isPartOf":{"@id":"https:\/\/d3.harvard.edu\/platform-rctom\/#website"},"primaryImageOfPage":{"@id":"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/orcam-a-new-vision-for-machine-learning\/#primaryimage"},"image":{"@id":"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/orcam-a-new-vision-for-machine-learning\/#primaryimage"},"thumbnailUrl":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2018\/11\/OrCam-Product-Spec-1.png","datePublished":"2018-11-13T23:24:10+00:00","breadcrumb":{"@id":"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/orcam-a-new-vision-for-machine-learning\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/d3.harvard.edu\/platform-rctom\/submission\/orcam-a-new-vision-for-machine-learning\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/orcam-a-new-vision-for-machine-learning\/#primaryimage","url":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2018\/11\/OrCam-Product-Spec-1.png","contentUrl":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2018\/11\/OrCam-Product-Spec-1.png","width":564,"height":479},{"@type":"BreadcrumbList","@id":"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/orcam-a-new-vision-for-machine-learning\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/d3.harvard.edu\/platform-rctom\/"},{"@type":"ListItem","position":2,"name":"Submissions","item":"https:\/\/d3.harvard.edu\/platform-rctom\/s
ubmission\/"},{"@type":"ListItem","position":3,"name":"OrCam: A New Vision for Machine Learning"}]},{"@type":"WebSite","@id":"https:\/\/d3.harvard.edu\/platform-rctom\/#website","url":"https:\/\/d3.harvard.edu\/platform-rctom\/","name":"Technology and Operations Management","description":"MBA Student Perspectives","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/d3.harvard.edu\/platform-rctom\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"}]}},"_links":{"self":[{"href":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-json\/wp\/v2\/hck-submission\/33616","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-json\/wp\/v2\/hck-submission"}],"about":[{"href":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-json\/wp\/v2\/types\/hck-submission"}],"author":[{"embeddable":true,"href":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-json\/wp\/v2\/users\/11783"}],"replies":[{"embeddable":true,"href":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-json\/wp\/v2\/comments?post=33616"}],"version-history":[{"count":0,"href":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-json\/wp\/v2\/hck-submission\/33616\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-json\/wp\/v2\/media\/34754"}],"wp:attachment":[{"href":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-json\/wp\/v2\/media?parent=33616"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-json\/wp\/v2\/categories?post=33616"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}