<p>Companies like Mars and Unilever spend billions of dollars per year on video advertisements. Traditionally, they have relied on focus groups and surveys to test and refine their campaigns. While focus groups allow market researchers to assess consumers&#8217; reactions and involvement accurately, they are expensive to scale in terms of both participants and geographic reach. Surveys are more scalable but often lack accuracy because they rely on participants&#8217; self-reported emotional reactions.</p>
<p>Affectiva applies sophisticated digital technology to achieve both high accuracy and almost unlimited scale. Market research participants simply log in on Affectiva&#8217;s website using their own devices and watch an ad &#8211; or any other video content being tested &#8211; from anywhere in the world. Through the device&#8217;s webcam, Affectiva&#8217;s software records and analyzes viewers&#8217; facial expressions using computer vision algorithms and a database of more than one billion facial frames. It measures discrete metrics such as smile, surprise, dislike, attention, and confusion as well as continuous metrics such as valence and expressiveness.
Advertisers can use a cloud-based user interface to analyze emotion metrics moment by moment and see which specific parts of an ad might need to be tweaked. They can also compare the reactions of different groups of viewers to gauge an ad&#8217;s impact across demographic and regional differences. These insights allow companies to increase the emotional resonance of their marketing content, resulting in more effective advertisements.</p>
<p><iframe loading="lazy" title="Affectiva Overview" width="640" height="360" src="https://www.youtube.com/embed/mFrSFMnskI4?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<p>Affectiva captures the value it creates for customers through a software-as-a-service model: instead of buying the software, customers are charged per use. Affectiva has partnered with market research companies such as Millward Brown and InsightsExpress to acquire new customers for its innovative technology.</p>
<p>Like other digital winners, Affectiva has benefited from extremely low marginal costs compared to traditional industry players, a first-mover advantage, and the use of application programming interfaces (APIs) and software development kits (SDKs) to spur innovation by third-party developers. Affectiva&#8217;s marginal cost of analyzing the emotional reactions of an additional viewer consists of nothing more than negligible computing power. While the same is true for online surveys, for Affectiva each additional user brings not only costs but also benefits: a machine learning algorithm improves the software with every facial expression it analyzes. This continuous improvement allows Affectiva to realize a significant first-mover advantage.
Since its launch in 2011, the company has tested 11,000 media units and gathered facial expressions from over 2 million face videos in over 75 countries, building the world&#8217;s largest emotion analytics database. <a href="http://www.affectiva.com/" target="_blank">[1]</a> To gather even more data and explore use cases beyond its market research business, Affectiva offers third-party developers APIs and SDKs to include its emotion analytics technology in their applications. In the future, this may allow Affectiva to win in other markets as well. For example, smart televisions might leverage Affectiva&#8217;s software to understand people&#8217;s movie-watching preferences and refine their recommendation engines accordingly. <a href="http://www.inc.com/audacious-companies/april-joyner/affectiva.html" target="_blank">[2]</a></p>
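<p>To make the moment-by-moment analysis concrete, here is a minimal, purely illustrative sketch (this is <em>not</em> Affectiva&#8217;s actual API; the function names, the 0&#8211;100 score scale, and the synthetic data are all assumptions): per-frame emotion scores are averaged into one value per second of video, and seconds whose score falls below a threshold are flagged as moments an advertiser might tweak.</p>

```python
# Illustrative sketch only -- hypothetical function names and score scale,
# not Affectiva's actual API. Per-frame emotion scores (e.g. "smile", 0-100)
# are aggregated into a moment-by-moment curve, one value per second.
from statistics import mean

def moment_curve(frame_scores, fps=30):
    """Average per-frame scores into one value per second of video."""
    return [round(mean(frame_scores[i:i + fps]), 1)
            for i in range(0, len(frame_scores), fps)]

def weak_moments(curve, threshold=20.0):
    """Return the seconds whose average score falls below the threshold."""
    return [sec for sec, score in enumerate(curve) if score < threshold]

# Synthetic example: 4 seconds of video at 30 fps; the smile score
# dips sharply during second 2 of the ad.
frames = [60.0] * 30 + [55.0] * 30 + [10.0] * 30 + [70.0] * 30
curve = moment_curve(frames)
print(curve)                # [60.0, 55.0, 10.0, 70.0]
print(weak_moments(curve))  # [2]
```

<p>The same idea extends to comparing curves across viewer segments: computing one curve per demographic group and diffing them is one plausible way to gauge an ad&#8217;s impact across regions.</p>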