  {"id":36437,"date":"2018-11-13T19:59:54","date_gmt":"2018-11-14T00:59:54","guid":{"rendered":"https:\/\/digital.hbs.edu\/platform-rctom\/submission\/fighting-fake-news-with-ai\/"},"modified":"2018-11-14T18:57:20","modified_gmt":"2018-11-14T23:57:20","slug":"fighting-fake-news-with-ai","status":"publish","type":"hck-submission","link":"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/fighting-fake-news-with-ai\/","title":{"rendered":"Fighting Fake News with AI"},"content":{"rendered":"<p><span style=\"font-weight: 400\">We all have seen them: \u201cnews\u201d, ads, chain messages. They are scattered with content supposedly selected for us: unreliable news, rumors, malicious texts try to divide communities, cities, nations and the world. Fake news is content created maliciously with the intention of misinforming people, generating more traffic to sites with scandalous headlines, or with extremely politically biased content. In general, these type of news try to deceive the consumer imitating the appearance of reputable sources such as newspapers, blogs, and even videos, generating damage to democracy and societies.<\/span><\/p>\n<p><span style=\"font-weight: 400\">The most important example in recent years has been the US elections in 2016 where more than 146 million people were reached by content created by 470 Russian-backed accounts<\/span><span style=\"font-weight: 400\">[1]<\/span><span style=\"font-weight: 400\">, these attacks plus the Cambridge Analytica scandal made the founder of Facebook declare to the Congress and explain these vulnerabilities<\/span><span style=\"font-weight: 400\">[2]<\/span><span style=\"font-weight: 400\">. 
In Brazil&#8217;s recent election, disinformation spread through social networks, sowing confusion among the supporters of different candidates, deepening the divisions between opposing groups, and provoking unnecessary violence in a democratic process<\/span><span style=\"font-weight: 400\">[3]<\/span><span style=\"font-weight: 400\">. Even worse, at least twelve people have died in India after being falsely accused, through WhatsApp messages, of being child kidnappers<\/span><span style=\"font-weight: 400\">[4]<\/span><span style=\"font-weight: 400\">.<\/span><\/p>\n<p><span style=\"font-weight: 400\">The average reader is not in the habit of checking the origin or quality of the information they consume. Tracing the origin of content on social networks is even harder, since a single click can share any type of content with thousands of people. Chat platforms present a similar problem: it is almost impossible to know where information comes from, as it is often copied and pasted between groups of friends.<\/span><\/p>\n<p><span style=\"font-weight: 400\">A study published in the journal Science in 2018<\/span><span style=\"font-weight: 400\">[5]<\/span><span style=\"font-weight: 400\"> shows that false news diffuses faster than truthful news. An analysis of roughly 126,000 rumor cascades on Twitter, spread by about 3 million people, found that false news travels significantly farther and faster than real news, with the most viral false stories reaching up to 100 times as many people as true ones. 
The researchers&#8217; conclusion is that false news reaches more people around the world than truthful news does.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Social networks have created enormous value for both consumers and advertisers, building platforms that connect friends, families, groups with common interests, and diverse communities. At the same time, these platforms have become massive distribution channels for malicious groups. It is true that Facebook, Twitter, and Google are working on improving their platforms<\/span><span style=\"font-weight: 400\">[6]<\/span><span style=\"font-weight: 400\">, but this effort is not enough to solve the problem. The question now is: can these companies regulate their content and platforms by themselves? Can technologies like AI help us solve this?<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400\">Factmata, a London-based AI startup, hopes to solve this. Backed by big names like Mark Cuban and Biz Stone<\/span><span style=\"font-weight: 400\">[7]<\/span><span style=\"font-weight: 400\">, it has raised $1.6 million in funding<\/span><span style=\"font-weight: 400\">[8]<\/span><span style=\"font-weight: 400\">. Since 2017, the company has been working on cutting-edge technologies to detect false news. Using natural language processing and machine learning, it can identify characteristics of a text such as credibility and quality. 
Its semantic analysis evaluates text along four dimensions<\/span><span style=\"font-weight: 400\">[9]<\/span><span style=\"font-weight: 400\">:<\/span><\/p>\n<ol>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Hate speech and abusive content<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Propaganda and extremely politically biased content<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Spoof websites and content spread by known fake news networks<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Extreme clickbait content<\/span><\/li>\n<\/ol>\n<p><span style=\"font-weight: 400\">These dimensions are combined into a content quality score, which lets users see clearly which sources and content are more reliable. This does not mean defining which information is correct or incorrect; it means giving users the tools to decide for themselves, increasing each user&#8217;s awareness and critical thinking.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Factmata aims to serve industries such as media, advertising, finance, trading, public relations, and more. Two platforms with different objectives will be launched this year: the business platform<\/span><span style=\"font-weight: 400\">[10]<\/span><span style=\"font-weight: 400\">, designed for advertisers and advertising platforms, analyzes the content of multiple URLs to measure the risk of publishing ads on those pages. Unlike current solutions, it does not use a simple blacklist or whitelist, but analyzes the context of the entire content, sentence by sentence. A platform focused on the news industry will soon be launched<\/span><span style=\"font-weight: 400\">[11]<\/span><span style=\"font-weight: 400\">. 
Reporters and researchers will be able to use it to verify the quality of different information sources through credibility-scoring artificial intelligence.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400\">Technological platforms generated the problem, and technological platforms will correct it. Even so, algorithms alone are not enough to eradicate the underlying problem: technology companies, media conglomerates, social networks, and even governments must make a greater effort to educate their users and citizens. Education should go beyond promoting the reading and consumption of informative content and focus on critical thinking and the analysis of sources. These platforms must also improve their content algorithms to avoid generating echo chambers in which users see only one side of the story, reinforcing their existing views without easily visible counter-arguments. Finally, the technologies used to analyze and classify news should be audited by regulatory and perhaps public entities, to ensure unbiased analysis where possible.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Many questions remain extremely complex to answer and will stay under discussion in the coming decades: can or should algorithms help us select what content to consume? Are these algorithms transparent enough to avoid the same biases as a human being? Who decides what content is reliable, and how does this clash with our freedom of speech?<\/span><\/p>\n<p>&nbsp;<\/p>\n<hr \/>\n<p><b>References<\/b><\/p>\n<p><span style=\"font-weight: 400\">[1] Wells, D. (2018). Tech Giants Disclose Russian Activity on Eve of Congressional Appearance. [online] WSJ. Available at: https:\/\/www.wsj.com\/articles\/facebook-estimates-126-million-people-saw-russian-backed-content-1509401546?mod=article_inline [Accessed 14 Nov. 
2018].<\/span><\/p>\n<p><span style=\"font-weight: 400\">[2] Alpert, G. (2018). In Facebook\u2019s Effort to Fight Fake News, Human Fact-Checkers Struggle to Keep Up. [online] WSJ. Available at: https:\/\/www.wsj.com\/articles\/in-facebooks-effort-to-fight-fake-news-human-fact-checkers-play-a-supporting-role-1539856800?mod=searchresults&amp;page=1&amp;pos=16 [Accessed 13 Nov. 2018].<\/span><\/p>\n<p><span style=\"font-weight: 400\">[3] Seetharaman, P. (2018). In Brazil Vote, Misinformation Spreads on Social Media Despite Efforts to Stop It. [online] WSJ. Available at: https:\/\/www.wsj.com\/articles\/in-brazil-vote-fake-news-spreads-on-social-media-despite-efforts-to-stop-it-1540566295?mod=searchresults&amp;page=1&amp;pos=9 [Accessed 13 Nov. 2018].<\/span><\/p>\n<p><span style=\"font-weight: 400\">[4] The Economist. (2018). WhatsApp: Mark Zuckerberg\u2019s other headache. [online] Available at: https:\/\/www.economist.com\/business\/2018\/01\/27\/whatsapp-mark-zuckerbergs-other-headache [Accessed 13 Nov. 2018].<\/span><\/p>\n<p><span style=\"font-weight: 400\">[5] Vosoughi, S., Roy, D. and Aral, S. (2018). The spread of true and false news online. Science, [online] 359(6380), pp.1146-1151. Available at: http:\/\/science.sciencemag.org\/content\/359\/6380\/1146 [Accessed 14 Nov. 2018].<\/span><\/p>\n<p><span style=\"font-weight: 400\">[6] The Economist. (2018). WhatsApp suggests a cure for virality. [online] Available at: https:\/\/www.economist.com\/leaders\/2018\/07\/26\/whatsapp-suggests-a-cure-for-virality [Accessed 13 Nov. 2018].<\/span><\/p>\n<p><span style=\"font-weight: 400\">[7] TechCrunch. (2018). Factmata closes $1M seed round as it seeks to build an \u2018anti fake news\u2019 media platform. [online] Available at: https:\/\/techcrunch.com\/2018\/02\/01\/factmata-closes-1m-seed-round-as-it-seeks-to-build-an-anti-fake-news-media-platform\/ [Accessed 13 Nov. 2018].<\/span><\/p>\n<p><span style=\"font-weight: 400\">[8] CrunchBase. (2018). 
CrunchBase: Factmata. [online] Available at: https:\/\/www.crunchbase.com\/organization\/factmata#section-overview [Accessed 13 Nov. 2018].<\/span><\/p>\n<p><span style=\"font-weight: 400\">[9] Factmata.com. (2018). Factmata. [online] Available at: https:\/\/factmata.com\/technology.html [Accessed 14 Nov. 2018].<\/span><\/p>\n<p><span style=\"font-weight: 400\">[10] Factmata.com. (2018). Factmata. [online] Available at: https:\/\/factmata.com\/business.html [Accessed 14 Nov. 2018].<\/span><\/p>\n<p><span style=\"font-weight: 400\">[11] Factmata.com. (2018). Factmata. [online] Available at: https:\/\/factmata.com\/news-platform.html [Accessed 14 Nov. 2018].<\/span><\/p>\n<p><em>(929 words, sorry)<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>We all have seen them: \u201cnews\u201d, ads, chain messages. They are scattered with content supposedly selected for us: unreliable news, rumors, malicious texts try to divide communities, cities, nations and the world. Fake news is content created maliciously with the [&hellip;]<\/p>\n","protected":false},"author":11450,"featured_media":36590,"comment_status":"open","ping_status":"closed","template":"","categories":[4673,346,4602,986],"class_list":["post-36437","hck-submission","type-hck-submission","status-publish","has-post-thumbnail","hentry","category-fake-news","category-machine-learning","category-natural-language-processing","category-news","hck-taxonomy-industry-journalism-and-news"],"connected_submission_link":"https:\/\/d3.harvard.edu\/platform-rctom\/assignment\/rc-tom-challenge-2018\/","yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Fighting Fake News with AI - Technology and Operations Management<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" 
href=\"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/fighting-fake-news-with-ai\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Fighting Fake News with AI - Technology and Operations Management\" \/>\n<meta property=\"og:description\" content=\"We all have seen them: \u201cnews\u201d, ads, chain messages. They are scattered with content supposedly selected for us: unreliable news, rumors, malicious texts try to divide communities, cities, nations and the world. Fake news is content created maliciously with the [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/fighting-fake-news-with-ai\/\" \/>\n<meta property=\"og:site_name\" content=\"Technology and Operations Management\" \/>\n<meta property=\"article:modified_time\" content=\"2018-11-14T23:57:20+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2018\/11\/foto.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1597\" \/>\n\t<meta property=\"og:image:height\" content=\"685\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/submission\\\/fighting-fake-news-with-ai\\\/\",\"url\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/submission\\\/fighting-fake-news-with-ai\\\/\",\"name\":\"Fighting Fake News with AI - Technology and Operations Management\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/submission\\\/fighting-fake-news-with-ai\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/submission\\\/fighting-fake-news-with-ai\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/wp-content\\\/uploads\\\/sites\\\/4\\\/2018\\\/11\\\/foto.png\",\"datePublished\":\"2018-11-14T00:59:54+00:00\",\"dateModified\":\"2018-11-14T23:57:20+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/submission\\\/fighting-fake-news-with-ai\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/submission\\\/fighting-fake-news-with-ai\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/submission\\\/fighting-fake-news-with-ai\\\/#primaryimage\",\"url\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/wp-content\\\/uploads\\\/sites\\\/4\\\/2018\\\/11\\\/foto.png\",\"contentUrl\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/wp-content\\\/uploads\\\/sites\\\/4\\\/2018\\\/11\\\/foto.png\",\"width\":1597,\"height\":685},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/submission\\\/fighting-
fake-news-with-ai\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Submissions\",\"item\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/submission\\\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"Fighting Fake News with AI\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/#website\",\"url\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/\",\"name\":\"Technology and Operations Management\",\"description\":\"MBA Student Perspectives\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-rctom\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Fighting Fake News with AI - Technology and Operations Management","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/fighting-fake-news-with-ai\/","og_locale":"en_US","og_type":"article","og_title":"Fighting Fake News with AI - Technology and Operations Management","og_description":"We all have seen them: \u201cnews\u201d, ads, chain messages. They are scattered with content supposedly selected for us: unreliable news, rumors, malicious texts try to divide communities, cities, nations and the world. 
Fake news is content created maliciously with the [&hellip;]","og_url":"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/fighting-fake-news-with-ai\/","og_site_name":"Technology and Operations Management","article_modified_time":"2018-11-14T23:57:20+00:00","og_image":[{"width":1597,"height":685,"url":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2018\/11\/foto.png","type":"image\/png"}],"twitter_card":"summary_large_image","twitter_misc":{"Est. reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/fighting-fake-news-with-ai\/","url":"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/fighting-fake-news-with-ai\/","name":"Fighting Fake News with AI - Technology and Operations Management","isPartOf":{"@id":"https:\/\/d3.harvard.edu\/platform-rctom\/#website"},"primaryImageOfPage":{"@id":"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/fighting-fake-news-with-ai\/#primaryimage"},"image":{"@id":"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/fighting-fake-news-with-ai\/#primaryimage"},"thumbnailUrl":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2018\/11\/foto.png","datePublished":"2018-11-14T00:59:54+00:00","dateModified":"2018-11-14T23:57:20+00:00","breadcrumb":{"@id":"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/fighting-fake-news-with-ai\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/d3.harvard.edu\/platform-rctom\/submission\/fighting-fake-news-with-ai\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/fighting-fake-news-with-ai\/#primaryimage","url":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2018\/11\/foto.png","contentUrl":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-content\/uploads\/sites\/4\/2018\/11\/foto.png","width":1597,"heigh
t":685},{"@type":"BreadcrumbList","@id":"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/fighting-fake-news-with-ai\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/d3.harvard.edu\/platform-rctom\/"},{"@type":"ListItem","position":2,"name":"Submissions","item":"https:\/\/d3.harvard.edu\/platform-rctom\/submission\/"},{"@type":"ListItem","position":3,"name":"Fighting Fake News with AI"}]},{"@type":"WebSite","@id":"https:\/\/d3.harvard.edu\/platform-rctom\/#website","url":"https:\/\/d3.harvard.edu\/platform-rctom\/","name":"Technology and Operations Management","description":"MBA Student Perspectives","potentialAction":[{"@type":"性视界Action","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/d3.harvard.edu\/platform-rctom\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"}]}},"_links":{"self":[{"href":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-json\/wp\/v2\/hck-submission\/36437","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-json\/wp\/v2\/hck-submission"}],"about":[{"href":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-json\/wp\/v2\/types\/hck-submission"}],"author":[{"embeddable":true,"href":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-json\/wp\/v2\/users\/11450"}],"replies":[{"embeddable":true,"href":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-json\/wp\/v2\/comments?post=36437"}],"version-history":[{"count":0,"href":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-json\/wp\/v2\/hck-submission\/36437\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-json\/wp\/v2\/media\/36590"}],"wp:attachment":[{"href":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-json\/wp\/v2\/media?parent=36437"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/d3.harvard.edu\/platform-rctom\/wp-json\/
wp\/v2\/categories?post=36437"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}