SenseTime and Public Safety
Cities and countries have already begun to deploy AI and ML technologies for public safety and security. On one hand, machine learning applications in image and video recognition can help law enforcement officials detect criminal activity and efficiently prevent acts that endanger public safety. On the other hand, without thoughtful safeguards, the misuse of these technologies by law enforcement poses sobering human rights risks. Using China's most valuable AI company, SenseTime, as a case study, this article discusses how the company has partnered with government authorities to accelerate product development, why its partnership with the Chinese state is potentially problematic, and how it can safeguard its products against misuse.
On April 9, 2018, the Beijing-based company SenseTime announced that it had raised $600 million from the Alibaba Group and other investors at a valuation of more than $4 billion [1]. With the announcement, the company became the world's most valuable artificial intelligence startup, further underscoring the gravity of the Chinese government's national policy, announced just a year prior, to become the world's leader in the research, development, and commercialization of artificial intelligence technologies by 2030.
Founded in October 2014 by Dr. Xiao'ou Tang, a Professor of Information Engineering at the Chinese University of Hong Kong, SenseTime has developed commercial products that leverage deep learning in computer vision to replicate tasks typically performed by trained human eyes. These tasks include facial, image, and text recognition; video analysis; and image and video editing. SenseTime's core platform is currently used by more than 400 companies across a wide range of industries and verticals, in applications that range from the playful to the mission-critical [2]. One of SenseTime's customers, Meitu, a Chinese selfie app, lets users modify their appearance and take funnier or more attractive-looking selfies using SenseTime's image and video editing capabilities [3]. By contrast, China's fintech companies use SenseTime's platform as a mission-critical identity-verification system for opening accounts. For China's 4,000 peer-to-peer lenders, SenseTime's identity-verification product, SenseFace 3.0, phased out the days-long manual verification process that had bottlenecked loan disbursement in online lending [4].


Questions of privacy and the potential for government and corporate misuse have consistently dogged the company since its early days of product development and commercialization. Civil libertarians within China and internationally argue that SenseTime's technologies have been used to track minorities in places like the Uighur region of Xinjiang and religious worshipers attending church in the coastal city of Wenzhou [10]. In response, the company has often attempted to publicly absolve itself of responsibility for how its clients use its products. As SenseTime's PR manager Franky Chan states: "SenseTime mainly provides customers with algorithms and technology to process their data. We do not obtain, and have no control of, the data from customers. By nature, AI is only a tool; it depends on whether the user uses it for good or bad causes." [11]
SenseTime's response to these criticisms indicates that companies developing machine-learning-enabled products in the image and video space need to strike a balance between efficiency and privacy. Instead of evading conversations about the potential misuse of its products, the company could proactively work with activists, policymakers, and industry actors to incorporate these concerns into its product development by explicitly building safeguards into its products that prevent misuse. Furthermore, the company's public perception could be strengthened with additional transparency and disclosure about where, when, and how its products have been used in law enforcement actions within China and globally. Lastly, given its technology's broad social ramifications, SenseTime could work in conjunction with civil society and corporate actors to help define and enforce laws governing the acceptable use of that technology both within China and globally.
As AI and machine learning technologies transform every profession, industry, and society, we will continue to be confronted by the ethical implications and consequences of these innovations. This raises the question: are companies responsible for the misuse of their products? And how, if at all, should companies safeguard themselves against such misuse?
[1] Bloomberg, "China Now Has the Most Valuable AI Startup in the World," accessed November 2018.
[2] Quartz, "The billion-dollar, Alibaba-backed AI company that's quietly watching people in China," accessed November 2018.
[3] Jiayang Fan, "China's Selfie Obsession," The New Yorker, December 25, 2017, https://www.newyorker.com/magazine/2017/12/18/chinas-selfie-obsession, accessed November 2018.
[4] SenseTime, "Customer Cases," accessed November 2018.
[5] Josh Chin and Liza Lin, "China's All-Seeing Surveillance State Is Reading Its Citizens' Faces," Wall Street Journal, accessed November 2018.
[6] Shu-Ching Jean Chen, "The Faces Behind China's Artificial Intelligence Unicorn," Forbes, accessed November 2018.
[7] Sohu.com, "AI to help criminal investigation," accessed November 2018.
[8] Sebastian Moss, "China's SenseTime, the world's most valuable AI startup, plans five supercomputers," accessed November 2018.
[9] Suning, "Suning Announces Investment in SenseTime to Further Deploy Smart Retail Strategy With AI Innovation," accessed November 2018.
[10] Josh Chin and Liza Lin, "China's All-Seeing Surveillance State Is Reading Its Citizens' Faces," Wall Street Journal, accessed November 2018.
[11] New Statesman, "SenseTime: How the world's most valuable AI startup is changing China," accessed November 2018.
Thanks for the article! This is a hard dilemma, and its impact only grows stronger as the days go by.
I believe companies should be held liable for the use of their products. On one end of the spectrum, there are regulations imposed on arms dealers for exactly that purpose. Since the law always lags behind technology, I'm unaware of such regulations yet, but they are sure to come. Such products, if they are used for public defense purposes, should be regulated. The root problem is that we can't trust the Chinese government to have any checks and balances on the use of such technology.
I call and raise your question: next year, when a criminal group hacks the platform to spy on their targets, who will be held liable? The government or SenseTime?
Even though we are still far from a sci-fi scene out of a Spielberg movie (the film "Minority Report" reflects this dilemma), AI is evolving at a faster pace than regulation, making the gap between law and reality ever larger. As you mentioned, companies have an excellent opportunity to promote the debate in society and involve many stakeholders even before launching solutions to the market. However, to safeguard their position and sustainably develop the market for AI applications, I believe that both suppliers and users of these products must act in coordination. Otherwise, the action of any single company won't be enough to mobilize society and policymakers. The private sector, acting in coordination, can fuel the ethical and social debate and push people, governments, and international bodies to react. Still, I think policymakers bear the core responsibility for preventing AI misuse and for assuring society that its voice will be reflected in the corresponding regulations.
Interesting topic, and a great essay about AI and SenseTime.
I believe AI can be used in good or bad ways that could benefit or hurt our society, and companies have the responsibility to use AI in a positive way. As you mentioned in the article, "By nature, AI is only a tool, it depends on whether the user uses it for good or bad causes."
When it comes to using AI for public security, I personally think it should be used, because public security is crucial for every single person, even though I agree that this might hurt personal privacy. It should be applied very carefully in public settings.
It's somewhat eerie how closely this mimics the "God's Eye" in the Fast and Furious movies. I think society always gets nervous when we find out how much information companies have (e.g., Google and Facebook). Sometimes I wonder if we need to just accept that this is the state we are in and, if so, whether we can remove the legal hurdles so that companies like this can go out and do tremendous good in the world. Privacy, I think, is almost a myth now. Even when we think that we have privacy, do we? Very thought-provoking article.
Life imitates art perhaps :)
Very interesting article, thanks! While the ethical questions may be controversial and the technology poses a substantial threat, I think it is not primarily the role of the company to determine what is "good" and what is "bad". While we as a society hope that companies will operate with corporate responsibility, it would be foolish for the government not to regulate the potential ramifications of high-tech innovation. Since the government should represent and work for society, I think the government is the body most likely to do a reasonable job of mitigating the potential threats.