Civic Tech | Harvard Business School AI Institute

The Harvard Business School AI Institute catalyzes new knowledge to invent a better future by solving ambitious challenges.

The New Influence War: How AI Could Hack Democracy (January 26, 2026)

What the rise of AI swarms reveals about the future of influence, information, and democratic resilience.

As we move into the era of agentic AI, what kind of influence will this emerging technology have on democracy and misinformation? In the new Science paper “How Malicious AI Swarms Can Threaten Democracy,” Amit Goldenberg, Assistant Professor of Business Administration at Harvard Business School and Faculty PI of the Digital Emotions Lab at the Harvard Business School AI Institute, and an international, multidisciplinary group of co-authors argue that we’re entering a phase where “malicious AI swarms” could use multi-agent systems to infiltrate communities, mimic human social behavior, and iteratively refine persuasion tactics in real time. By expanding misinformation into persistent manipulation, these systems threaten the information ecosystem that democratic societies depend on, but Goldenberg and his co-authors also outline technical, economic, and institutional measures that could meaningfully defend against this new danger.

Key Insight: AI Swarms Operate Like Digital Societies

“Enabled by these capabilities, a disruptive threat is emerging: swarms of collaborative, malicious AI agents.” [1]

Unlike earlier botnets, which relied on centralized control, rigid scripts, and human labor, AI swarms combine LLM reasoning with multi-agent architectures to function more like adaptive digital societies. The authors define malicious AI swarms as systems of persistent agents that coordinate toward shared objectives, adapt in real time to engagement and platform cues, and operate with minimal human oversight across platforms. Five capabilities make these systems especially potent. (1) Swarms replace centralized command with fluid coordination, allowing thousands of AI personas to adapt locally while periodically synchronizing narratives. (2) They can map social networks to identify and infiltrate vulnerable communities with tailored appeals. (3) Human-level linguistic mimicry and irregular behavior patterns help them evade detection. (4) Continuous, automated A/B testing enables rapid optimization of persuasive content. (5) Finally, their always-on persistence allows influence to accumulate gradually, embedding itself within communities over time and subtly reshaping norms, language, and identity. As the article notes, recent elections in Taiwan and India saw a proliferation of AI-generated propaganda and synthetic media outlets, meaning the threat is not hypothetical: it is already here and poised to expand.
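To make capability (4) concrete, consider a toy simulation. The sketch below is illustrative Python only, with invented engagement rates and no connection to the paper's data: a Thompson-sampling loop, a standard automated A/B-testing strategy, quickly concentrates nearly all of its messaging on whichever variant draws the most engagement.

```python
# Toy A/B-testing loop (Thompson sampling). The engagement rates are
# made-up numbers; the point is how fast the loop finds the best variant.
import random

random.seed(0)
rates = [0.02, 0.05, 0.11]   # hypothetical true engagement per message variant
wins = [1, 1, 1]             # Beta(1, 1) priors over each variant's rate
losses = [1, 1, 1]

choices = []
for _ in range(5000):
    # Draw a plausible rate for each variant from its posterior, pick the best.
    samples = [random.betavariate(w, l) for w, l in zip(wins, losses)]
    k = samples.index(max(samples))
    choices.append(k)
    # Simulate engagement and update that variant's posterior.
    if random.random() < rates[k]:
        wins[k] += 1
    else:
        losses[k] += 1

print("share of messages sent as the best variant:",
      choices.count(2) / len(choices))
```

Run continuously across thousands of personas, this same feedback dynamic is what lets a swarm refine persuasive content in real time.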

Key Insight: The Harm Cascade

“Emerging capabilities of swarm-driven influence campaigns threaten democracy by shaping public opinion, which leads to cascading harms.” [2]

Goldenberg and his team argue that AI swarms could trigger a ‘cascade’ of harms by systematically distorting the information ecosystem. By engineering ‘synthetic consensus’ and targeting different misinformation to different communities, these agents would have the power to undermine the independent thought essential for collective intelligence while simultaneously fragmenting the public sphere. This manipulation, together with coordinated synthetic harassment campaigns, could create a hostile environment that drives journalists and citizens into silence. The damage would compound as swarms ‘poison’ the web with fabricated content that contaminates future AI training data. Ultimately, this sustained erosion of trust could corrode institutional legitimacy, rendering democratic safeguards vulnerable to collapse.

Key Insight: A Layered Defense Strategy

“Taken together, these measures offer a layered strategy: immediate transparency to restore trust, proactive education to bolster citizens, resilient infrastructures to reduce systemic vulnerabilities, and sustained investment to monitor and adapt over time.” [3]

Rather than a single fix, the authors argue for a layered defense strategy designed to raise the cost, complexity, and visibility of swarm-based manipulation. The first layer is always-on detection: continuous monitoring systems that identify statistically anomalous coordination patterns in real time, paired with public audits and transparency to reduce misuse. Because attackers will adapt, detection alone is insufficient. A second layer involves simulation and stress-testing. Agent-based simulations can replicate platform dynamics and recommender systems, allowing researchers and platforms to probe how swarms might evolve and to recalibrate defenses before major elections or crises. Third, the authors emphasize empowering users through optional “AI shields,” tools that flag likely swarm activity, allowing individuals to recognize suspicious content. Finally, the paper highlights governance and economic levers as essential. Proposals include standardized persuasion-risk evaluations for frontier models, mandatory disclosure of automated identities, stronger provenance infrastructure, and a distributed AI Influence Observatory to coordinate evidence across platforms, researchers, and civil society. Crucially, the authors argue that disrupting the commercial market for manipulation may be among the most effective ways to reduce large-scale abuse.
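As a concrete illustration of the first detection layer, the sketch below flags pairs of accounts whose posts repeatedly land in the same short time window, one simple form of statistically anomalous coordination. Everything here is invented for the example (window size, account names, posting data); production systems would combine many richer signals.

```python
# Toy coordination detector: count how often two accounts post within the
# same 60-second bucket. Synthetic data; "bot0"/"bot1" post in near-lockstep.
import itertools
import random

random.seed(1)
WINDOW = 60      # seconds per bucket (a hypothetical choice)
DAY = 86_400

posts = {f"user{i}": [random.uniform(0, DAY) for _ in range(40)] for i in range(8)}
base = [random.uniform(0, DAY) for _ in range(40)]
posts["bot0"] = [t + random.uniform(0, 5) for t in base]
posts["bot1"] = [t + random.uniform(0, 5) for t in base]

def shared_buckets(times_a, times_b):
    """Number of time buckets in which both accounts posted."""
    return len({int(t // WINDOW) for t in times_a} &
               {int(t // WINDOW) for t in times_b})

scores = {(a, b): shared_buckets(posts[a], posts[b])
          for a, b in itertools.combinations(posts, 2)}
mean = sum(scores.values()) / len(scores)
for pair, score in sorted(scores.items(), key=lambda kv: -kv[1])[:3]:
    print(pair, score, f"(corpus mean: {mean:.2f})")  # the bot pair stands out
```

The anomaly signal is the gap between the bot pair's co-occurrence count and the corpus mean; real systems would calibrate that threshold against permutation baselines rather than eyeballing it.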

Why This Matters

For business leaders and professionals, this study reveals a threat that extends beyond electoral politics into the fundamental information ecosystem that underpins market confidence, consumer behavior, and corporate reputation. The same AI swarm technologies that manipulate political discourse could just as easily target brand perception, financial markets, or industry narratives. The defense strategy outlined by the authors can likewise provide a roadmap for corporate action: implementing detection systems for monitoring threats to brand reputation, advocating for industry standards around AI transparency, and supporting governance initiatives that protect the broader information ecosystem. Executives who treat information integrity as core infrastructure will be better positioned to protect stakeholder trust, decision quality, and long-term resilience in an era of AI-enabled influence operations.

Bonus

For a look at how efforts to align AI systems with human preferences can unintentionally undermine trustworthiness itself, check out “AI Alignment: The Hidden Costs of Trustworthiness.”

References

[1] Daniel Thilo Schroeder et al., “How Malicious AI Swarms Can Threaten Democracy,” Science 391 (2026): 354.

[2] Schroeder et al., “How Malicious AI Swarms Can Threaten Democracy,” 355.

[3] Schroeder et al., “How Malicious AI Swarms Can Threaten Democracy,” 357.

Meet the Authors


Amit Goldenberg is an assistant professor in the Negotiation, Organizations & Markets unit at Harvard Business School, an affiliate of Harvard’s Department of Psychology, and a faculty principal investigator in the HBS AI Institute’s Digital Emotions Lab.

Additional Authors: Daniel Thilo Schroeder, Meeyoung Cha, Andrea Baronchelli, Nick Bostrom, Nicholas A. Christakis, David Garcia, Yara Kyrychenko, Kevin Leyton-Brown, Nina Lutz, Gary Marcus, Filippo Menczer, Gordon Pennycook, David G. Rand, Maria Ressa, Frank Schweitzer, Dawn Song, Christopher Summerfield, Audrey Tang, Jay Van Bavel, Sander van der Linden, and Jonas R. Kunst

Evidence at the Core: How Policy Can Shape AI’s Future (September 25, 2025)

As AI technology advances, policymakers will face the crucial task of steering its development responsibly. In the new paper published in Science, “Advancing science- and evidence-based AI policy,” a multidisciplinary group of experts, including Himabindu Lakkaraju, Assistant Professor of Business Administration at Harvard Business School and Principal Investigator in the Trustworthy AI Lab at the Harvard Business School AI Institute, argue that the future of AI governance depends on robust support for evidence utilization and generation.

Key Insight: Evidence Must Drive AI Policy

“Defining what counts as (credible) evidence is the first hurdle for applying evidence-based policy to an AI context.” [1]

The authors stress that the idea of evidence itself is not simple or straightforward. What qualifies as evidence varies across fields: in health policy, randomized controlled trials serve as the gold standard, while in economics, forecasts and theoretical models carry weight. History also shows that evidence is often questioned or ignored: the tobacco industry leaned on inconclusive studies to stall public health measures, and fossil fuel companies downplayed climate risks despite knowing otherwise. These examples show that the tasks of defining, evaluating, and acting on evidence are urgent and complex. In response to these challenges, the authors encourage the US government to utilize the Foundations for Evidence-Based Policymaking Act (Evidence Act).

Key Insight: Policy Can Accelerate Evidence Generation

“We recommend that policy-makers require major AI companies to disclose more information about their safety practices to governments and, especially, to the public.” [2]

The authors propose several mechanisms to make policy the driver of evidence creation. Policymakers should incentivize pre-release evaluations, ensuring that risks (such as using AI for malicious purposes, the likelihood of AI hallucinations, or the prevalence of AI generating copyrighted material) are measured before companies deploy new models. They also call for increased transparency, citing findings from the 2024 Foundation Model Transparency Index that top AI companies fall short when it comes to publicly reporting their risk-mitigation practices. They recommend post-deployment monitoring, such as adverse-event reporting systems that track concrete instances of harm once models are in use. Finally, they encourage protections for third-party research, noting that independent investigators often face legal and contractual barriers when probing AI systems. Safe harbor provisions, modeled on cybersecurity law, would enable such research to proceed in the public interest. Together, these measures would expand the evidence base and allow AI policy to evolve in step with the technology itself.

Key Insight: Consensus in a Fragmented Field

“Scientific consensus, including on areas of uncertainty or immaturity, is a powerful primitive for better AI policy.” [3]

The AI research and policy community is currently divided, with divergent views on the seriousness of risks and the speed of technological progress. This lack of alignment makes it difficult to establish clear, effective policy responses. Drawing from precedents in climate governance and disaster policy, the authors call for deliberate processes that foster consensus, even amid uncertainty. Global initiatives, such as the UN’s High-Level Advisory Board on AI and proposals for an International Scientific Panel, aim to provide shared baselines of evidence. Such consensus would not eliminate debate but would ensure that disagreements unfold within a common evidentiary framework, strengthening the legitimacy and durability of policy decisions.

Why This Matters

As AI becomes more central to business operations, having trustworthy and reliable systems will be crucial. Business leaders and executives will benefit from understanding the growing landscape of AI policy, supporting evidence-based foundations for AI technology, and following the guidance of institutions that produce independent research. By aligning with these principles, companies will not only be ready to comply with emerging regulations but will also be a step ahead in building trust with customers and stakeholders. As the authors conclude, governing AI will be one of the grand challenges of the 21st century, and informed business leaders will have an important role to play in facing it.

References

[1] Rishi Bommasani et al., “Advancing science- and evidence-based AI policy,” Science 389 (2025): 459.

[2] Bommasani et al., “Advancing science- and evidence-based AI policy,” 460.

[3] Bommasani et al., “Advancing science- and evidence-based AI policy,” 461.

Meet the Authors

Himabindu Lakkaraju is an Assistant Professor of Business Administration at Harvard Business School and PI at the HBS AI Institute’s Trustworthy AI Lab. She is also a faculty affiliate in the Department of Computer Science at Harvard University, the Harvard Data Science Initiative, the Center for Research on Computation and Society, and the Laboratory for Innovation Science at Harvard. Professor Lakkaraju’s research focuses on the algorithmic, practical, and ethical implications of deploying AI models in domains involving high-stakes decisions such as healthcare, business, and policy.

Additional Authors: Rishi Bommasani, Sanjeev Arora, Jennifer Chayes, Yejin Choi, Mariano-Florentino Cuéllar, Li Fei-Fei, Daniel E. Ho, Dan Jurafsky, Sanmi Koyejo, Arvind Narayanan, Alondra Nelson, Emma Pierson, Joelle Pineau, Scott Singer, Gaël Varoquaux, Suresh Venkatasubramanian, Ion Stoica, Percy Liang, and Dawn Song

AI-Driven Optimization: Transforming Refugee Resettlement (July 24, 2025)

On May 13, 2025, the HBS AI Institute (previously the Digital Data Design Institute at Harvard (D^3)) held a university-wide Generative AI Symposium in partnership with the Office of the Vice Provost for Research, the Office of the Vice Provost for Advances in Learning, the Faculty of Arts and Sciences, and the Harvard John A. Paulson School of Engineering and Applied Sciences, among other partners. This half-day event for Harvard faculty, students, and staff focused on the impact of AI on research, teaching, operations, and innovative applications across professional schools and areas of practice.

In her session, Elisabeth Paulson, Assistant Professor of Business Administration and HBS AI Institute Associate, discussed the refugee relocation crisis, one of humanity’s most pressing challenges. Although more than 30 million people worldwide need resettlement, approaches to refugee placement have mostly relied on manual processes and limited data, resulting in suboptimal outcomes for refugees and host communities. Paulson’s research and talk focus on how AI and machine learning can be used to model and optimize placement decisions, helping to improve this critical humanitarian process.

Key Insight: The Challenge of Successful Refugee Placement

“[O]ver half of the refugees that are resettled to the US do not find employment within 90 days, at which point their benefits are phased out.”

Elisabeth Paulson

In her presentation, Paulson highlighted that some locations have employment rates of around 5%, while others are above 40%. Each location has capacity limits, so simply relocating everyone to the locations with the highest employment rates is not possible, nor would doing so account for the refugees who succeed in lower-performing areas. The overall low employment rate and the stark disparity across locations underscore the critical importance of initial placement decisions. Paulson’s research aims to improve the placement decision process with AI and machine learning.

Key Insight: Optimizing the Assignment Problem

“[I]f we can predict these match qualities or these likelihoods of finding employment, then we can use optimization to find the optimal assignment of people to places.”

Elisabeth Paulson

A range of factors, such as gender and language proficiency, can affect whether a refugee will be successful in finding employment, but the importance and predictive power of these factors differ across placement locations, and the characteristics of refugee populations and host communities are dynamic and constantly in flux. Additionally, resettlement officers must make placements one at a time (sequentially), without knowledge of the characteristics of future refugees. Paulson explained how AI and machine learning can help on both fronts: by discovering synergies between people and locations where they are likely to find employment, and by using advanced mathematical modeling to balance sequential decision-making against long-term scenario probabilities. Using these methods, Paulson reported that US employment rates can increase by about six percentage points, which translates into thousands more refugees finding work after relocation.
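A minimal sketch of the optimization step, assuming the predicted match qualities are already in hand: each location's capacity is expanded into individual slots so that a standard assignment solver can maximize total expected employment. All families, capacities, and probabilities below are hypothetical, and the real system Paulson describes is far richer.

```python
# Toy placement optimizer: maximize total predicted employment subject to
# location capacity limits. Probabilities stand in for a learned model.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n_families = 8
locations = ["A", "B", "C"]
capacity = {"A": 3, "B": 2, "C": 3}     # hypothetical slot limits (sum to 8)

# p[i, j]: predicted probability that family i finds work at location j.
p = rng.uniform(0.05, 0.45, size=(n_families, len(locations)))

# Duplicate each location into `capacity` identical slots so the capacity
# constraint reduces to a plain one-to-one assignment problem.
slot_loc = [loc for loc in locations for _ in range(capacity[loc])]
cost = -np.column_stack([p[:, locations.index(loc)] for loc in slot_loc])

rows, cols = linear_sum_assignment(cost)  # minimizing -p maximizes total p
for i, c in zip(rows, cols):
    loc = slot_loc[c]
    print(f"family {i} -> {loc}  (p = {p[i, locations.index(loc)]:.2f})")
print(f"total expected placements: {-cost[rows, cols].sum():.2f}")
```

The sequential version of the problem, placing arrivals one at a time without seeing future cases, is harder; it is typically handled with the kind of stochastic-optimization modeling Paulson describes rather than a one-shot solve like this.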

Key Insight: AI in Action through GeoMatch

“[A]ll of these ideas and tools that I just talked about are all incorporated into a software tool called GeoMatch.”

Elisabeth Paulson

The practical application of this research has culminated in the development of GeoMatch, a tool housed at the Stanford Immigration Policy Lab with pilots running in the US and Switzerland. GeoMatch streamlines, improves, and speeds up the decision-making process, taking just minutes compared to hours when done manually. The tool also maintains human oversight, allowing relocation officers to modify and overrule recommendations. Paulson hopes that the technology and machine learning behind GeoMatch will prove useful in other regions around the world as well.

Why This Matters

For business leaders and executives, the application of AI in refugee resettlement offers valuable insights into the broader potential of AI for complex resource-allocation challenges. The methodology of personalized matching and strategic forecasting has parallels in customer segmentation, human capital allocation, and market-entry strategy. It also serves as a blueprint for implementing AI solutions that deliver both operational efficiency and strategic advantage, which is particularly relevant as organizations navigate increasingly complex global markets while managing constrained resources and uncertain environments.

Meet the Speaker


Elisabeth Paulson is an Assistant Professor of Business Administration in the Technology and Operations Management Unit at Harvard Business School. Her research is in the area of operations for social good. In particular, she designs analytical methods and algorithms for allocating scarce resources efficiently and fairly to improve social outcomes. Much of her work draws on tools from optimization, machine learning, mathematical modeling, and statistics. She received her PhD in Operations Research from MIT.

AI Elevate: UAE: AI Readiness and Exponential Growth (July 10, 2025)

As AI continues to reshape industries globally, the Harvard Business School AI Institute (previously the Digital Data Design Institute at Harvard (D^3)) and the Harvard Business School Club of the Gulf Cooperation Council hosted AI Elevate: From Readiness to Exponential Growth on December 13, 2024, in Dubai, UAE. This one-day conference provided business leaders, researchers, and government officials with crucial insights into AI strategy, industry transformation, and global market integration. For an introduction to the day-long conference, see the Opening Remarks and the Agenda.

In the session, H.E. Omar Sultan Al Olama, the world’s first Minister of State for Artificial Intelligence, sat down with HBS AI Institute co-founder Karim Lakhani for a fireside chat to discuss the UAE’s strategic approach to AI integration and its impact on governance, growth, and quality of life.

Key Insight: History Driving AI Adoption

“An ignorance-based decision to ban something you don't understand is going to lead to you going backwards.”

H.E. Omar Sultan Al Olama

Al Olama drew an important parallel between today’s AI hesitation and the Middle East’s historic decision to ban the printing press, which cost the region its global knowledge leadership hundreds of years ago. Concerns about misinformation, loss of control over knowledge production, and fear of unknown consequences – what Al Olama terms ‘ignorance-based decisions’ – are top of mind now because of the uncertainty around AI. In this case, however, the UAE is aggressively leaning into the new technology, appointing a Minister of State for Artificial Intelligence and launching more than 147 different applications of AI within the government.

Key Insight: A Dual Track for National Development

“Our development over 50 years was actually a very interesting cycle: we focused on software, so on people and their development, and then we focused on the hardware, which is the buildings, the bridges, the infrastructure, and now we're going back to focusing on the software, because if you always balance the two, you progress. If you choose to develop one and not the other, you will always fall behind.”

H.E. Omar Sultan Al Olama

This dual approach has been central to the UAE’s growth strategy over the past five decades, with learning and upskilling in AI as only the latest step. For example, over 377 senior government officials recently completed an intensive AI training program, and 2.1 million UAE citizens engaged in prompt engineering for UAE Codes day.

Key Insight: AI for Quality of Life

“We need to dedicate this tool to the improvement of our lives.”

H.E. Omar Sultan Al Olama

Al Olama stressed that AI should be used to enhance people’s quality of life. In Abu Dhabi, for example, traffic lights are connected to an AI hub that optimizes flow, ensuring that the existing infrastructure can maintain efficiency even as the population grows. Another example is the use of AI in airports, where facial recognition technology allows a quicker, more seamless experience, reducing the lengthy checkpoint waits prevalent elsewhere.

Why This Matters

Al Olama and Lakhani’s conversation provides executives with examples and a strategy for approaching AI adoption and transformation that extends beyond traditional models. The UAE’s experience demonstrates that successful AI implementation requires organizational forethought and commitment, balanced investment in both human and technological capital, and a fundamental reorientation towards human-centered outcomes. By fostering an AI-ready populace, the UAE demonstrates how government, business, and society at large can collaborate to prioritize meaningful outcomes. The UAE’s AI mandate is clear: invest with purpose, lead with clarity, and deploy with empathy.

Meet the Speakers

H.E. Omar Sultan Al Olama is the UAE Minister of State for Artificial Intelligence, Digital Economy and Remote Work Applications. He is also Director General of the Prime Minister’s Office at the Ministry of Cabinet Affairs.


Karim Lakhani is the Dorothy & Michael Hintze Professor of Business Administration at Harvard Business School. He specializes in technology management, innovation, digital transformation, and artificial intelligence. He is the Co-Founder and Chair of the HBS AI Institute and the Founder and Co-Director of the Laboratory for Innovation Science at Harvard.

The AI Revolution in Software Development: How Generative AI is Reshaping Coding Practices (November 22, 2024)

The integration of artificial intelligence, such as generative AI, into daily workflows is helping workers streamline their approach to all sorts of tasks. “Generative AI and the Nature of Work” by Manuel Hoffmann, postdoctoral fellow at the Laboratory for Innovation Science at Harvard (LISH), Sam Boysel, postdoctoral fellow at LISH, Frank Nagle, an Assistant Professor in the Strategy Unit at Harvard Business School and a faculty affiliate of the Harvard Business School AI Institute, Sida Peng, Senior Economist at Microsoft, and Kevin Xu, Software Engineer at GitHub Inc., provides compelling evidence about how AI is transforming software development practices. By examining the impact of GitHub Copilot, an AI-powered code-completion tool, on open-source software developers, the study offers valuable insights into how AI may reshape knowledge work more broadly.

Key Insight: AI Enables Developers to Focus on Core Coding Tasks

“We find that top developers of open source software are engaging more in their core work of coding and are engaging less in their non-core work of project management.” [1]

The researchers found that developers with access to GitHub Copilot increased their coding activities by 5.4 percentage points (a 12.37% increase) while reducing project management activities by 10 percentage points (a 24.93% decrease). This suggests AI tools allow knowledge workers to spend more time on their primary skilled tasks by reducing the burden of auxiliary responsibilities, such as reviewing code and submitting and responding to pull requests.
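A quick back-of-the-envelope check shows how the percentage-point and percent figures fit together, and what baseline activity shares they imply; the numbers below are derived only from the figures quoted above.

```python
# A percentage-point effect divided by its relative (%) effect implies
# the baseline share. Inputs are the study figures quoted in the text.
coding_pp, coding_rel = 5.4, 0.1237   # +5.4 pp, reported as a 12.37% rise
mgmt_pp, mgmt_rel = 10.0, 0.2493      # -10 pp, reported as a 24.93% fall

print(f"implied baseline coding share:     {coding_pp / coding_rel:.1f} pp")
print(f"implied baseline management share: {mgmt_pp / mgmt_rel:.1f} pp")
```

In other words, the reported effects imply coding was roughly 44% of baseline activity and project management roughly 40%, consistent with coding being these developers' core work.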

Key Insight: AI Promotes More Autonomous Work Patterns

“[G]enerative AI enables developers to bypass collaboration frictions and more easily make unilateral code contributions to projects.” [2]

The study revealed that Copilot users engaged in more autonomous work, reducing their interactions with other developers. Specifically, developers with Copilot access worked in repositories with 17 fewer peers on average, a 79.3% reduction compared to non-users. This suggests AI tools may reduce the need for collaboration on routine tasks, allowing workers to operate more independently. Furthermore, the researchers found that a secondary effect of reduced collaboration was avoidance of the usual collaborative difficulties and transaction costs that would otherwise impede workflows.

Key Insight: Implications for Workers

“[E]stimates indicate that Copilot eligible developers are both exploring new languages and choosing languages with greater labor market return.” [3]

The research suggested that AI had a positive impact on less skilled workers. According to the study, less experienced workers who integrated AI tools into their workflow increased their coding activities and reduced time spent on project management activities at a higher rate than that of their more skilled coworkers.

The study also showed that Copilot-eligible developers increased their exposure to new programming languages by 21.79% compared to non-users. Moreover, the languages they explored tended to be associated with 1.41% higher salaries, suggesting AI tools may facilitate valuable skill development and career advancement. Based on this finding, which indicates that the cost of experimentation decreases with the introduction of Copilot, the researchers suggest that, overall, the AI tool shifted programmers' focus toward exploration and away from exploitation.

Key Insight: AI’s Impact is Sustained Over Time

“[T]he benefits of accessing Copilot seem to arise very quickly and after some experimentation with it, the impacts are stable up to approximately two years.” [4]

The study tracked developers over a two-year period, finding that the effects of Copilot persisted throughout this time. While there was some initial ramp-up and later attenuation, the impact remained significant, suggesting AI tools can drive lasting changes in coding practices rather than just creating short-term productivity boosts.

Why This Matters

While this study focuses on open-source programmers, it suggests to business leaders the ways in which AI can reshape work practices in all sorts of organizations. Importantly, the study reveals that the benefits of AI are more pronounced for lower-skill workers. Executives can implement initiatives promoting the use of AI to close the gap between high-skill and low-skill workers, creating more efficient work environments while also promoting upskilling and inclusivity. These findings could also encourage managers to identify areas for AI implementation, restructuring workflows to reduce collaborative friction and accommodate more autonomous work on complex projects. Finally, and perhaps most crucially for firms seeking to thrive in a rapidly evolving business environment: if AI lowered the cost of exploration for the coders in this study even as they spent less time on non-core work, CEOs should consider how their technology departments, or any departments where they implement AI, can drive innovation and experimentation with no adverse effect on the exploitation of established projects.

References

[1] Manuel Hoffmann, Sam Boysel, Frank Nagle, Sida Peng, and Kevin Xu, “Generative AI and the Nature of Work,” Harvard Business School Strategy Unit Working Paper No. 25-021 (November 1, 2024): 1-71, 29.

[2] Hoffmann et al., “Generative AI and the Nature of Work,” 23-24.

[3] Hoffmann et al., “Generative AI and the Nature of Work,” 25.

[4] Hoffmann et al., “Generative AI and the Nature of Work,” 26.

Meet the Authors

Manuel Hoffmann is a postdoctoral fellow at the Laboratory for Innovation Science at Harvard (LISH). His research focuses on labor, innovation, and health economics while leveraging experimental, quasi-experimental, and structural methods to answer exciting research questions that can improve individual and social welfare.

Sam Boysel is a postdoctoral fellow at the Laboratory for Innovation Science at Harvard. He is an applied microeconomist with research interests at the intersection of digital economics, labor and productivity, industrial organization, and socio-technical networks. Specifically, his work has centered around the private provision of public goods, productivity in open collaboration, and welfare effects within the context of open source software (OSS) ecosystems.


Frank Nagle is an Assistant Professor in the Strategy Unit at Harvard Business School and a faculty affiliate of the HBS AI Institute, the Managing the Future of Work Project, and LISH. He studies how competitors can collaborate on the creation of core technologies, while still competing on the products and services built on top of them. His research falls into the broader categories of the future of work, the economics of IT, and digital transformation, and considers how technology is weakening firm boundaries.

Sida Peng is Senior Principal Economist in the Office of the Chief Economist at Microsoft. His research interests include econometrics, industrial organization, machine learning, and artificial intelligence. His work has been published in economics, statistics, and computer science journals and conferences, including Biometrika, Marketing Science, the Journal of Health Economics, and AISTATS. He received his Ph.D. in Economics from Cornell University in 2017, and his M.S. in Statistics, B.S. in Mathematics, and B.A. in Economics from the University of Virginia in 2011.

Kevin Xu is a software engineer at GitHub. He focuses on projects related to building trust through transparency, contributing his skills in data analysis and visualization, full-stack engineering, and legal research.

Navigating the Ripple Effects of Regulation: Lessons from China's Tech Industry (November 1, 2024)

With the rise of tech giants across the globe, there has been increasing concern about the consequences their dominance creates for market competition. Governments in various countries have responded by implementing antitrust regulations to curtail the power of these digital platforms. In China, the introduction of the Anti-Monopoly Guidelines for the Platform Economy in February 2021 offers a unique opportunity to study the effects of platform regulation on entrepreneurship. Research by Ke Rong, Professor at Tsinghua University, D. Daniel Sokol, Professor at the University of Southern California, Di Zhou, Professor at Tongji University, and Feng Zhu, Professor of Business Administration at Harvard Business School and Principal Investigator of the Harvard Business School AI Institute Platform Lab, titled “Antitrust Platform Regulation and Entrepreneurship: Evidence from China,” provides valuable insights into how these regulations have impacted competition, investment, and startup activity in the Chinese technology sector.

Key Insight: Platform Regulation Has Reduced Venture Capital Investment

"[A]fter the Platform Guidelines’ enactment, such investments experienced a modest decline of 1.14% per month." [1]

The research demonstrates that the implementation of China’s Platform Guidelines resulted in a significant reduction in venture capital (VC) investments in industries dominated by large tech platforms like Alibaba, Tencent, and ByteDance. Prior to the enactment, these platforms had seen steady growth in corporate venture capital (CVC) investments. However, the regulations curtailed their ability to engage in certain activities, including mergers and acquisitions and strategic investments in startups. As a result, the monthly number of VC investments in affected industries decreased, indicating a chilling effect on venture financing. This suggests that regulatory changes can alter the flow of capital and may necessitate shifts in strategy to adapt to a more constrained investment environment.

Key Insight: Fewer Startups Enter Platform-Dominated Industries

“[A]fter the platform antitrust regulation implementation, the monthly number of investments in and startups entering the 41 influenced industries registered a substantial reduction, of 26.73% and 18.72%, respectively, in comparison to the 127 uninfluenced industries.” [2]

One of the most striking findings from the study is the significant decline in startup activity in the industries targeted by the Platform Guidelines. Startups, which play a crucial role in driving innovation and competition, have been deterred from entering these markets due to increased regulatory uncertainty and the perceived risks associated with competing against dominant platforms. The 18.72% reduction in new entries suggests that regulation, while intended to foster competition, may inadvertently stifle it by creating barriers for new players. For executives, this highlights the importance of anticipating how regulatory shifts might impact industry dynamics and the competitive landscape.
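Estimates like these typically come from comparing influenced and uninfluenced industries before and after the regulation took effect. The sketch below reproduces only that difference-in-differences logic on synthetic monthly entry counts; the paper's actual data, controls, and specification are richer, and the 19% drop is made up to mirror the direction of the reported finding.

```python
# Toy difference-in-differences: 41 "influenced" vs. 127 "uninfluenced"
# industries, 24 months before and after the Platform Guidelines.
import numpy as np

rng = np.random.default_rng(7)
PRE, POST = 24, 24

def monthly_entries(base_rate, post_drop, n_industries):
    """Poisson counts of startup entries per industry-month."""
    pre = rng.poisson(base_rate, size=(n_industries, PRE))
    post = rng.poisson(base_rate * (1 - post_drop), size=(n_industries, POST))
    return pre, post

inf_pre, inf_post = monthly_entries(50, 0.19, 41)       # influenced industries
uninf_pre, uninf_post = monthly_entries(50, 0.00, 127)  # comparison group

did = (inf_post.mean() - inf_pre.mean()) - (uninf_post.mean() - uninf_pre.mean())
print(f"DiD estimate: {did:+.1f} entries/month "
      f"({did / inf_pre.mean():+.1%} of the influenced pre-period mean)")
```

The comparison group nets out market-wide trends, so the remaining gap is attributed to the regulation, under the usual parallel-trends assumption.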

Key Insight: Regulations Have Not Increased Market Competition

"This study transcends a focus on the immediate repercussions of antitrust enforcement on platforms or platform complementors… We find that China’s Platform Guidelines did not increase market competitiveness in tech related industries." [3]

While the Platform Guidelines were designed to curtail monopolistic practices and enhance competition, the study reveals that these regulations have not achieved their intended effect. In fact, competition in the affected markets has weakened as venture capital investments and startup activity have both declined. Researchers found that new startups in affected industries tended to differentiate themselves from existing platforms rather than trying to directly compete. This outcome serves as a cautionary tale for policymakers and business leaders alike, suggesting that the unintended consequences of regulation must be carefully considered. Businesses may need to rethink their strategies for growth and competition in highly regulated environments, as traditional assumptions about increased regulation leading to greater competition may not always hold true.

Why This Matters

The case of China’s Platform Guidelines serves as an example of how government regulation can have far-reaching and unexpected consequences on market dynamics. While the goal of antitrust regulation is often to promote a more level playing field, the reality is that such policies can create additional barriers to the entrepreneurial activity they seek to encourage. Executives operating in industries subject to similar regulations should adapt their business strategies, focusing on differentiation, mitigating regulatory risk, and exploring alternative avenues for growth. At the same time, policymakers should carefully weigh the potential unintended consequences of regulation and consider more nuanced approaches to fostering competition in the platform economy.

References

[1] Ke Rong, D. Daniel Sokol, Di Zhou, and Feng Zhu, “Antitrust Platform Regulation and Entrepreneurship: Evidence from China,” Harvard Business School Working Paper 24-039 (January 16, 2024): 1-26, 3.

[2] Rong, Sokol, Zhou, and Zhu, “Antitrust Platform Regulation and Entrepreneurship: Evidence from China,” Harvard Business School Working Paper 24-039, 3.

[3] Rong, Sokol, Zhou, and Zhu, “Antitrust Platform Regulation and Entrepreneurship: Evidence from China,” Harvard Business School Working Paper 24-039, 4.

Meet the Authors

Ke Rong currently serves as the Director at the Institute of Economics, School of Social Sciences, Tsinghua University, China. He earned his Ph.D. from the University of Cambridge after completing his bachelor's degree at Tsinghua University. Before his appointment at Tsinghua, he held positions as a Senior Lecturer at the University of Exeter and Bournemouth University in the UK, and as a Visiting Scholar at Harvard Business School in the US.

D. Daniel Sokol is the Carolyn Craig Franklin Chair in Law and a Professor of Law and Business at the USC Gould School of Law and Marshall School of Business (marketing department). He holds a courtesy appointment in the Department of Economics and serves as faculty director and co-director of research initiatives at USC. Additionally, in a part-time capacity, he serves as Senior Advisor at White & Case LLP.

Di Zhou is a research associate professor in the School of Economics and Management at Tongji University. He earned his Ph.D. from the Institute of Economics, School of Social Sciences, Tsinghua University. Before joining Tongji, he was a research staff member at Tsinghua University and a visiting fellow at Harvard Business School. His current research interests focus on the digital economy and governance, including platform ecosystems, the data economy, and digital regulation.


Feng Zhu is a Professor of Business Administration at Harvard Business School, faculty lead in the HBS AI Institute Platform Lab, and faculty co-chair of the Harvard Business Analytics Program. Zhu is a leading expert on platform strategy, digital innovation and transformation, competitive strategy, and business model innovation.


Elizabeth M. Adams on civic tech as advocacy work (April 12, 2021)

Civic tech aims to enhance the relationship between people, their community, and government by centering and amplifying the public’s voice in the design and implementation processes of AI-enabled technology. Without public oversight, communities face over-policing, loss of data privacy protections, and the consequences of human bias directing technology used to govern society. It is therefore essential to include diverse perspectives in civic tech solutions to ensure proper representation and consideration for communities of color and other vulnerable populations that are most negatively impacted.

In this episode, our hosts Colleen Ammerman and David Homa speak with Elizabeth M. Adams about the roles and responsibility of government in tech, the ethical implications of technology, and the long game of advocacy work. Elizabeth is a technology integrator working at the intersection of cybersecurity, AI ethics, and AI governance with a focus on ethical tech design. Currently, Elizabeth is a fellow at Stanford University's Digital Civil Society Lab in partnership with the Center for Comparative Studies in Race and Ethnicity.

Watch the episode with Elizabeth M. Adams

Read the transcript, which is lightly edited for clarity.

Colleen Ammerman (Gender Initiative director): Today, we are joined by Elizabeth Adams, a technology integrator working at the intersection of cybersecurity, AI ethics, and AI governance with a focus on ethical tech design. Currently, Elizabeth is a fellow at Stanford University's Digital Civil Society Lab in partnership with the Center for Comparative Studies in Race and Ethnicity. Welcome, Elizabeth. We are very excited to talk with you today.

Elizabeth M. Adams (Stanford University fellow of race & technology at the Center for Comparative Studies in Race and Ethnicity): Thank you. I’m super excited to be here.

David Homa (Digital Initiative director): Elizabeth, thanks so much for joining us. Let's start with the big picture here. Share with us your perspective on what constitutes “civic tech,” and what are some of the ways that intersects with efforts to foster racial justice?

EA: So, that’s actually a very good question. In my mind, 鈥渃ivic tech鈥 is really the process of bringing government, people, and community together to share in the decision-making process around services or technology that communities could be impacted by. And so, when we talk about a racial equity framework, I feel like I’m in the best place in Minneapolis because the city of Minneapolis has adopted a racial equity framework in all of the work that it does and all of the decisions that it makes. So, obviously, when we have conversations about technology, transparency, [and] the things that are going on in the city of Minneapolis, racial equity is at the top of mind. It makes my job a little bit easier, so I don’t have to work so hard to educate those at the city around racial equity. What I’ve spent most of my time doing from a civic tech perspective is educating people on why technology transparency is so important and why we need to break down the entire lifecycle 鈥 from how technology is designed by a company, to how it’s procured by the city, to how users are trained to use that technology. Because if they’re bringing their human bias and they’re using this technology to govern society, we need to just make sure that technology works for all.

DH: That’s really super interesting. Do you think it’s a greater challenge for some people to understand how bias seeps into technology than maybe other sectors and what may be driving some of that difference?

EA: I do think so and I’ll tell you why. Because when I’m seated at the table with elected officials, and appointees, and commissions, not many of them are technologists. So, I can speak [about] this to data scientists and engineers and they get it. But it takes a while. And out of all the elected officials and Minneapolis city council members that I’ve spoken with, there might be one or two who actually get how bias can creep into technology. Most of the time, when you’re talking about vulnerable populations and communities of color, and you’re talking about equity or equality, you’re talking about it from a housing perspective, or an education perspective, or for jobs, or health. And, what I think people don’t realize is that technology runs underneath all of that. It’s all about data and what happens with that data and how that data is harvested and archived and used and, in some cases, profiled. So, I think that just because people are not technologists by nature, or many people that are making decisions around data policy and other policy concerns [are not technologists], that’s part of the challenge for what I see in this space.

CA: I guess, to me, it's sort of the next step to our initial question – what is civic tech and how does this relate to racial inequality? And, you just talked a bit about the fact that often people who are making policy decisions or in those discussions don't really have a solid grasp on how bias shapes technology. So, [what] I'm curious to hear you talk about – and I'm sure this is kind of what your work is really about in a lot of ways – is how you bring them along. How do you educate people? What are the effective ways, right, to get everybody to a point where they can understand applications?

EA: So, that’s an excellent, excellent question. I will teach people across all spectrums. But unless I’m talking with a data scientist, I don’t get super technical. And, even when I’m talking with a data scientist, or an engineer, or an architect, I still don’t get super, super technical because when you start talking ones and zeros, then everyone is the same. So, what I started off doing a couple years ago was just really creating learning events. And, I created avatars and I created personalities for these avatars. But I did not show their faces. I did not show that the male avatar was a Black man or the female was a Black woman. I would just use examples that this person runs the soccer league, or this person is a champion in their community for food security or cleanup.

“when you鈥檙e talking about… equity or equality, you鈥檙e talking about it from a housing perspective, or education, or jobs, or health. And, what I think people don鈥檛 realize is that technology runs underneath all of that.”

Then at the end of the experience, I wanted people to understand that these are your neighbors, right? If technology is impacting them and these are people that you like, shouldn’t we have some conversations about this? So, I found those ways to be really effective. By having these very, kind of, educational experiences, it really helps to bring people along when you’re not talking over them, and you’re talking with them, and you’re allowing them to participate in that process.

CA: That’s great. It sounds like part of what you’re doing 鈥 and especially hearing you talk about creating these personas and profiles 鈥 is kind of helping people move from the purely abstract to something that feels a little bit more tangible, or [that] they can connect to more and understand. Then it sounds like that motivates them to realize how important this is. Like, you’re kind of bringing them along to get them sort of incentivized and to prioritize these issues.

EA: Yeah. And, you know what else is interesting? When you start having an initial conversation with someone about racism, people get defensive immediately. So, you have to kind of break down those barriers and talk about issues that are affecting all of us. That’s part of how I’m able to kind of navigate some of these really sticky conversations that really, at the heart of it, are about racism, about inequality, about human bias. They’re about biases from the folks who are developing the code, because maybe they don’t have enough lived experience with people of diverse backgrounds. But you have to just kind of… for me, that’s what I’ve done. I’ve just used the experiences to bring people along by helping them understand that this really needs to work for all. Technology needs to work for every single human. And, to really make sure that the conversation is human-centered.

DH: Facial recognition is obviously a big topic in the world today. Are there specific examples or situations you've come across where people at first thought like, “oh, well, this is a perfectly good use,” and then you help them realize what some of the stumbling blocks might be?

EA: Yeah, and I still talk about that today. So, I actually don't think all technology is bad. Let's talk about facial recognition technology from that perspective. If a child is lost in the mall, right, and they can use facial recognition to see where that child might have gone, what store, or where they have navigated around the mall, obviously that would be a good use of facial recognition. If someone is coming into your building and they shouldn't be coming into your building, and maybe you might need to identify them because they harmed someone in the building, that would be, to me, an acceptable use of facial recognition technology. Or, if someone's grandparent was lost on the street, right? You'd want to be able to find them and bring them back safely.

But when you start using technology to profile people and overreach into communities and start, as I mentioned, profiling and taking that data and aggregating it with, let's say, license plate readers or an Amazon Ring camera, that's when it becomes harmful, and there are organizations that use [technology] for that purpose. That's where my work begins – kind of helping people understand why these facial recognition systems don't typically work for Black women. And, a lot of it has to do with the training data. There's not enough diversity in the data once the technology is brought together and then it's sold. Also, the people who are designing and developing it aren't necessarily understanding of the second- and third-order consequences of their work. They are selling a product and then they are trusting that those who are using the product are equipped enough to understand if there is bias or artificial intelligence nudging happening within their technology.

DH: What advice would you give to people who are working on technologies, like you said, who may not be thinking about the second- or third-order ramifications? Someone, maybe a data scientist, is working on a project. They’re building models. How should they be thinking about that? Maybe they work for a big company. What would your advice be?

EA: Well, I think it’s interesting question to talk about from an individual perspective, because I think it’s a little harder to reach an individual than it is maybe an organization or an academic institution. Because individually, when I’ve talked to data scientists, they actually think that they are doing the right thing. They have no idea. One of the suggestions I do talk to them about is just do a search, an internet search, on some of the biases in technology and see if maybe that can’t inform your work. When I’ve talked to academic institutions, I’m like, maybe you can bring in a historian so you can see how some of what has happened in our country, or maybe across the world, might be impacting how technology is designed and what people think. Or, just bring in a guest, a guest lecturer. At the city, I spent a number of meetings with the coalition that I’m a part of, and we’ve just made our way around, and started having these learning events with the city attorney and the city clerk, and the division of race and equity, and again, those [people] are not necessarily technologists, to just help them understand these are some tools. Start with the internet. So, I would say that, to me, is the easiest thing, because that’s what I did exactly almost two and a half years ago when I saw a video called, . I knew instantly that my experience with racism, prejudice, discrimination, and then my love for technology 鈥 that this is how it would merge. And, I used the internet to figure out what was going on in the space and I followed my curiosity.

DH: That’s great. And when you bring in those experts, make sure you pay them.

EA: Pay them. [Laughter] Pay them well.

CA: That is a great point, right? Because people who have been doing this work have been doing it for a long time. This is a whole body of research and knowledge that people have been working on and that is important, right? And [it] is something that really can help make progress. I just watched that video that you referenced. Ethi, our creative director, found it for us and shared it with Dave and [me] before this interview. It was great. And, it was such a cool thing to see visualized, as something that we already know from years of scholarly research, which is that gender is racialized. So, you just can't – gender is not separate from race, right? The way that we perceive and understand gender is highly racialized, right? Which you see then in this video with all of these faces of Black women being interpreted as male or masculine. It's such a vivid illustration of that. I just found that very powerful because you can say that to somebody, right? You can say, "well, gender is racialized, you know, let me tell you why." But to actually see that, I think it was very, very powerful. Really kind of drives that home.

EA: I agree. When I first started doing my events, I would share Joy’s video and people would be amazed, and it created a great opportunity for conversation at the end of every session about why this work is so important to unpack the entire design lifecycle. But, in addition to how individuals can learn more about it, there are lots of companies who are now standing up responsible AI teams, where they are working through the process of understanding what this means so that before their tool hits the streets, they’ve at least gone through some gates and some checks to balance it. But without legislation, we are really at the hands of these organizations and these companies deciding for themselves and policing themselves to make sure that their products are the best for all of us.

DH: And that brings us to an interesting point, where when they’re building these products, what are the best ways to get people whose lives are impacted by these technologies into the process? What can organizations be doing?

EA: So, this is such a good question, because I tell people this all the time, and we just kind of have this conversation, which is: you can find someone doing the work and bring them in on a consultant basis, like to consult with you. You don't have to create this massive diversity and inclusion team and start asking your employees to come in and help you solve these problems. There are organizations that have been doing this work for a very, very long time. I just honestly believe that it is around communication. There's no pipeline issue. There's no lack of organizations; I don't care what city it is, someone is doing racial equity work. And, in the age of Zoom now, you can certainly, certainly find someone across the world if you need to, to pull them into the conversation. So yeah, there's many, many different ways. And so, for me, this is just so, so important to continue to have these kinds of conversations to educate people that it's not as hard I think as we make it. It's certainly not hard for me to find a group to have the conversation with. It wasn't hard for me to find a data scientist to talk to and ask them some very, very basic questions. And, I think people have to want to, once they are aware that there are possibly issues in their technology.

CA: It sounds like part of what you're saying, too, is: get the motivation, identify the people with the knowledge and expertise, and let them help you go from awareness and motivation to understanding what you need to know, so you can develop a more sophisticated view and head in the right direction.

EA: Well, I would agree. So, let's just think about this. Say you want to design something for your house: you want a porch, or you want a deck. What do you do? You do your research, and you find out how to make sure it's appropriate. And, in this day and age, I just cannot believe that there are companies out here developing facial recognition technology, or some other AI-enabled technology, that don't know it could possibly harm portions of our communities. So, that to me is just… Here I am, though, still trying to live this double life of finding joy and happiness while leading in this space of digital justice and making sure that people are still aware, and it's a struggle. But if I can do it, I think others can, too. And I think we owe it to our world to offer those skills so that we can all live in communities that thrive.

“maybe they don’t have enough lived experience with people of diverse backgrounds… Technology needs to work for every single human.”

So, if I could just take a step back: my family has been in Minnesota since the late 1800s. My great-great-grandfather was the first Black firefighter in St. Paul, Minnesota, and he served and eventually retired as a captain in 1926. Think about what was going on in our country then. And, of course, I've had several family members since then who have been involved in racial equity work, and my mom was instrumental, before her untimely death, in getting the first urban playground established here in the city. So, I come from a history and a legacy of people who've shown up for this work. When I show up to a conversation like this, I'm not showing up because of some recent tragic event.

I'm showing up because I have a legacy. I'm showing up because I have a lived experience. And so, what happened with George Floyd, the murder of George Floyd, was very personal for me. Not only were we dealing with the COVID pandemic, but now we had another pandemic of racial injustice. And it was very, very difficult for me and my family.

I withdrew, because I needed to figure out how to center myself. What was my space going to look like if I was going to continue to do this work? Because, like I said, I had spent a whole year and a half working really, really hard with the committee and helping them understand. When I joined the Racial Equity Community Advisory Committee, they weren't talking about technology. They weren't talking about video cameras and video surveillance. So, I spent a lot of time doing that legwork. In order for me to continue in this space, I cannot dip into trauma-filled conversations. I won't dip into trauma-filled conversations, because I have to selfishly take care of myself. And as a practitioner, it's extremely hard. You don't just wake up one day and say, "Oh, I'm going to be a practitioner, and I'm going to help a city of half a million people move towards a more tech-transparent city where racial equity is at the top of the conversation." I'm thankful that our city had done that work in 2017, before I got involved. But it has to come from within.

CA: So, I would love to hear you talk a little bit about how you create change, not just by describing the problem but by figuring out solutions, and doing that with people who come from lots of different perspectives and may not be well versed in the problem: different stakeholders, and all the complexity of trying to do that day-to-day. Would love to hear you reflect on that.

EA: Well, there's no short-term solution. Before I really got into the data policy work, I spent a year on the Racial Equity Committee learning about civic tech, learning who the players were in the city of Minneapolis, learning what their concerns were, and showing up to these conversations, sometimes not saying anything, even though I knew that I had some advocacy work that I wanted to discuss with them.

And so, it's really about relationship management and respect: understanding what a particular city councilperson, or the person who runs a division at the city of Minneapolis, faces as their major challenges, and how you can help them while still advocating for what you believe could help improve the city. There's no blueprint. You just show up, and you mess up sometimes, and you say things that maybe aren't appropriate in a city meeting without knowing it, but you have the courage to learn out loud, as I say, and to learn forward. I don't consider it falling forward or failing forward, but learning forward, and just taking those chances.

"without legislation, we are really at the mercy of these organizations and companies policing themselves to make sure their products are the best for all of us."

And there are a lot of people I work with who do the same things. We're trying to figure it out together, and sometimes we're stumbling over each other, especially in the coalition. So, we started forming, and then we started storming; you form first, and then you storm. And now we're norming, and now we're performing.

CA: I love that "learning forward." That's great. And I think part of what I hear you saying is that to do this kind of work on the ground, in city government and in the community, you have to have a learning mindset. Right? If you don't have that learning mindset, then it sounds like you're going to get stuck. Is that what's happening?

EA: Yes, to that point. But the other thing I want to make sure to say is that if it weren't for the folks out there protesting, the folks who are out on the streets really raising awareness of why these issues are so important for us to address, my work would be a lot harder. It takes so many people in the community for things to turn; it's not just the folks behind the scenes working in the meetings. And it is a lot of work.

CA: Technology is so powerful, and these tools, like facial recognition and other kinds of surveillance, are only becoming more powerful. So it seems very important to be doing this work, to try to make sure there is a human-centered approach to their development.

EA: Technology is impacting all of our lives. I worked on pay and gender equity for just about 20 years as a technologist in D.C., where I ran a systems integration lab with around $53 million [in budget] and 200 employees. Because it was in D.C., there wasn't really an issue around diversity and inclusion in the technology; it was more around pay equity and gender equity, making sure that the right opportunities were given to everyone. But coming back to Minneapolis, it has been… it just seems like the topic. And that's why I say it almost feels like a tour. You can only do this for so long, because it really can become part of the fabric of who you are if you don't help other leaders by giving them an opportunity in this space, and if you don't understand what your personal limits are.

And, I just want to say this: while this conversation is enjoyable, it still takes something out of me, because we're talking about technology that could possibly harm people who look like me, and I continue to show up every day telling people that I don't want technology to harm people who look like me. That's why I do this work. But it's good. I think it's good that we're recording this so that, again, people can hear that others need to step into the space. We need more people to show up and help with this work.

CA: So, we do have a wrap-up question that we ask everyone: is there anything that we haven't asked you that you want to talk about, or anything that you haven't had a chance to speak to? Any resources that you want to share? What's a takeaway you'd like to leave people with?

EA: So, if you had asked me this question maybe earlier in the year, I would have told people to read as much as they can about biases in AI. I would have told people to go write articles, host their own learning events, write a short e-book, do whatever they can. But here I am on the other side of the murder of George Floyd, and I think it's been a common theme throughout this conversation: reach for the highest point of happiness that you can. Because guess what? There's going to be another murder; we've seen that. There are going to be more protests. There's going to be another company coming out with biased data, and they will wait until the community says it's harming vulnerable populations or communities of color, and then they may go try to fix it. There will constantly be folks working on data policy. There will be new elected officials. There will be new divisions created within city, state, and federal governments. But reach for the highest point of happiness and work in that joy space, because that's the only way you can keep showing up.

"I just cannot believe that there are companies out here developing facial recognition technology, or some other AI-enabled technology, that don't know it could possibly harm portions of our communities."

And, I say that because, as I mentioned, diversity and inclusion started for my family in 1885, when my great-great-grandfather, William Gaudette, became the first Black firefighter and retired as a captain. So, for over 100 years, this has been a conversation. Maybe it wasn't directly about technology, but it is still the lived experience of Black people in this country. That's why I say that if you want to do this work, you're going to have to find a way to make sure you can survive doing it. And that would be my message to others. Surviving doesn't necessarily mean you can't be happy and can't find joy. Conversations like this give me a lot of joy, because I can be myself. I can be a proud Black woman. I can stand here and say, "I love being a proud Black woman," and still go off and have a difficult conversation.

Again, if you had asked me six months ago, it would have been: study, study, study, become an expert, and that's how you'll make it. Now it's: you know what, this stuff is going to be here. All these problems will still be here, and there'll be a new problem tomorrow. So, find your center and find that happiness and find that joy.

CA: It’s a long game.

EA: It’s a lifetime game for Black people. It’s… we get no generations off, we get no generations off.

DH: And with that powerful note, that’s a wrap on our interview, but the conversation continues.

CA: We want to hear from you. Please send us your questions, ideas, comments, suggestions. Reach out to us at justdigital@hbs.edu.

The post Elizabeth M. Adams on civic tech as advocacy work appeared first on 性视界 Business School AI Institute.

What food trucks can teach us about IT /what-food-trucks-can-teach-us-about-it/ Wed, 12 Feb 2020 20:45:20 +0000 To make sure your products and platforms are used, try talking with your users before you develop them.

Large organizations have a very poor track record when it comes to big IT projects. In the last decade, successful digital transformation efforts have helped organizations and government agencies make massive improvements to that record. Lauren Lockwood of Bloom Government Digital Services discusses the ways these organizations apply user research to spend their IT dollars more effectively and improve their interactions with customers and constituents.

The post What food trucks can teach us about IT appeared first on 性视界 Business School AI Institute.

Tech governance shaped by humans /the-new-era-of-tech-governance/ Mon, 02 Dec 2019 14:00:10 +0000 Innovation follows a cyclical pattern. The governance of technology, however, is shaped by humans, the ultimate decision makers in how technology is developed, deployed, and managed. Today, after several decades of unfettered innovation, we are witnessing the outcomes of letting tech run its course. Unequivocally, the digital age has improved the lives of citizens […]

Innovation follows a cyclical pattern. The governance of technology, however, is shaped by humans, the ultimate decision makers in how technology is developed, deployed, and managed.

Today, after several decades of unfettered innovation, we are witnessing the outcomes of letting tech run its course. Unequivocally, the digital age has improved the lives of citizens around the globe by increasing access to services and information. However, it is also increasingly clear that the opportunities for evil are no less frequent than those that advance humanity. Ash Carter, former Secretary of Defense and director of the 性视界 Kennedy School's Belfer Center for Science and International Affairs, has said that "Disruptive scientific and technological progress is not inherently good or inherently evil. But its arc is for us to shape."

So what has given us a collective moment of pause? The last century of technology was often developed by the few, for the many. Specific actors, privy to the necessary resources and expertise, wielded the power to experiment with cutting-edge innovation. In the 1990s and early 2000s, tech optimism championed the reversal of this trend: the so-called democratization of technology. This new era promised greater public accessibility and an opportunity to tailor technology to one's individual experiences and problems. In short, technology was becoming decentralized, and it promised to return power to the people.

“The sheer expansion of access has, in many cases, made it difficult to set parameters around the impacts that technologies have…”

In the last few decades, many technologies have indeed become more accessible. But, in reality, some of the progress on the path to democratizing tech has been overshadowed by new challenges in governance. The sheer expansion of access has, in many cases, made it difficult to set parameters around the impacts that technologies have on diverse communities, sometimes with conflicting interests and incentives. For instance, the ubiquity of artificial intelligence, and its presence in industries ranging from healthcare to gaming, complicates the process of setting even the most fundamental ground rules.

Decentralized development, made possible by diminishing cost barriers, deregulation, and more accessible scientific knowledge, has also led to the rise of commercial tech firms. As Margaret O'Mara describes it in her recent book, big tech has become "the engine room of the American economy." In the process of dominating financial markets, the private tech sector has disproportionately surged ahead in the financing and investment of scientific research and development. Meanwhile, federal investment in basic and applied science research has slipped to historic lows. And accordingly, the public sector is gradually losing a certain degree of power, and its ability to govern along the way.

"The traditional role of governing a particular technology's common interest or public purpose has become increasingly difficult for government to fulfill."

The dispersion of power is leading to a displacement of public function. The traditional role of governing a particular technology's common interest or public purpose has become increasingly difficult for government to fulfill. Historically, the development of nuclear arms or the global positioning system (GPS), both products of the military-industrial complex, remained in the centralized quadrants on the right-hand side, moving upward from early-stage development all the way through to maturity. New technologies, such as gene drives or small satellites, now begin in the decentralized, early-stage quadrant and increasingly mature over time. The horizontal direction of their maturity, however, is much less clear.

[Figure: Maturity and Centralization of Power in Technological Development]

The difference between these two starting points not only introduces a greater number of vulnerable inflection points for public purpose, but also requires an immense effort to bring a decentralized technology into compliance with acceptable norms, oversight, or regulation. Where a technology begins defines how we collaborate across business, government, academia, and the non-profit sector to protect the values of privacy, inclusion, transparency and accountability, safety, and security, toward the ends of a fair and just society. And how it matures is a testament to our progress.

Decentralized power is the new norm. Each era of technological advancement has required the collective action of society to bring its long-standing values to bear on newfound technologies and their consequences. The future of today's technologies, whether solar geoengineering or quantum computing, requires an emphasis on innovative and inclusive governance approaches rather than on the novelty of tech innovation itself.

The question that defines this era is not what these technologies can do; it is what we choose to do with them.

The post Tech governance shaped by humans appeared first on 性视界 Business School AI Institute.

The road map of the future: transportation /road-map-of-the-future-transportation/ Thu, 12 Sep 2019 15:28:15 +0000 WBUR hosted a five-part series, Business in the Era of Climate Change, in collaboration with 性视界 Business School and Boston University Questrom School of Business. Business is the main source of the greenhouse gases that are causing the Earth's climate to change. Business is also the main source of new products, services and business models […]

WBUR hosted a five-part series, Business in the Era of Climate Change, in collaboration with 性视界 Business School and Boston University Questrom School of Business.

Business is the main source of the greenhouse gases that are causing the Earth's climate to change. Business is also the main source of new products, services, and business models that may save us from wholesale climate calamity. This series, featuring leading thinkers from business, environmental advocacy groups, and area universities, will explore what businesses are doing, can do, and should do to confront climate change.

Featuring

  • Adam Gromis, Public Policy Manager, Sustainability & Environmental Impact, Uber
  • Chris Dempsey, Director, Transportation for Massachusetts
  • Nicole Freedman, Director of Transportation Planning, City of Newton
  • Moderator: Bruce Gellerman, WBUR Environmental Reporter

The post The road map of the future: transportation appeared first on 性视界 Business School AI Institute.
