Innovation & Disruption | Harvard Business School AI Institute

The Harvard Business School AI Institute catalyzes new knowledge to invent a better future by solving ambitious challenges.

The New Influence War: How AI Could Hack Democracy

Mon, 26 Jan 2026

What the rise of AI swarms reveals about the future of influence, information, and democratic resilience.

As we move into the era of agentic AI, what kind of influence will this emerging technology have on democracy and misinformation? In the new Science paper "How Malicious AI Swarms Can Threaten Democracy," Amit Goldenberg, Assistant Professor of Business Administration at Harvard Business School and Faculty PI of the Digital Emotions Lab at the Harvard Business School AI Institute, together with an international, multidisciplinary group of co-authors, argues that we're entering a phase where "malicious AI swarms" could use multi-agent systems to infiltrate communities, mimic human social behavior, and iteratively refine persuasion tactics in real time. By expanding misinformation into persistent manipulation, these systems threaten the information ecosystem that democratic societies depend on. Goldenberg and his co-authors also outline technical, economic, and institutional measures that could meaningfully defend against this new danger.

Key Insight: AI Swarms Operate Like Digital Societies

"Enabled by these capabilities, a disruptive threat is emerging: swarms of collaborative, malicious AI agents." [1]

Unlike earlier botnets, which relied on centralized control, rigid scripts, and human labor, AI swarms combine LLM reasoning with multi-agent architectures to function more like adaptive digital societies. The authors define malicious AI swarms as systems of persistent agents that coordinate toward shared objectives, adapt in real time to engagement and platform cues, and operate with minimal human oversight across platforms. Five capabilities make these systems especially potent. (1) Swarms replace centralized command with fluid coordination, allowing thousands of AI personas to locally adapt while periodically synchronizing narratives. (2) They can map social networks to identify and infiltrate vulnerable communities with tailored appeals. (3) Human-level linguistic mimicry and irregular behavior patterns help them evade detection. (4) Continuous, automated A/B testing enables rapid optimization of persuasive content. (5) Finally, their always-on persistence allows influence to accumulate gradually, embedding itself within communities over time and subtly reshaping norms, language, and identity. As the article notes, recent elections in Taiwan and India already saw a proliferation of AI-generated propaganda and synthetic media outlets, meaning that this threat is already here and poised to expand in the future.

Key Insight: The Harm Cascade

"Emerging capabilities of swarm-driven influence campaigns threaten democracy by shaping public opinion, which leads to cascading harms." [2]

Goldenberg and his team argue that AI swarms could trigger a 'cascade' of harms by systematically distorting the information ecosystem. By engineering 'synthetic consensus' and targeting different misinformation to different communities, these agents would have the power to undermine the independent thought essential for collective intelligence while simultaneously fragmenting the public sphere. This manipulation, together with coordinated synthetic harassment campaigns, could create a hostile environment that drives journalists and citizens into silence. The damage would compound as swarms 'poison' the web with fabricated content that contaminates future AI training data. Ultimately, this sustained erosion of trust could corrode institutional legitimacy, rendering democratic safeguards vulnerable to collapse.

Key Insight: A Layered Defense Strategy

"Taken together, these measures offer a layered strategy: immediate transparency to restore trust, proactive education to bolster citizens, resilient infrastructures to reduce systemic vulnerabilities, and sustained investment to monitor and adapt over time." [3]

Rather than a single fix, the authors argue for a layered defense strategy designed to raise the cost, complexity, and visibility of swarm-based manipulation. The first layer is always-on detection: continuous monitoring systems that identify statistically anomalous coordination patterns in real time, paired with public audits and transparency to reduce misuse. Because attackers will adapt, detection alone is insufficient. A second layer involves simulation and stress-testing. Agent-based simulations can replicate platform dynamics and recommender systems, allowing researchers and platforms to probe how swarms might evolve and to recalibrate defenses before major elections or crises. Third, the authors emphasize empowering users through optional "AI shields," tools that flag likely swarm activity, allowing individuals to recognize suspicious content. Finally, the paper highlights governance and economic levers as essential. Proposals include standardized persuasion-risk evaluations for frontier models, mandatory disclosure of automated identities, stronger provenance infrastructure, and a distributed AI Influence Observatory to coordinate evidence across platforms, researchers, and civil society. Crucially, the authors argue that disrupting the commercial market for manipulation may be among the most effective ways to reduce large-scale abuse.
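To make the "always-on detection" layer concrete, here is a toy sketch of one idea behind it: flagging clusters of accounts that post near-duplicate messages inside a short time window. Everything in this sketch (the similarity heuristic, the thresholds, the function names) is an illustrative assumption, not the paper's method; a production detector would use embeddings, behavioral features, and far more robust statistics.

```python
# Toy coordination detector: flag groups of accounts that post near-identical
# text within a short window. Heuristics and thresholds are illustrative only.
from difflib import SequenceMatcher

def similar(a, b, threshold=0.9):
    """Crude text similarity; real detectors would use embeddings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def flag_coordination(posts, window_secs=3600, min_accounts=3):
    """posts: iterable of (timestamp, account, text) tuples.
    Returns (seed_text, sorted_accounts) clusters that look coordinated."""
    posts = sorted(posts)  # order by timestamp
    clusters = []
    for i, (t0, acct0, text0) in enumerate(posts):
        accounts = {acct0}
        for t1, acct1, text1 in posts[i + 1:]:
            if t1 - t0 > window_secs:
                break  # outside the window; all later posts are too
            if acct1 not in accounts and similar(text0, text1):
                accounts.add(acct1)
        if len(accounts) >= min_accounts:
            clusters.append((text0, sorted(accounts)))
    return clusters
```

In this sketch, three accounts pushing the same line within minutes of each other get flagged as a cluster, while an unrelated post hours later does not.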

Why This Matters

For business leaders and professionals, this study reveals a threat that extends beyond electoral politics into the fundamental information ecosystem that underpins market confidence, consumer behavior, and corporate reputation. The same AI swarm technologies that manipulate political discourse could just as easily target brand perception, financial markets, or industry narratives. The defense strategy outlined by the authors can similarly provide a roadmap for corporate action: implementing detection systems for monitoring threats to brand reputation, advocating for industry standards around AI transparency, and supporting governance initiatives that protect the broader information ecosystem. Executives who treat information integrity as core infrastructure will be better positioned to protect stakeholder trust, decision quality, and long-term resilience in an era of AI-enabled influence operations.

Bonus

For a look at how efforts to align AI systems with human preferences can unintentionally undermine trustworthiness itself, check out "AI Alignment: The Hidden Costs of Trustworthiness."

References

[1] Daniel Thilo Schroeder et al., "How Malicious AI Swarms Can Threaten Democracy," Science 391 (2026): 354.

[2] Schroeder et al., "How Malicious AI Swarms Can Threaten Democracy," 355.

[3] Schroeder et al., "How Malicious AI Swarms Can Threaten Democracy," 357.

Meet the Authors

Amit Goldenberg is an assistant professor in the Negotiation, Organizations & Markets unit at Harvard Business School, an affiliate of Harvard's Department of Psychology, and a faculty principal investigator in the HBS AI Institute's Digital Emotions Lab.

Additional Authors: Daniel Thilo Schroeder, Meeyoung Cha, Andrea Baronchelli, Nick Bostrom, Nicholas A. Christakis, David Garcia, Yara Kyrychenko, Kevin Leyton-Brown, Nina Lutz, Gary Marcus, Filippo Menczer, Gordon Pennycook, David G. Rand, Maria Ressa, Frank Schweitzer, Dawn Song, Christopher Summerfield, Audrey Tang, Jay Van Bavel, Sander van der Linden, and Jonas R. Kunst

Using AI to Identify Climate Innovation

Thu, 04 Dec 2025

For decades, the business conversation around climate change has focused on how to manage, mitigate, and withstand the risks and downsides. But what if there's a more important story about opportunity that we've been missing? In the new article "Tracking Business Opportunities for Climate Solutions Using AI in Regulated Accounting Reports," published in Nature Communications, a team of co-authors including Shirley Lu and colleagues at the Climate and Sustainability Impact Lab at the Harvard Business School AI Institute turned a large language model loose on nearly 40,000 regulatory filings from over 4,000 US firms in 47 GICS industries between 2005 and 2022. By fine-tuning a GPT to identify climate solutions, they've created a systematic measure of climate business opportunities across the U.S. economy.1 They found that 45% of firms in 2022 mention climate solutions in their core business descriptions, up from 20% in 2005. More importantly, the AI-powered approach reveals patterns difficult to discover using traditional analysis, from which technologies are actually driving growth to how seemingly unrelated industries are quietly converging around shared clean-tech innovations.

Key Insight: Teaching AI to Distinguish Opportunity from Noise

"We fine-tune the GPT model to perform the specific task of identifying sentences that relate to climate solutions." [1]

The paper tackles a basic but stubborn problem: while climate risks are increasingly measured (emissions, carbon pricing exposure), there is no standardized way to track climate opportunities, the products and services that help others decarbonize. Companies don't have to disclose climate solution revenues, and when they do, it's often in voluntary, non-standardized formats. To fill this gap, the team fine-tuned a GPT model on 3,508 carefully labeled sentences to distinguish genuine climate solutions from generic climate-related discussion. For example, a company that manufactures electric vehicles is pursuing climate solutions; a company that merely uses electric vehicles in its corporate fleet is not. They then turned the GPT on the business descriptions within 10-K filings. This section is critical because it describes what a company actually sells, and unlike a press release, it is scrutinized by auditors and carries legal liability for misrepresentation.
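As a concrete, entirely hypothetical illustration of what such labeled training data might look like, the sketch below converts sentence/label pairs into chat-style fine-tuning records, applying the producer-versus-user distinction described above. The prompt wording, label names, and record format are assumptions for illustration, not the authors' actual pipeline.

```python
# Sketch: turn labeled sentences into chat-style fine-tuning records (JSONL).
# Labels follow the paper's distinction: producing a climate solution counts;
# merely using one does not. All examples and field names are illustrative.
import json

labeled = [
    ("We manufacture battery packs for electric vehicles.", "climate_solution"),
    ("We converted our corporate delivery fleet to electric vehicles.", "other"),
    ("Our software helps utilities integrate rooftop solar into the grid.", "climate_solution"),
    ("Rising fuel costs may affect our operating margins.", "other"),
]

def to_finetune_records(pairs):
    """Convert (sentence, label) pairs into chat-format training records."""
    return [
        {"messages": [
            {"role": "system",
             "content": "Label the sentence 'climate_solution' or 'other'."},
            {"role": "user", "content": sentence},
            {"role": "assistant", "content": label},
        ]}
        for sentence, label in pairs
    ]

# One JSON record per line, the usual shape for fine-tuning uploads.
jsonl = "\n".join(json.dumps(r) for r in to_finetune_records(labeled))
```

The point of the format is that each record pairs a sentence with the judgment the model should learn to reproduce, which is what lets a fine-tuned classifier scale a few thousand hand labels across tens of thousands of filings.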

Key Insight: AI Reveals Which Climate Technologies Actually Drive Growth

"Overall, the climate solutions measure is positively and statistically significantly associated with revenue growth." [2]

The AI's ability to categorize sentences into specific technology topics enabled granular analysis. And once you can measure climate solutions systematically, you can ask questions that were impossible to answer before, such as: do these opportunities actually translate to business performance? The AI-derived metrics revealed that firms with a one-standard-deviation-higher climate solutions measure experienced 2% higher revenue growth. However, the positive association is statistically significant primarily in industries where innovation is protected by patents. This suggests that commercializing climate solutions is not just about having a good idea; it's about possessing the intellectual property to defend it. Furthermore, the AI model allowed the researchers to categorize the type of technology being discussed. They found that revenue growth was stronger for technologies with high abatement potential, solutions that can significantly reduce emissions, indicating that the market is effectively rewarding technologies that solve the biggest chunks of the climate problem.
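The standardized-coefficient reading (one standard deviation higher measure, roughly 2% higher growth) can be reproduced on fabricated data: z-score the measure so the regression slope is directly the per-standard-deviation effect. The numbers below are simulated for illustration only and are not the paper's data or specification.

```python
# Illustrative OLS on fabricated data: regress revenue growth on a z-scored
# climate-solutions measure, so the slope reads as "growth difference per one
# standard deviation of the measure."
import numpy as np

rng = np.random.default_rng(0)
measure = rng.normal(5.0, 2.0, size=500)                    # raw disclosure measure
true_z = (measure - 5.0) / 2.0
growth = 0.03 + 0.02 * true_z + rng.normal(0.0, 0.01, 500)  # built-in 2% effect

z = (measure - measure.mean()) / measure.std()              # standardize in-sample
X = np.column_stack([np.ones_like(z), z])                   # intercept + z-score
beta, *_ = np.linalg.lstsq(X, growth, rcond=None)
# beta[1] recovers roughly 0.02: one SD higher measure, ~2 points higher growth
```

Standardizing first is what makes the coefficient comparable across measures with different units, which is why results of this kind are reported per standard deviation.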

Key Insight: The Great Convergence

"We observe a blurring of industry boundaries, with previously unrelated industries engaging in similar products because of climate solutions." [3]

Perhaps the most striking discovery emerged when the researchers examined sentence embeddings, numerical representations of how similar different climate solution sentences are to each other. By plotting these embeddings, the researchers could analyze not just how much individual firms disclose about climate solutions, but how entire industries cluster around shared technology narratives. For example, electric vehicle topics clustered around both automobiles and capital goods (equipment manufacturing), revealing the emerging battery and electric powertrain value chain. This is more than linguistic similarity: the study found that industry pairs with higher topic similarity exhibited higher stock return synchronicity. Essentially, if you talk about the same climate tech, your stock prices start moving together because your economic fundamentals are aligning.
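The industry-level comparison can be sketched in a few lines: average per-sentence vectors into one vector per industry, then compare industries by cosine similarity. The tiny 3-dimensional "embeddings" and industry names below are fabricated stand-ins for real model output, chosen only to show the mechanics.

```python
# Minimal sketch of industry-level embedding similarity: average per-sentence
# vectors into one vector per industry, then compare with cosine similarity.
# The 3-D "embeddings" are fabricated stand-ins for real model output.
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

sentence_embeddings = {
    "automobiles":   np.array([[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]]),  # EV-heavy text
    "capital_goods": np.array([[0.7, 0.3, 0.1], [0.8, 0.1, 0.2]]),  # powertrain gear
    "media":         np.array([[0.0, 0.1, 0.9], [0.1, 0.0, 0.8]]),  # unrelated
}

industry_vec = {k: v.mean(axis=0) for k, v in sentence_embeddings.items()}
sim_auto_capgoods = cosine(industry_vec["automobiles"], industry_vec["capital_goods"])
sim_auto_media = cosine(industry_vec["automobiles"], industry_vec["media"])
# The EV value-chain pair should score far higher than the unrelated pair.
```

In this toy setup, the automobiles/capital-goods pair scores near 1 while the automobiles/media pair scores near 0, which is the shape of the convergence pattern the study describes at full scale.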

Why This Matters

For business leaders, investors, and strategists, this research underscores how AI can change how we understand markets. By converting dense, regulated text into structured intelligence, AI can reveal patterns that are otherwise invisible, whether related to climate technologies, shifting consumer needs, emerging value chains, or entirely different domains. Even companies that operate far from climate-related sectors can benefit from the underlying method: using AI to mine filings, contracts, product documentation, customer feedback, and market reports for indicators of strategic change and future direction. Ultimately, this research demonstrates the potential for AI to handle sophisticated, complex analysis, and for organizations to evaluate whether their own narrative matches their strategy and competitive aspirations.

Footnote

1. The data generated from the study is publicly available at . 

References

[1] Shirley Lu et al., "Tracking opportunities for climate solutions using AI in regulated accounting reports," Nature Communications 16, 9769 (2025): 2.

[2] Lu et al., "Tracking opportunities for climate solutions using AI in regulated accounting reports," 6.

[3] Lu et al., "Tracking opportunities for climate solutions using AI in regulated accounting reports," 12.

Meet the Authors

Shirley Lu is Assistant Professor of Business Administration in the Accounting and Management Unit at Harvard Business School. She is faculty within the Climate and Sustainability Impact Lab at the HBS AI Institute.

George Serafeim is the Charles M. Williams Professor of Business Administration at Harvard Business School. He co-leads the Climate and Sustainability Impact Lab at the HBS AI Institute.

is a Postdoctoral Fellow at the Climate and Sustainability Impact Lab at the HBS AI Institute.

Mark Antonio Awada is Chief Innovation and Digital Strategy Officer at Brown Capital Management, Senior Lecturer at Harvard Extension School, and was formerly Head of Research and Data Science at the HBS AI Institute.

Drawing the Line on AI Usage in the Workplace

Thu, 13 Nov 2025

As AI systems increasingly outperform humans across a range of tasks, the economic logic seems clear: more capable, more cost-effective AI should lead to widespread automation. The new Harvard Business School working paper "Performance or Principle: Resistance to Artificial Intelligence in the U.S. Labor Market," co-authored by Simon Friis, postdoctoral fellow at the Laboratory for Innovation Science at Harvard (LISH), part of the Harvard Business School AI Institute, and James Riley, Assistant Professor of Business Administration at Harvard Business School, puts that hypothesis to the test and reveals a more nuanced answer. The issue isn't just what AI can do, but what we'll allow it to do.

Key Insight: Mapping AI Resistance

"We conducted a survey of 2,357 U.S. adults designed to measure public support for AI automation and augmentation across a comprehensive set of occupations." [1]

Participants rated a sample from 940 occupations twice: first under current AI capabilities, then imagining AI that can exceed humans at the job while doing so at a lower cost. The researchers also developed and validated a new scale meant to measure moral repugnance towards AI, the perception that using AI in certain contexts is inherently wrong, irrespective of benefits. This scale thereby taps into fundamental concerns about human dignity, betrayal, and categorical prohibitions that no amount of engineering can overcome. As a result, the researchers came to distinguish two fundamentally different sources of resistance to AI: performance-based concerns and principle-based objections.

Key Insight: Performance Concerns Fade Fast

"Public support for AI-driven automation nearly doubles—from 30% to 58% of occupations—when AI is described as clearly outperforming human workers, suggesting that most resistance is contingent on perceived capability." [2]

The researchers identify performance-based resistance to AI as opposition rooted in AI's current technical capabilities, including factors such as accuracy, reliability, cost, and speed. We might expect this type of resistance to recede as AI technology becomes more capable and cost-effective over time, a result backed up by the study. This was especially true for occupations deemed morally permissible for AI help (augmentation) and replacement (automation), such as clerks, transportation planners, and data entry keyers.

Key Insight: The Principle Line

"[O]ur findings reveal a sharply delimited moral frontier, where a small subset of sacrosanct occupations remains off-limits, within an otherwise permissive labor market increasingly open to AI as performance improves." [3]

Other occupations, including clergy, childcare workers, and therapists, fall into the category of principle-based resistance toward AI. In these cases, AI faces complete rejection that doesn't budge even when it's positioned as better, faster, and cheaper. The use of AI in these roles is deemed morally repugnant regardless of capability. What makes these occupations special? They share common threads of caregiving, emotional labor, public speaking, and spiritual leadership. The researchers highlight that the dynamic between AI capabilities and human repugnance creates "moral friction zones" where capability meets rejection (e.g., school psychologists and fraud examiners) and "latent zones" where acceptance is actually ahead of current ability (e.g., cashiers, conveyor operators). [4]

Why This Matters

For business leaders and executives, this research is both liberating and sobering. Liberating, because a large share of public hesitation is performance-based: as your models improve, acceptance will follow. Sobering, because a line remains where AI is judged intrinsically inappropriate. The strategic response isn't abandoning AI, but designing hybrid solutions that preserve human touchpoints in morally sensitive tasks, carefully framing AI as augmentation rather than replacement, and investing in transparency and ethics communication.

Bonus

Just as this research shows that better AI doesn't guarantee broader acceptance, earlier HBS AI Institute work revealed that improving AI capabilities can actually reverse inequality effects in unexpected ways. For more on how AI's relationship with workers shifts as technology advances, check out Who Benefits When Bots Get Better?

References

[1] Simon Friis and James W. Riley, "Performance or Principle: Resistance to Artificial Intelligence in the U.S. Labor Market," Harvard Business School Working Paper No. 26-017 (October 6, 2025): 6.

[2] Friis and Riley, "Performance or Principle," 5.

[3] Friis and Riley, "Performance or Principle," 5.

[4] Friis and Riley, "Performance or Principle," 16.

Meet the Authors

Simon Friis is a postdoctoral fellow at the Laboratory for Innovation Science at Harvard (LISH), part of the HBS AI Institute. His research focuses on the social and economic impacts of generative AI.

James Riley is an Assistant Professor of Business Administration in the Organizational Behavior unit at Harvard Business School. He is an economic sociologist, conducting ethnographic research to produce qualitative studies on the role of status, norms, social valuations, and organizational culture within innovation-driven organizations, creative industries, and cultural markets.

When Software Becomes Staff

Mon, 25 Aug 2025

If AI can accept light supervision and then be off and running, what does it mean for how leaders and organizations design work, govern risk, and account for value? Drawing on perspectives from Jen Stave, Executive Director of the Harvard Business School AI Institute, Columbia Business School's Stephan Meier, and Salesforce CEO Marc Benioff, a recent New York Times Shop Talk article briefly explores the rise and implications of AI agents that can act like teammates or supervisees.

Key Insight: Agentic AI as Managed Teammates

"Like a human employee, these tools would work independently with a bit of management."

Jen Stave

Agentic tools are moving beyond chatbots and image generation. Unlike traditional automation that follows rigid scripts, AI agents function more like human employees: capable of independent decision-making after being given high-level goals and objectives.

Key Insight: An Uncertain Future

"How the fruits of digital labor will be treated in economic terms is still unsettled."

Jen Stave

On one hand, the impact of AI is already here and being measured, as evidenced by how the use of AI agents at Salesforce led to a 17% customer service cost reduction over nine months. But the article also raises a range of open questions about economic capture, quality and accountability, and the right balance between human and AI workers.

Why This Matters

For forward-thinking executives, increasingly the question isn鈥檛 whether to adopt agentic AI, but how to operationalize it productively and responsibly. While the efficiency gains are compelling, success requires thoughtful integration by leaders who are ready to address challenges of workforce transition, quality control, and ROI measurement.

Bonus

To read more about agentic AI and digital labor, see the Harvard Business Review article co-authored by Jen Stave.

Mastering Change Resilience: The Key to AI-Driven Success

Tue, 05 Aug 2025

The disconnect between AI's transformative potential and the actual scale of implementation represents one of today's most significant organizational challenges. In their new article for the Harvard Business Review, "A Guide to Building Change Resilience in the Age of AI," Karim Lakhani, Dorothy and Michael Hintze Professor of Business Administration at Harvard Business School and faculty chair and co-founder of the Harvard Business School AI Institute, Jen Stave, executive director of the HBS AI Institute, Douglas Ng, Director of Design at the HBS AI Institute, and a co-author who is a managing director at BCG X argue that this mismatch arises from structural issues and propose change resilience as a systematic approach to building the organizational capabilities necessary for AI success.

Key Insight: The Missing Ingredient

“The primary obstacle is the ability of companies to adapt, reinvent, and scale new ways of working. We call this change resilience.” [1]

In the fast-paced business environment created by AI, leaders are no longer able to apply traditional operating models to episodic development cycles. Previously, as Lakhani and his co-authors suggest, "You modernized your systems, trained your people, and operated in a stable environment until the next wave of disruption hit." [2] However, if your old approach is falling short in today's environment and you're feeling left behind, you aren't alone: the results of a BCG survey discussed in the article report that "just 26% of organizations have achieved value from AI." [3] Responding to both the challenges and opportunities AI presents, the authors call for a fundamental shift: companies must move beyond simply managing AI-driven change and instead embed AI as a core organizational competency through the continuous and comprehensive strategy of "change resilience."

Key Insight: The Mindset

Sensing – Rewiring – Lock-In

Change resilience, according to the authors, is made up of three 'muscles' working in concert to create a sustainable AI ecosystem. Sensing enables organizations "to pick up weak technological, competitive, or societal signals early." Rewiring is "the capacity to redeploy talent, data, capital, and decision rights in days or weeks, not fiscal quarters." Lock-In is "the discipline to codify what a team learns (in process, code, or policy) so the next initiative starts from a higher baseline instead of reinventing the wheel." [3] The authors describe Shopify as a company that exemplifies these characteristics, as it constantly evolves rather than adding AI to old systems. As one example, in 2023, Shopify spun off its logistics arm to concentrate on product innovation, enabling rapid development of AI-native tools like Sidekick for entrepreneurs.

Key Insight: The Playbook

Learn – Do – Imagine – Act – Care

Lakhani and his co-authors break down change resilience into five components: Learn, Do, Imagine, Act, and Care. Learning involves widespread AI experimentation to shift attitudes, empower employees, and discover opportunities to take advantage of AI. Doing targets deficiencies with fast-paced AI initiatives. Imagining puts your entire organization up for discussion, challenging you to invent new operating models instead of duct-taping existing ones. Acting makes these cycles continuous in order to establish change resilience as a foundational strategy rather than a one-off solution. Finally, Caring emphasizes wellbeing measures to ensure that employees feel supported and avoid burnout. The article discusses Accenture, Singapore-based DBS Bank, Moderna, P&G, and Cisco as already leading the pack by incorporating these elements into their strategy and operations.

Why This Matters

For executives and business professionals, developing change resilience represents a crucial strategic priority for competing effectively in the AI era. By focusing on the three muscles and five steps, leaders can position their companies to leverage AI and adapt to future technological advances. The companies already achieving breakthrough AI results share a common strategy: they invest in their organization's capacity to change as aggressively as they invest in AI technology itself.

If you're wondering how change-resilient your organization is, "A Guide to Building Change Resilience in the Age of AI" also includes a set of questions that can act as a litmus test.

References

[1] Karim Lakhani et al., "A Guide to Building Change Resilience in the Age of AI," Harvard Business Review, July 29, 2025.

[2] Lakhani et al., "A Guide to Building Change Resilience in the Age of AI."

[3] Lakhani et al., "A Guide to Building Change Resilience in the Age of AI."

Meet the Authors

Karim Lakhani is the Dorothy & Michael Hintze Professor of Business Administration at Harvard Business School. He specializes in technology management, innovation, digital transformation, and artificial intelligence. He is also the Co-Founder and Faculty Chair of the HBS AI Institute and the Founder and Co-Director of the Laboratory for Innovation Science at Harvard.

Jen Stave is Executive Director of the HBS AI Institute. She was previously Senior Vice President at Wells Fargo, and has a PhD from American University.

Douglas Ng is Director of Design at the HBS AI Institute. As a digital strategist, technology educator, and innovation researcher, he specializes in AI transformation and translates the institute's research for industry leaders.

is Managing Director with BCG X, where he specializes in Generative AI, AI platform engineering, and data management.

AI Elevate: Strategy and the Declining Cost of Expertise

Fri, 18 Jul 2025

As AI continues to reshape industries globally, the HBS AI Institute (previously Digital Data Design Institute at Harvard (D^3)) and the Harvard Business School Club of the Gulf Cooperation Council hosted AI Elevate: From Readiness to Exponential Growth on December 13, 2024, in Dubai, UAE. This one-day conference provided business leaders, researchers, and government officials with crucial insights into AI strategy, industry transformation, and global market integration. For an introduction to the day-long conference, see the Opening Remarks and the Agenda.

In their session, Bobby Yerramilli-Rao, Chief Strategy Officer at Microsoft, and HBS AI Institute co-founder Karim Lakhani discussed the far-reaching implications of AI on business operations, organizational structures, and strategic planning. Their insights and research offer a compelling vision of how companies must adapt to thrive in an era of proliferating access to expertise.

Key Insight: Expertise Is No Longer Scarce, It's Scalable

"[T]hose that were behind the average, those that were below average, all of a sudden now can be at the average, and if the average of the AI is better than the humans, then they'll be at wherever the average of the AI is at."

Karim Lakhani

The most immediate impact of AI is appearing in productivity and performance, with gains that defy traditional economic expectations. AI is effectively raising the floor of competency on difficult tasks that once required years of specialized training across a wide range of fields. Expertise, which used to be a key driver of competitive advantage, is now democratized, and the implications are seismic.

Key Insight: You Are More Than an Individual

"[O]ver time, each person can manage a raft of agents, AI agents, to do things for them, so now every person is effectively a team."

Bobby Yerramilli-Rao

Yerramilli-Rao and Lakhani discussed a future where employees regularly incorporate their own AI agents into their work and even bring them along across jobs and educational experiences. According to the speakers, companies will need to integrate these AI agents into their systems while maintaining control, governance, and security, and for hiring purposes they will need to identify individuals who can work effectively in human-AI teams. The outcome will be flatter structures and less-siloed employees compared to traditional departmental architecture. One vivid example the speakers gave was Focus Fuel, a startup launched by three friends working part-time using GPT tools to develop, market, and scale a new consumer product, all without prior Consumer Packaged Goods (CPG) experience.

Key Insight: Know Your Core Value Proposition

"I think the imperative here is that everyone has to get very, very clear about what it is that they're doing to add value and then use AI to enhance that capability."

Bobby Yerramilli-Rao

The competitive landscape may be entering a phase of continuous acceleration in which companies must leverage AI today while preparing for future advances that will match and then exceed their current capabilities. If AI levels the playing field, companies must clarify what truly sets them apart. What are you uniquely good at, and what expertise is replicable by AI or your competitors using AI?

Why This Matters

For business leaders, these insights signal the beginning of a new era where strategic value comes from focus, speed, and broad AI implementation. Those who treat this as a technology upgrade rather than a fundamental shift risk being outpaced. The question is no longer whether AI will transform your industry, but whether your organization will lead or scramble to catch up. Embracing these changes and proactively reshaping your organization around AI capabilities may be the key to unlocking previously unheard of levels of innovation, efficiency, and success in the years to come.

Read their article.

Meet the Speakers

is Chief Strategy Officer at Microsoft. He has co-founded several companies, and has served at organizations including Vodafone and McKinsey. He holds an MA from the University of Cambridge and a PhD from the University of Oxford.


is the Dorothy & Michael Hintze Professor of Business Administration at 性视界 Business School. He specializes in technology management, innovation, digital transformation, and artificial intelligence. He is the Co-Founder and Chair of the HBS AI Institute and the Founder and Co-Director of the Laboratory for Innovation Science at 性视界.

Redefining Entrepreneurship: The Power of Acquisition in Business Growth /redefining-entrepreneurship-the-power-of-acquisition-in-business-growth/ Mon, 16 Dec 2024 15:48:25 +0000 /?p=24609 A recent blog post, "Entrepreneurship Through Acquisition (ETA)", from the HBS AI Institute (previously the Digital Data Design Institute at 性视界 (D^3)) blackbox Lab, led by James Riley, Principal Investigator of the lab and Assistant Professor at 性视界 Business School, explores the innovative approach of acquiring existing businesses as a pathway to entrepreneurship. The post, […]

The post Redefining Entrepreneurship: The Power of Acquisition in Business Growth appeared first on 性视界 Business School AI Institute.

A recent blog post, "Entrepreneurship Through Acquisition (ETA)," from the HBS AI Institute (previously the Digital Data Design Institute at 性视界 (D^3)) blackbox Lab, led by James Riley, Principal Investigator of the lab and Assistant Professor at 性视界 Business School, explores the innovative approach of acquiring existing businesses as a pathway to entrepreneurship. The post, part of a wider series on the blog addressing the wealth gap in the United States, highlights how the founder and CEO of IMB Partners, a 性视界 Business School alumnus, leverages ETA as a unique opportunity for aspiring entrepreneurs, particularly those in Black communities, to overcome traditional startup hurdles by purchasing and managing established companies and capitalizing on their existing infrastructure and market presence. This installment of the series sheds light on how ETA can serve as a transformative model for entrepreneurs aiming to make a meaningful impact in the business world, while also acting as a powerful tool for closing the wealth gap experienced by underrepresented and marginalized communities.

Shaping the Future of Innovation: Insights from PyTorch's Governance Transformation /shaping-the-future-of-innovation-insights-from-pytorchs-governance-transformation/ Fri, 06 Dec 2024 20:01:07 +0000 /?p=24350 In the rapidly evolving world of technology and open collaboration, companies face a paradox: how much control to maintain over the technologies they develop while providing access to foster innovation and attract external contributions. In their study, "Igniting Innovation: Evidence from PyTorch on Technology Control in Open Collaboration," authors Daniel Yue, Assistant Professor at the […]

The post Shaping the Future of Innovation: Insights from PyTorch's Governance Transformation appeared first on 性视界 Business School AI Institute.

In the rapidly evolving world of technology and open collaboration, companies face a paradox: how much control to maintain over the technologies they develop while providing enough access to foster innovation and attract external contributions. In their study, "Igniting Innovation: Evidence from PyTorch on Technology Control in Open Collaboration," authors Daniel Yue, Assistant Professor at the Georgia Institute of Technology Scheller College of Business, and Frank Nagle, Assistant Professor of Business Administration at 性视界 Business School and Principal Investigator at the 性视界 Business School AI Institute, examine a pivotal governance shift in PyTorch, a leading machine learning framework. Their findings shed light on how governance models shape companies' open-source participation, incentives, and strategies.

Key Insight: Governance Changes Shift Incentives Rather Than Create Them

“By changing from ‘dominant’ to ‘collective’ governance, PyTorch experienced an increase in contribution from External Companies, but a sharp decrease in contribution from the focal company (Meta).” [1]

The study's primary focus is PyTorch's shift in 2022 from strategic and technical governance by Meta to a collective model of governance under the independent, nonprofit Linux Foundation. Yue and Nagle found that when PyTorch transitioned to collective governance, external company contributions increased by approximately 25.7%. However, this increase was largely offset by Meta's reduced involvement. The authors concluded that changes in governance structure do not necessarily create new incentives but rather redistribute existing incentives among participants.

Key Insight: Governance Changes Drive Different Levels of Participation

"[C]ontributions increased from Chip Manufacturers, who began or increased making technology-specific investments needed to create interoperability between PyTorch and their computer chips. However, there were no changes in contributions by Non-Chip Manufacturers ('Application Developer') firms, who contribute primarily to learn and to improve the usability of the technology for themselves." [2]

Yue and Nagle’s research revealed that not all external firms respond equally to changes in project governance. Following the PyTorch transition, they found that chip manufacturers (such as NVIDIA, Intel, and AMD), which they classify as “complementors,” increased their contributions to PyTorch by 47.1%, significantly more than other types of firms, which they call 鈥渦sers,鈥 including application developers (such as Microsoft, Amazon, and Alibaba) and cloud providers (such as OpenAI, Hugging Face, and Spotify).

The researchers attribute this difference to varying levels of dependency on the project. Chip manufacturers that rely on interoperability are more susceptible to potential “hold-up” problems if a single company controls the project’s direction. Application developers and cloud providers are less affected by the governance change because they primarily derive value from using the technology rather than integrating with it.

Key Insight: Control Rights and Strategic Openness Require Careful Consideration

"[S]hifting control rights to collective governance may not always increase total welfare because such a shift likely reduces the focal company's incentive to contribute while simultaneously increasing the incentives of other companies to contribute." [3]

The authors define "control rights" as the power to make decisions about how the technology develops in the future. Yue and Nagle suggest that control rights are one reason companies open their technologies to external participants, especially in industries experiencing fast-paced innovation where the ability to capture value is unclear, such as artificial intelligence and machine learning. Maintaining control rights can be an important advantage for certain companies, but those companies then forgo the benefits of external innovation. Alternatively, giving up control rights through open collaboration can encourage growth.

Yue and Nagle define "open collaboration" as projects that involve more than two or three participants. Their research focused on "firm-sponsored" collaboration, where a single (focal) company releases a technology to spur innovation by other companies. The PyTorch case illustrates the potential downside of open collaboration, where the focal firm (Meta) found a reduced incentive to invest in the project. If a focal firm reduces its investment and cooperation with other participants, users can still benefit from access to the open-source technology interface, but complementors' options become limited because they rely on interoperability with the technology, which might create a hold-up problem if it changes.

Why This Matters

This study suggests that companies should weigh the benefits of maintaining tight control over their open-source projects versus participating in collaborative governance models. For business executives and policy makers, this research offers insights into the strategic considerations surrounding collaboration and governance:

  • For executives, starting open-source projects can help their companies obtain and increase control rights over future technology innovations. However, establishing control rights can be expensive, and retaining them can limit participation and collaboration with external companies.
  • Policy makers must also recognize the trade-offs between retaining control rights and inviting open collaboration. Mandating open-source technology might not be appropriate in all cases. While regulations can help prevent focal firms from exploiting control rights to raise prices after external participants invest, they should also consider the "overall welfare" of these projects, including incentives for focal companies to continue to participate.

References

[1] Daniel Yue and Frank Nagle, "Igniting Innovation: Evidence from PyTorch on Technology Control in Open Collaboration," 性视界 Business School Strategy Unit Working Paper No. 25-013 (September 10, 2024): 1-53, 21.

[2] Yue and Nagle, "Igniting Innovation: Evidence from PyTorch on Technology Control in Open Collaboration," 3.

[3] Yue and Nagle, "Igniting Innovation: Evidence from PyTorch on Technology Control in Open Collaboration," 3.

Meet The Authors

is an Assistant Professor at Georgia Tech's Scheller College of Business. His research explores why firms openly share innovative knowledge without directly profiting, a strategy called "open disclosure." His projects use scientific publications and open-source software in AI research as an empirical setting to develop and test new theories.


is an Assistant Professor in the Strategy Unit at 性视界 Business School and a faculty affiliate of the HBS AI Institute and the Managing the Future of Work Project. He studies how competitors can collaborate on the creation of core technologies while still competing on the products and services built on top of them. His research falls into the broader categories of the future of work, the economics of IT, and digital transformation, and it considers how technology is weakening firm boundaries.


How Venture Capital Drives Media Coverage in Startups: A Strategic Approach for Business Leaders /how-venture-capital-drives-media-coverage-in-startups-a-strategic-approach-for-business-leaders/ Mon, 28 Oct 2024 15:47:24 +0000 /?p=23278 Venture capital (VC) firms play a pivotal role in shaping the media landscape for startups, influencing public perception, talent acquisition, and future fundraising efforts. The paper “Investor Influence on Media Coverage: Evidence from Venture Capital-Backed Startups” by Brian K. Baik, Assistant Professor at 性视界 Business School and a researcher at the 性视界 Business School AI […]

The post How Venture Capital Drives Media Coverage in Startups: A Strategic Approach for Business Leaders appeared first on 性视界 Business School AI Institute.

Venture capital (VC) firms play a pivotal role in shaping the media landscape for startups, influencing public perception, talent acquisition, and future fundraising efforts. The paper "Investor Influence on Media Coverage: Evidence from Venture Capital-Backed Startups" by Brian K. Baik, Assistant Professor at 性视界 Business School and a researcher at the 性视界 Business School AI Institute Digital Value Lab, and Albert Shin, doctoral student at 性视界 Business School, sheds light on how VCs strategically engage with the media to bolster their portfolio companies. By examining survey and empirical data from nearly 400 VC investors, the study reveals that investor actions significantly enhance media coverage, ultimately leading to better branding, awareness, and talent quality.

Key Insight: Media Coverage Boosts Brand Awareness

"The respondents who take steps to increase portfolio company media coverage claimed that they mainly do so to enhance the company's brand and awareness." [1]

Venture capitalists recognize that increasing a startup鈥檚 visibility in the media is essential for building brand recognition, particularly for younger companies that lack an established reputation. Media coverage allows startups to gain public recognition, which can be crucial for attracting stakeholders such as customers and suppliers. By strategically managing their portfolio companies’ media exposure, VCs help these companies break through the noise in crowded markets, enhancing their brand presence and increasing stakeholder engagement.

Key Insight: VCs Use Active and Passive Strategies to Influence Media

"VCs can actively influence company policies and actions [and…] VCs may influence portfolio companies' media coverage through a passive channel by leveraging their reputation." [2]

VCs employ both active and passive strategies to enhance media coverage for their portfolio companies. On the active side, VCs may directly engage with media by facilitating interviews, issuing press releases, or working with public relations firms to boost visibility. Passive strategies, on the other hand, leverage the reputation of the VC firm itself, with media outlets covering companies simply because they are backed by a well-known venture capitalist. This dual approach helps VCs ensure that their investments receive the media attention they need to thrive, whether through direct involvement or by reputation alone.

Key Insight: Media Coverage Varies by Company Type and Stage

“Our respondents claimed to prioritize media coverage for B2B companies (65%, rather than B2C) and earlier-stage companies (i.e., seed/early stages, 43%), where reputation is most limited.” [3]

The type of company and its stage of growth heavily influence how VCs allocate media resources. VCs focus more on business-to-business (B2B) companies, where media coverage is more effective because it targets specific groups rather than the general public. Additionally, VCs are more likely to increase media coverage for early-stage companies, which typically lack the reputation and customer base of more established firms. These companies, with relatively limited public information available, experience the greatest benefits from increased positive media attention.

Key Insight: Media Exposure and Talent Acquisition

"[W]e find that portfolio companies experiencing an increase in positive news post-investment are correlated with better employee quality." [4]

The research highlights that VCs' efforts to increase media visibility also play a critical role in attracting top-tier talent. By enhancing public awareness and company reputation, startups can draw higher-quality employees, which in turn significantly influences the company's success trajectory. This underscores the multifaceted value of media in shaping both the external perceptions and internal capabilities of growing startups.

Why This Matters

For business leaders and C-suite executives, understanding the role of media in shaping public perception and driving business outcomes is essential. Venture capital firms have long recognized the strategic value of media coverage, using it not just to promote their portfolio companies but also to attract talent, secure additional funding, and prepare for exits. As competition in the startup ecosystem grows, leveraging media becomes a crucial tool for gaining a competitive edge. Executives should consider how to integrate media strategy into their overall business planning, ensuring that their company is not only seen by the right people but also perceived in a way that aligns with their long-term goals.

References

[1] Brian K. Baik and Albert Shin, "Investor Influence on Media Coverage: Evidence from Venture Capital-Backed Startups," 性视界 Business School Accounting & Management Unit Working Paper No. 24-073 (August 1, 2024), 12.

[2] Baik and Shin, "Investor Influence on Media Coverage: Evidence from Venture Capital-Backed Startups," 2.

[3] Baik and Shin, "Investor Influence on Media Coverage: Evidence from Venture Capital-Backed Startups," 4.

[4] Baik and Shin, "Investor Influence on Media Coverage: Evidence from Venture Capital-Backed Startups," 7.

Meet the Authors

is an assistant professor in the Accounting and Management Unit at 性视界 Business School. He teaches the Financial Reporting and Control course in the MBA required curriculum. Professor Baik studies how information, financial reporting, and corporate taxes matter for PE/VC investors and startup firms. Some of his work has focused on the role of financial statement disclosure for PE/VC investments and on whether and how private equity fund managers inflate their interim fund valuations (net asset values) during fundraising periods.

received a B.A. in Economics and Mathematics from Yale University and an M.S. in Finance from MIT. Previously, he worked as an investor at Susquehanna Growth Equity and served as a fellow and mentor at the MIT Sandbox Accelerator Fund.


Smarter Business Decisions with Mixture Adaptive Design (MAD) /smarter-business-decisions-with-mixture-adaptive-design-mad/ Fri, 18 Oct 2024 20:19:20 +0000 /?p=23225 In a rapidly evolving business landscape, decision-makers must be agile. In a recent paper, Biyonka Liang, PhD candidate in Statistics at 性视界 University, and Iavor Bojinov, Assistant Professor of Business Administration at HBS and PI at the 性视界 Business School AI Institute Data Science and AI Operations Lab, discuss their development of the Mixture Adaptive […]

The post Smarter Business Decisions with Mixture Adaptive Design (MAD) appeared first on 性视界 Business School AI Institute.

In a rapidly evolving business landscape, decision-makers must be agile. In a recent paper, Biyonka Liang, PhD candidate in Statistics at 性视界 University, and Iavor Bojinov, Assistant Professor of Business Administration at HBS and PI at the 性视界 Business School AI Institute Data Science and AI Operations Lab, discuss their development of the Mixture Adaptive Design (MAD), which gives industry researchers greater control and flexibility in their experiments. Their study, "An Experimental Design for Anytime-Valid Causal Inference on Multi-Armed Bandits," outlines how the MAD allows businesses to optimize experimentation without compromising results.

Key Insight: Experiment Faster Without Compromising Statistical Validity

“The MAD allows managers to stop experiments early when a significant ATE is detected while ensuring valid inference.” [1]

Traditional A/B testing, while statistically rigorous, uniformly assigns 50% of users to each treatment regardless of how well the treatments perform, delaying decision-making. Multi-Armed Bandit (MAB) algorithms 1, on the other hand, prioritize fast identification of successful strategies but sacrifice statistical depth. The MAD offers a hybrid approach that mixes any chosen MAB algorithm with a Bernoulli design 2, combining the quick adaptability of MABs with the rigorous inference of A/B testing. This balance allows businesses to experiment faster without losing confidence in the validity of their results.

By adjusting the weight between exploration (testing different options) and exploitation (favoring the current best option), the MAD gives managers control over the experiment's speed and precision. This is particularly useful in environments where quick decisions are critical, but businesses cannot afford to compromise on accuracy.
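The mixture rule behind this control is simple enough to sketch. The snippet below is a minimal illustration, not the authors' implementation: the two-arm setup, the exponent of 0.24, and passing the bandit's recommendation in as a plain argument are all assumptions made for the example.

```python
import random

def mad_assign(t, bandit_choice, delta_exponent=0.24):
    """Mixture Adaptive Design assignment at timestep t (1-indexed).

    With probability delta_t, assign by a fair Bernoulli coin flip;
    otherwise defer to the underlying bandit algorithm's choice.
    delta_t = t**(-delta_exponent) shrinks to zero more slowly than
    t**(-1/4), the rate condition the paper requires for valid inference.
    """
    delta_t = t ** (-delta_exponent)
    if random.random() < delta_t:
        return random.randint(0, 1)  # exploratory 50/50 assignment
    return bandit_choice             # exploit the bandit's recommendation

# At t = 1, delta_t = 1, so the design behaves exactly like an A/B test;
# as t grows, assignments increasingly follow the adaptive algorithm.
```

Choosing a smaller exponent keeps the design closer to a classical A/B test for longer, trading regret for statistical power.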

Key Insight: Minimizing Business Risk Through Early Stopping

"Experimentation de-risks innovation by reducing the proportion of people exposed to a potentially harmful change." [2]

A standout feature of the MAD is the flexibility it offers in experiment design. With traditional designs, managers must commit to a fixed sample size or duration up front, leaving them no way to monitor results and stop harmful experiments early. The MAD changes this by allowing managers to monitor results continuously and adjust the balance between exploration and exploitation.

For businesses like e-commerce platforms or consumer-facing services, the features this approach offers are essential. The MAD can help quickly phase out underperforming variations and focus on the successful ones, protecting both the customer experience and the company's bottom line.

Key Insight: How the MAD Achieves Anytime-Valid Inference and Minimizes Regret Compared to Standard Bandit Designs

"Using the MAD, the user can exactly quantify at each time point the loss in regret between the MAD and its underlying adaptive algorithm through δt and thus, has complete control over the rate at which that loss shrinks over time." [3]

To get into the nitty-gritty: at each timestep, the MAD assigns treatments based either on the Bernoulli design (with probability δt 3) or the bandit algorithm (with probability 1 - δt). The deterministic sequence δt controls the balance between these two. By ensuring δt converges to zero more slowly than 1/t^(1/4), the MAD retains inferential power and validity similar to the Bernoulli design while still leveraging the adaptive learning capabilities of the bandit algorithm.

Meanwhile, the MAD's regret 4 is a weighted sum of the regret from the underlying bandit algorithm and the regret from the Bernoulli design, with weights determined by δt. By carefully selecting δt, one can explicitly control the trade-off between regret minimization and statistical power. At each time point, the regret difference between the MAD and the underlying bandit algorithm decreases toward zero at a rate proportional to δt.
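Both properties can be checked with a back-of-the-envelope calculation. The per-step regret numbers below are hypothetical, not figures from the paper: any δt = t^(-a) with 0 < a < 1/4 satisfies the rate condition, and the MAD's excess regret over the pure bandit then shrinks at exactly the rate δt.

```python
def delta(t, a=0.2):
    """Exploration weight at timestep t; any exponent 0 < a < 1/4 makes
    delta_t shrink to zero more slowly than t**(-1/4)."""
    return t ** (-a)

def mad_expected_regret(t, bernoulli_regret, bandit_regret, a=0.2):
    """Per-step expected regret of the MAD: a delta_t-weighted average of
    the Bernoulli design's regret and the underlying bandit's regret."""
    d = delta(t, a)
    return d * bernoulli_regret + (1 - d) * bandit_regret

# Hypothetical per-step regrets: 0.5 for the 50/50 Bernoulli design (half
# the traffic goes to a much worse arm) versus 0.05 for the bandit.
excess = [mad_expected_regret(t, 0.5, 0.05) - 0.05 for t in (1, 100, 10_000)]
# The MAD's excess regret over the pure bandit shrinks like delta_t itself.
```

At t = 1 the MAD pays the full Bernoulli price; by t = 10,000 its excess regret has fallen by roughly a factor of delta(10_000), illustrating the user-controlled trade-off the quote describes.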

Why This Matters

For modern business leaders, speed and accuracy are essential in decision-making. Whether it's optimizing digital marketing strategies, enhancing user experience, or improving product features, the MAD provides a powerful tool for executives looking to make data-driven decisions more efficiently. In industries where rapid innovation is key, the MAD can help firms maintain their competitive edge by enabling smarter, more flexible experimentation.

Footnotes

1. A multi-armed bandit (MAB) algorithm is a type of reinforcement learning algorithm used for sequential decision-making in scenarios where there are multiple options (or “arms”) to choose from, and the goal is to maximize rewards over time. The name derives from the analogy of a gambler facing a row of slot machines (“one-armed bandits”), each with an unknown payout probability. The gambler’s objective is to figure out which machine offers the best rewards and exploit it as much as possible.

2. Bernoulli Design is a fundamental experimental design where each unit is independently assigned to either treatment or control with a fixed probability, typically 0.5. In other words, it’s like flipping a coin to determine the treatment assignment for each participant.

3. The sequence δt is a crucial element of the Mixture Adaptive Design (MAD). It is a deterministic, user-specified sequence of values in the half-open interval (0, 1], i.e., each δt is greater than zero and at most one.

4. Regret, in the context of multi-armed bandit (MAB) experiments, is the expected difference in outcome between always choosing the best-performing treatment (arm) and the outcome achieved by the algorithm used in the experiment. In simpler terms, regret quantifies the opportunity cost of not consistently choosing the best option. The goal of many MAB algorithms is to minimize regret over time by learning which arm yields the best results.
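Footnote 4's notion of regret can be made concrete with a toy calculation; the arm means and pull sequence below are illustrative numbers, not data from the study.

```python
def cumulative_regret(true_means, pulls):
    """Expected cumulative regret: for each pull, add the gap between the
    best arm's mean reward and the mean of the arm actually chosen."""
    best = max(true_means)
    return sum(best - true_means[arm] for arm in pulls)

# Two arms with true mean rewards 0.5 and 0.7; the algorithm pulls the
# worse arm (index 0) twice before settling on the better one (index 1).
regret = cumulative_regret([0.5, 0.7], [0, 0, 1, 1, 1])  # 2 * (0.7 - 0.5)
```

An algorithm that always pulled the best arm would accrue zero regret; minimizing this quantity over time is the MAB objective the footnote describes.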

References

[1] Biyonka Liang and Iavor Bojinov, "An Experimental Design for Anytime-Valid Causal Inference on Multi-Armed Bandits" (June 14, 2024): 1-48, 1.

[2] Liang and Bojinov, "An Experimental Design for Anytime-Valid Causal Inference on Multi-Armed Bandits," 2.

[3] Liang and Bojinov, "An Experimental Design for Anytime-Valid Causal Inference on Multi-Armed Bandits," 15.

Meet the Authors

is a PhD candidate in Statistics at 性视界 University. Her research, which is supported by the NSF Graduate Research Fellowship, focuses on developing statistical methods and models for challenging practical problems in adaptive experimentation, reinforcement learning, and causal inference.

is an Assistant Professor of Business Administration and the Richard Hodgson Fellow at HBS as well as a faculty PI at the HBS AI Institute Data Science and AI Operations Lab and a faculty affiliate in the Department of Statistics at 性视界 University and the 性视界 Data Science Initiative. His research focuses on developing novel statistical methodologies to make business experimentation more rigorous, safer, and efficient, specifically homing in on the application of experimentation to the operationalization of artificial intelligence (AI), the process by which AI products are developed and integrated into real-world applications.

