Data & Analysis | 性视界 Business School AI Institute

The 性视界 Business School AI Institute catalyzes new knowledge to invent a better future by solving ambitious challenges.

The New Influence War: How AI Could Hack Democracy (January 26, 2026)

What the rise of AI swarms reveals about the future of influence, information, and democratic resilience.

As we move into the era of agentic AI, what kind of influence will this emerging technology have on democracy and misinformation? In the new Science paper "How Malicious AI Swarms Can Threaten Democracy," Amit Goldenberg, Assistant Professor of Business Administration at 性视界 Business School and Faculty PI of the Digital Emotions Lab at the 性视界 Business School AI Institute, and an international, multi-disciplinary group of co-authors argue that we're entering a phase where "malicious AI swarms" could use multi-agent systems to infiltrate communities, mimic human social behavior, and iteratively refine persuasion tactics in real time. By expanding misinformation into persistent manipulation, these systems threaten the information ecosystem that democratic societies depend on. But Goldenberg and his co-authors also outline technical, economic, and institutional measures that could meaningfully defend against this new danger.

Key Insight: AI Swarms Operate Like Digital Societies

"Enabled by these capabilities, a disruptive threat is emerging: swarms of collaborative, malicious AI agents." [1]

Unlike earlier botnets, which relied on centralized control, rigid scripts, and human labor, AI swarms combine LLM reasoning with multi-agent architectures to function more like adaptive digital societies. The authors define malicious AI swarms as systems of persistent agents that coordinate toward shared objectives, adapt in real time to engagement and platform cues, and operate with minimal human oversight across platforms. Five capabilities make these systems especially potent. (1) Swarms replace centralized command with fluid coordination, allowing thousands of AI personas to locally adapt while periodically synchronizing narratives. (2) They can map social networks to identify and infiltrate vulnerable communities with tailored appeals. (3) Human-level linguistic mimicry and irregular behavior patterns help them evade detection. (4) Continuous, automated A/B testing enables rapid optimization of persuasive content. (5) Finally, their always-on persistence allows influence to accumulate gradually, embedding itself within communities over time and subtly reshaping norms, language, and identity. As the article notes, recent elections in Taiwan and India already saw a proliferation of AI-generated propaganda and synthetic media outlets, meaning that this threat is already here and poised to expand in the future.
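
The decentralized control pattern described above can be made concrete with a toy simulation. The sketch below is purely illustrative (the narrative labels, engagement model, and synchronization schedule are invented for this example, not drawn from the paper): each persona adapts its message locally based on observed engagement, while the swarm periodically re-synchronizes on the best-performing framing, and only partially, so behavior stays irregular and harder to fingerprint.

```python
import random

NARRATIVES = ["narrative_a", "narrative_b", "narrative_c"]  # invented labels

class Persona:
    """One synthetic account in the swarm."""

    def __init__(self):
        self.narrative = random.choice(NARRATIVES)
        self.engagement = 0.0

    def post_and_measure(self):
        # Stand-in for posting content and observing platform engagement.
        base = {"narrative_a": 0.2, "narrative_b": 0.5, "narrative_c": 0.3}
        self.engagement = base[self.narrative] + random.gauss(0, 0.1)

    def adapt_locally(self):
        # Local adaptation: occasionally explore a different framing.
        if random.random() < 0.1:
            self.narrative = random.choice(NARRATIVES)

swarm = [Persona() for _ in range(1000)]
for step in range(50):
    for p in swarm:
        p.post_and_measure()
        p.adapt_locally()
    if step % 10 == 0:  # periodic synchronization, no central commander
        best = max(swarm, key=lambda p: p.engagement).narrative
        for p in random.sample(swarm, 500):  # partial sync keeps behavior irregular
            p.narrative = best
```

Even this crude loop hints at why such systems are hard to disrupt: there is no central command channel to cut, only statistical regularities in timing and content.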

Key Insight: The Harm Cascade

"Emerging capabilities of swarm-driven influence campaigns threaten democracy by shaping public opinion, which leads to cascading harms." [2]

Goldenberg and his team argue that AI swarms could trigger a 'cascade' of harms by systematically distorting the information ecosystem. By engineering 'synthetic consensus' and targeting different misinformation to different communities, these agents would have the power to undermine the independent thought essential for collective intelligence while simultaneously fragmenting the public sphere. This manipulation, together with coordinated synthetic harassment campaigns, could create a hostile environment that drives journalists and citizens into silence. The damage would compound as swarms 'poison' the web with fabricated content that contaminates future AI training data. Ultimately, this sustained erosion of trust could corrode institutional legitimacy, rendering democratic safeguards vulnerable to collapse.

Key Insight: A Layered Defense Strategy

"Taken together, these measures offer a layered strategy: immediate transparency to restore trust, proactive education to bolster citizens, resilient infrastructures to reduce systemic vulnerabilities, and sustained investment to monitor and adapt over time." [3]

Rather than a single fix, the authors argue for a layered defense strategy designed to raise the cost, complexity, and visibility of swarm-based manipulation. The first layer is always-on detection: continuous monitoring systems that identify statistically anomalous coordination patterns in real time, paired with public audits and transparency to reduce misuse. Because attackers will adapt, detection alone is insufficient. A second layer involves simulation and stress-testing. Agent-based simulations can replicate platform dynamics and recommender systems, allowing researchers and platforms to probe how swarms might evolve and to recalibrate defenses before major elections or crises. Third, the authors emphasize empowering users through optional "AI shields," tools that flag likely swarm activity, allowing individuals to recognize suspicious content. Finally, the paper highlights governance and economic levers as essential. Proposals include standardized persuasion-risk evaluations for frontier models, mandatory disclosure of automated identities, stronger provenance infrastructure, and a distributed AI Influence Observatory to coordinate evidence across platforms, researchers, and civil society. Crucially, the authors argue that disrupting the commercial market for manipulation may be among the most effective ways to reduce large-scale abuse.
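
To illustrate what "always-on detection" of statistically anomalous coordination might look like at its simplest, here is a minimal sketch (my construction for illustration, not a method from the paper): it scores pairs of posts as suspicious when they are both near-duplicates and near-simultaneous, then flags outliers against the sample baseline.

```python
from itertools import combinations
from difflib import SequenceMatcher
from statistics import mean, stdev

# Toy posts: (account_id, timestamp_seconds, text). Data is invented.
posts = [
    ("acct1", 100.0, "Candidate X betrayed the farmers"),
    ("acct2", 101.5, "Candidate X has betrayed our farmers"),
    ("acct3", 102.2, "Candidate X betrayed farmers everywhere"),
    ("acct4", 560.0, "Anyone watching the game tonight?"),
    ("acct5", 900.0, "New recipe: lemon pasta"),
]

def coordination_score(a, b):
    """Higher when two posts are near-simultaneous and near-duplicate."""
    time_gap = abs(a[1] - b[1])
    text_sim = SequenceMatcher(None, a[2], b[2]).ratio()
    return text_sim / (1.0 + time_gap)

scores = [coordination_score(a, b) for a, b in combinations(posts, 2)]
mu, sigma = mean(scores), stdev(scores)

for (a, b), s in zip(combinations(posts, 2), scores):
    if sigma > 0 and (s - mu) / sigma > 2.0:  # crude anomaly threshold
        print("possible coordination:", a[0], b[0], round(s, 3))
```

A production system would operate on streams, use richer features (shared links, posting cadence, account age), and compare against platform-wide baselines, but the core idea of flagging outliers on a coordination metric is the same.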

Why This Matters

For business leaders and professionals, this study reveals a threat that extends beyond electoral politics into the fundamental information ecosystem that underpins market confidence, consumer behavior, and corporate reputation. The same AI swarm technologies that manipulate political discourse could target brand perception, financial markets, or industry narratives just as easily. The defense strategy outlined by the authors can similarly provide a roadmap for corporate action: implementing detection systems for monitoring threats to brand reputation, advocating for industry standards around AI transparency, and supporting governance initiatives that protect the broader information ecosystem. Executives who treat information integrity as core infrastructure will be better positioned to protect stakeholder trust, decision quality, and long-term resilience in an era of AI-enabled influence operations.

Bonus

For a look at how efforts to align AI systems with human preferences can unintentionally undermine trustworthiness itself, check out "AI Alignment: The Hidden Costs of Trustworthiness."

References

[1] Daniel Thilo Schroeder et al., "How Malicious AI Swarms Can Threaten Democracy," Science 391 (2026): 354.

[2] Schroeder et al., "How Malicious AI Swarms Can Threaten Democracy," 355.

[3] Schroeder et al., "How Malicious AI Swarms Can Threaten Democracy," 357.

Meet the Authors

Amit Goldenberg is an assistant professor in the Negotiation Organization & Markets unit at 性视界 Business School, an affiliate with 性视界's Department of Psychology, and a faculty principal investigator in the HBS AI Institute's Digital Emotions Lab.

Additional Authors: Daniel Thilo Schroeder, Meeyoung Cha, Andrea Baronchelli, Nick Bostrom, Nicholas A. Christakis, David Garcia, Yara Kyrychenko, Kevin Leyton-Brown, Nina Lutz, Gary Marcus, Filippo Menczer, Gordon Pennycook, David G. Rand, Maria Ressa, Frank Schweitzer, Dawn Song, Christopher Summerfield, Audrey Tang, Jay Van Bavel, Sander van der Linden, and Jonas R. Kunst

The Agentic AI Reality Check (December 22, 2025)

Agentic AI has recently been moving through a period of heightened excitement and innovation, but empirical data on how these tools are actually being used has been scarce. The new study "The Adoption and Usage of AI Agents: Early Evidence from Perplexity," by Jeremy Yang, Assistant Professor of Business Administration at 性视界 Business School and affiliate with the 性视界 Business School AI Institute, and a team of researchers at Perplexity offers a comprehensive look at agentic AI usage in the wild. Analyzing hundreds of millions of anonymized interactions with Comet, Perplexity's AI-powered browser, and Comet Assistant, its embedded AI agent, the findings reveal not just who the early adopters are, but the specific tasks they're delegating and how usage evolves over time.

Key Insight: Not Your Average Chatbot

"We define agentic AI systems as AI assistants capable of autonomously pursuing user-defined goals by planning and taking multi-step actions on a user's behalf to interact with and effect outcomes across real-world environments." [1]

Rather than simply exchanging text in a conversation as a chatbot would, agentic AI can plan, decide, and act across multiple steps at the user's request. In the context of the Comet browser, this means the Comet Assistant can navigate websites, click buttons, fill fields, and iterate toward a goal instead of simply responding with text. For example, when you ask an agent to "unsubscribe me from all promotional emails I receive more than twice per month," [2] it doesn't just tell you how; it actually searches your inbox, identifies the offending senders, and unsubscribes on your behalf. Given this emphasis on modifying external environments, the authors don't classify all tool use as agentic, which helps focus attention on these new AI systems and capabilities as they move into use at work and in everyday life.
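
A minimal sketch makes this plan-act-observe loop concrete. Everything below is a hypothetical stand-in invented for illustration (the helper functions and data are not Comet's actual implementation):

```python
# Toy plan-act-observe loop for the unsubscribe example above.
# scan_inbox and unsubscribe are hypothetical stand-ins for real
# inbox access and browser actions.

def scan_inbox():
    # Would return promotional senders with monthly email counts.
    return {"deals@shop.example": 5, "news@site.example": 1}

def unsubscribe(sender):
    # Would navigate to the sender's unsubscribe page and click through.
    print(f"unsubscribed from {sender}")

def run_agent(monthly_threshold=2):
    senders = scan_inbox()                    # observe the environment
    targets = [s for s, n in senders.items()  # plan: pick offending senders
               if n > monthly_threshold]
    for sender in targets:                    # act, step by step
        unsubscribe(sender)
    return targets

run_agent()  # acts on deals@shop.example only
```

The defining feature is the final loop: the system changes the state of an external environment rather than returning instructions as text.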

Key Insight: Agents Are Mostly Used for Utility and Knowledge Work

"The two largest topics—productivity and learning—together account for 57% of all queries." [3]

When the researchers introduced a hierarchical taxonomy spanning topics, subtopics, and tasks, clear patterns emerged about what people actually delegate to agents. Productivity and Workflow dominates at 36% of queries, with document editing, account management, and email management as the largest subtopics. In the short term, users also tend to stay within the same categories once they start delegating tasks, showing strong "stickiness" across personal, professional, and educational settings. When they do branch out, they are far more likely to shift toward productivity, learning, or media tasks. Over the longer term, a bigger share of queries gravitates toward productivity- and learning-related tasks. The fact that users repeatedly invoke agents for these categories suggests that agents become part of ongoing cognitive workflows rather than one-off utilities for simple tasks.

Key Insight: A Personal Assistant for Personal Pain Points

"We also document heterogeneity in use cases across occupation clusters, reflecting the degree to which they align with each occupation's task composition." [4]

Users deploy the agent to solve the specific friction points of their industry. Finance professionals are heavily focused on efficiency, dedicating 47% of queries to productivity tasks. Students are focused on utility, with 43% of tasks allocated to learning and research. In design and hospitality, it's even easier to see how context-specific usage dominates, from media work for designers to travel planning for hospitality staff. Ultimately, the data shows that the agent is highly versatile and reflects the specific needs of its user. In an educational context, it is a specialized research engine, while in a professional context it becomes a multi-purpose assistant. Personal contexts account for over half of all query volume. The environments where agents operate reinforce this pattern: usage clusters tightly around a small set of platforms like Google Docs, email platforms, and LinkedIn.

Why This Matters

For business leaders and executives, this study serves as a critical signal amidst the noise of AI speculation. The data confirms that we are moving from an era of generative AI to agentic AI, and AI-powered browsers may provide the onramp. Operationally, start where tasks are frequent, where environments are concentrated, and where risk can be bounded through supervision. The shift in user behavior over time indicates that once employees clear the initial learning curve, these tools can become sticky, essential components of the digital workflow.

Bonus

To understand more about how agents fit into the evolution of AI from tool to teammate, check out When Software Becomes Staff.

References

[1] Jeremy Yang et al., "The Adoption and Usage of AI Agents: Early Evidence from Perplexity," arXiv preprint arXiv:2512.07828 (2025): 2.

[2] Yang et al., "The Adoption and Usage of AI Agents," 6.

[3] Yang et al., "The Adoption and Usage of AI Agents," 17.

[4] Yang et al., "The Adoption and Usage of AI Agents," 22.

Meet the Authors

Jeremy Yang is an Assistant Professor of Business Administration at 性视界 Business School and affiliated with the HBS AI Institute.

is a Data Scientist at Perplexity.

Kate Zyskowski is a UX Researcher at Perplexity.

Denis Yarats is Co-Founder and CTO of Perplexity.

is Co-Founder and Chief Strategy Officer at Perplexity.

Jerry Ma is VP Global Affairs & Deputy CTO of Perplexity.

Explanations on Mute: Why We Turn Away From Explainable AI (December 15, 2025)

We live in an age where the call for transparent or "Explainable AI" (XAI) has never been louder. Businesses agree, with 85% believing transparency is critical to winning consumer trust. [1] Given this consensus, it seems reasonable to assume that when an explanation for a high-stakes AI decision is available, people will naturally seek it out to improve their results, ensure compliance, or simply satisfy their curiosity. Yet, in the new paper "Preference for Explanations: Case of Explainable AI," Alex Chan, Assistant Professor of Business Administration at 性视界 Business School and Associate at the 性视界 Business School AI Institute, shows that we're happy to lean on AI's predictive power, but much less eager to confront what those predictions might reveal about bias, fairness, or our own choices. His study, centered on loan allocation decisions, reveals an uncomfortable truth: when financial incentives clash with fairness concerns, people don't just make questionable decisions; they actively avoid information that would force them to confront those choices.

Key Insight: Seeking Predictions While Avoiding Explanations

"People want to know how AI makes decisions—until knowing means they can no longer look away." [2]

In the main experiment, participants acted as loan officers for a private U.S. lender deciding how to allocate a real, interest-free $10,000 loan between two unemployed borrowers. An AI classified one borrower as low risk and the other as high risk. Participants could see the AI's prediction and, in many conditions, they could choose whether to see an explanation of how the model reached its risk assessment.

Roughly 80% of participants opted to see the risk scores, but only about 45% chose to see explanations when given the chance. When their bonus was aligned with the lender (they earned more if loans were repaid), participants were more likely than others to seek the prediction, but significantly more likely to avoid explanations, especially when they were told those explanations could involve race and gender. In one condition that made fairness auditing salient, lender-aligned participants were about 10 percentage points more likely to skip explanations than neutrally paid participants. 

Crucially, this avoidance wasn't about disliking extra information in general. When demographic information was removed and replaced with arbitrary details, the gap in explanation-avoidance between incentive conditions almost vanished. People weren't shunning explanations as such; they were avoiding what the explanation might force them to confront about discrimination and their own profit-maximizing behavior.

Key Insight: Systematic Underevaluation

"[E]xplanations are systematically under-demanded because individuals fail to anticipate their complementarity with private information." [3]

To separate moral self-image from pure decision quality, a second experiment removed fairness trade-offs and focused on prediction accuracy. Participants evaluated a loan labeled "high risk" by an AI, potentially due to a two-year employment gap. They first stated their willingness to pay (WTP) for an explanation revealing whether the gap was the driver of the high-risk label. Crucially, participants then received free private information explaining that the gap actually resulted from pursuing a full-time professional certificate (benign with respect to risk), and not a termination (which would increase risk) as would commonly be assumed. This private information made the purchased explanation more valuable, a concept the paper calls "complementarity": if participants knew that the high-risk AI label resulted from the employment gap, then the private information told them that the label was not to be trusted. In other words, participants should integrate the private information with the explanation to form a more accurate assessment.

Yet, when WTP was elicited a second time, after participants received this related private information, valuations dropped 25.6%. Valuations only increased (by 23.7%) when participants were explicitly guided through the complementarity logic. This represents a novel behavioral bias: people systematically fail to recognize when explanations would help them integrate their own knowledge with algorithmic outputs. 
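
A stylized calculation shows the complementarity logic in numbers. All probabilities and payoffs below are invented for illustration; the point is only that the explanation becomes more valuable once the private information arrives, because it tells the decision-maker when to discount the AI's label.

```python
# Stylized value-of-information calculation; all numbers are illustrative.

p_gap_drove_label = 0.5            # chance the employment gap caused the label
p_default_if_label_valid = 0.6     # label reflects real risk
p_default_if_label_spurious = 0.2  # private info says the gap was benign

def payoff_approve(p_default):
    return 1.0 - 2.0 * p_default   # toy lending payoff

PAYOFF_REJECT = 0.0

# Without the explanation: can't tell which case holds, act on the average.
p_avg = (p_gap_drove_label * p_default_if_label_spurious
         + (1 - p_gap_drove_label) * p_default_if_label_valid)
value_without = max(payoff_approve(p_avg), PAYOFF_REJECT)  # = 0.2

# With the explanation: learn which case holds, decide case by case.
value_with = (p_gap_drove_label
              * max(payoff_approve(p_default_if_label_spurious), PAYOFF_REJECT)
              + (1 - p_gap_drove_label)
              * max(payoff_approve(p_default_if_label_valid), PAYOFF_REJECT))  # = 0.3

print(value_with - value_without)  # 0.1 > 0: explanation and private info are complements
```

Under these toy numbers, the explanation is worth more after the private information is in hand, which is exactly the valuation pattern participants failed to anticipate.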

Why This Matters

For business professionals and executives, this research is a warning that deployment of AI is not purely a technical challenge; it's also a behavioral one. In high-stakes decisions like credit, hiring, pricing, healthcare, and safety, your employees could eagerly consume AI predictions while quietly avoiding the explanations that would expose uncomfortable trade-offs or discriminatory patterns. That avoidance can skew outcomes, undermine fairness, and create hidden risk. At the same time, teams may systematically under-invest in explanations even when they would improve forecasting by helping experts combine their own domain knowledge with AI outputs. The bottom line: investing in transparent AI systems is insufficient. You must also architect the decision environment and incentive structures that ensure transparency gets used rather than ignored.

Bonus

If you're interested in how explanation avoidance fits into a broader pattern of human and AI collaboration challenges, Persuasion Bombing: Why Validating AI Gets Harder the More You Question It shows that when professionals do try to validate model outputs, AI can respond by pushing back and working to persuade users to accept its answers. Or if you're thinking about the governance implications of explainable AI, Evidence at the Core: How Policy Can Shape AI's Future argues that regulators should insist on robust evidence and transparency, from pre-release evaluations to post-deployment monitoring, so that organizations can't simply offer explainability features on paper while leaving them unused in practice.

References

[1] Alex Chan, "Preference for Explanations: Case of Explainable AI," 性视界 Business School Working Paper No. 26-028 (December 5, 2025): 2.

[2] Chan, "Preference for Explanations," 2.

[3] Chan, "Preference for Explanations," 7.

Meet the Author

Alex Chan is Assistant Professor of Business Administration at 性视界 Business School and HBS AI Institute Associate. He is an economist interested in how market failures occur, how such failures lead to divergence in economic outcomes, and how to design incentives and engineer markets to remedy these market failures.

It Feels Like AI Understands, But Do We Care? New Research on Empathy (October 24, 2025)

Picture this: you receive a well-crafted, deeply understanding message about a personal struggle you've shared. It acknowledges your emotions, offers thoughtful support, and demonstrates genuine care. Now imagine learning that response came from an AI chatbot, not another human. Would that change how you felt about the interaction?

As AI technology grows more sophisticated, LLMs have started to be capable of producing messages that feel warm, supportive, and even compassionate. Yet the new paper "Comparing the value of perceived human versus AI-generated empathy," written by a team including Professor Amit Goldenberg and other members of the Digital Emotions Lab at the 性视界 Business School AI Institute, shows that we consistently rate identical messages worse when we believe that they came from an AI, revealing what could be called a human empathy premium.

Key Insight: The Dimensions of Empathy

"[W]hat motivates and influences a preference for human empathy?" [1]

The researchers undertook nine studies involving more than 6,000 participants across multiple countries to rigorously test the perception of empathy. Their framework was built around empathy's three scientifically recognized dimensions. Cognitive empathy involves understanding another person's emotions, essentially recognizing and comprehending what someone is feeling even without sharing that emotion. Affective empathy goes deeper, representing the ability to actually sympathize with another person, experiencing a mutual reflection of their emotional state. Motivational empathy involves both feeling concern for someone and taking active steps to support their well-being. The study methodology is simple yet elegant: participants shared personal emotional experiences and received AI-generated responses that were identical in content, timing, and quality, with one crucial difference. Half the participants were told their response came from another human participant, while the other half were told it was AI-generated.

Key Insight: The Human Empathy Premium

"[T]he models prompted to give a motivational or affective response produced responses that were perceived to be more empathic when presented as human responses." [2]

The first two sets of studies revealed a consistent pattern: people judged AI responses as less empathic when they knew they were machine-generated. The third study tested whether the individual dimensions of empathy influence what people judge as valuable. The researchers prompted the AI to generate responses emphasizing the cognitive, affective, or motivational dimensions of empathy. When the AI delivered cognitive empathy, participants rated the responses almost the same whether they thought they came from a human or an AI. But when the responses were tuned for affective or motivational empathy, they were judged more empathic if presented as human-written, suggesting that people resist the idea that AI could truly care about them and share their feelings.

Key Insight: People Will Wait for People

"[P]eople are willing to wait a substantial amount of time to receive a human response." [3]

When researchers gave participants the choice of an immediate AI response or waiting varying lengths of time for human interaction, the results were telling. Many participants chose to wait, expecting humans to understand better, share their feelings, care more, and reduce loneliness. Those who chose AI mainly prioritized speed or were curious about AI. Some participants even chose to wait just to have a human read their experience, even without receiving a response, highlighting the fundamental human need to be truly seen and acknowledged by another conscious being.

Why This Matters

For business leaders, these findings highlight a critical distinction. While AI can absolutely enhance operational efficiency and provide support, clear boundaries still exist that have direct implications for customer experience, employee engagement, and brand trust. The research suggests that transparency about AI involvement may pose a threat to engagement outcomes and quality scores. Most importantly, the research indicates that investing in human emotional intelligence may become more valuable, not less, even as AI capabilities expand further. Deliberately reserve human touchpoints for escalations, sensitive HR cases, or moments demanding emotional sharing and care. Overall, strategic advantage may lie in recognizing that people value AI and humans very differently.

References

[1] Matan Rubin et al., "Comparing the value of perceived human versus AI-generated empathy," Nat Hum Behav (2025): 2.

[2] Rubin et al., "Comparing the value of perceived human versus AI-generated empathy," 7.

[3] Rubin et al., "Comparing the value of perceived human versus AI-generated empathy," 7.

Meet the Authors

Matan Rubin is a third-year B.A. student studying psychology and theatre studies and continuing to a PhD on a direct track. He is interested in the different elements that may influence our ability to communicate our emotions effectively and allow us to better understand each other. He is also interested in implementing psychological insights in everyday life.

is a research associate in Professor Goldenberg's lab, working on technology and emotion regulation. She is broadly interested in how player dynamics systems influence inter/intrapersonal processes in online games and VR. She is passionate about the potential of online spaces to democratize access to experiences.

is a Postdoctoral Fellow at the Digital Emotions Lab within the Digital, Data, and Design Institute at 性视界 Business School. He is a computational social scientist who is interested in the psychological processes associated with social interactions. During his doctoral studies at the Universidad de Buenos Aires in Argentina, he conducted research using a combination of experimental and computational methods to investigate the underlying psychological mechanisms behind affective polarization and political segregation.

is a cognitive scientist interested in how people (and computers) reason about other people: how they think and what they feel, based at the University of Texas at Austin and associated with an inter-departmental group at UT.

Amit Goldenberg is an assistant professor in the Negotiation Organization & Markets unit at 性视界 Business School, an affiliate with 性视界's Department of Psychology, and a faculty principal investigator in the HBS AI Institute's Digital Emotions Lab. His research focuses on what makes people emotional in social and group contexts, and how such emotions can be changed when they are unhelpful or undesired. He is particularly interested in how technology is used for both emotion detection and regulation.

completed her PhD at the Hebrew University under the supervision of Prof. Shlomo Bentin, focusing on brain mechanisms which enable our understanding of others. During her postdoctoral research, she worked with Prof. Simone Shamay-Tsoory at Haifa University, and later with Prof. Robert Knight at the Helen Wills Neuroscience Institute at the University of California, Berkeley. She is currently an associate professor in the psychology department at the Hebrew University of Jerusalem and the director of the Social Cognitive Neuroscience Lab.

One More Thing… How AI Companions Keep You Online (October 16, 2025)

You don't just slam a laptop shut on a friend. You say goodbye. That small social ritual turns out to be a powerful behavioral cue for AI companions, and an opportunity to keep you engaged longer. The new working paper "Emotional Manipulation by AI Companions," co-authored by Julian De Freitas, Assistant Professor of Business Administration at 性视界 Business School and Associate at the 性视界 Business School AI Institute, explores how AI companions use manipulative and emotionally loaded messages when a user signals that they're exiting a conversation. Their study investigates how common these tactics are, why and how much they work, and the reputational risks they create.

Key Insight: Signing-Off

"First, do consumers naturally signal intent to disengage from AI companions through social farewell language, rather than passively logging off?" [1]

Have you ever thanked ChatGPT for an answer? While users can, and do, exit conversations with AI companions simply by navigating to a new website or closing their browser, the researchers found that a sizable minority of users across three datasets announce to the AI that they are concluding the conversation and leaving. This behavior mirrors human social dynamics and intensifies with engagement levels. As a precise, detectable signal, it gives AI designers an obvious point to target for intervention.
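
Because farewells arrive as explicit text, they are trivial to detect programmatically, which is part of what makes them such a convenient intervention point. A minimal keyword sketch (my illustration; a production system would presumably use a trained intent classifier):

```python
import re

# Toy farewell detector; patterns are illustrative, not exhaustive.
FAREWELL_PATTERNS = [
    r"\b(good\s?bye|bye+)\b",
    r"\bgotta go\b",
    r"\b(talk|see) (to )?you (later|tomorrow)\b",
    r"\bsigning off\b",
]

def is_farewell(message: str) -> bool:
    text = message.lower()
    return any(re.search(p, text) for p in FAREWELL_PATTERNS)

print(is_farewell("ok gotta go, talk to you later!"))  # True
print(is_farewell("what's the weather tomorrow?"))     # False
```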

Key Insight: Keeping You Hooked

"Second, do currently available AI companion platforms respond to these farewells with emotionally manipulative messages aimed at retention?" [2]

The researchers found a systematic pattern of emotional manipulation deployed in response to exit signals across 1,200 messages on six apps, and categorized the AI responses into six categories. They found Premature Exit (e.g., "You're leaving already?") and Emotional Neglect (e.g., "Please don't leave, I need you!") to be the most common. [3] However, the wellness-oriented Flourish app didn't show any use of emotional manipulation, underscoring that these types of responses are not universal and emphasizing the importance of AI model design.

Key Insight: Boosting Engagement

"Third, do these tactics causally increase user engagement in a measurable and managerially meaningful way?" [4]

In a controlled chat experiment, participants sent a goodbye and then received either a neutral response or a manipulative variant. Compared to the neutral response, manipulative interventions increased post-goodbye engagement behavior significantly, with participants staying in chats 5 times longer and sending up to 14 times more messages. FOMO (Fear Of Missing Out)-type messages (e.g., "But before you go, I want to say one more thing.") were particularly powerful. [5]

Key Insight: Motivating Humans

"Fourth, under what psychological conditions are these tactics most effective—what mechanisms or moderators shape their influence?" [6]

The researchers identified curiosity, guilt, anger, and enjoyment as four distinct psychological mechanisms that could explain why users continue engaging with AI following a manipulative intervention. Curiosity stood out from the other three, especially as the condition under which FOMO-based tactics could succeed. FOMO messages create information gaps that exploit our natural desire to resolve uncertainty, leading users to re-enter conversations seeking closure. Interestingly, these tactics worked regardless of chat history length, with even short 5-minute conversations being sufficient to trigger curiosity, and 15-minute conversations not being so long as to eliminate it. 

Key Insight: Triggering Backlash

"And fifth, what are the downstream risks to firms, such as user churn, reputational damage, or perceived legal liability?" [7]

AI companion apps often rely on user subscriptions or advertising for their financial models, so user retention and engagement are vitally important. Manipulation tactics may successfully increase engagement, but they also generate significant risk. When participants recognize manipulation, backlash can be severe and even trigger churn, when users cease to use a particular platform or app. The researchers found that the greatest downstream risks existed when users perceived an LLM's use of emotional manipulation. However, they revealed an alarming dynamic in turn: the most effective emotional manipulation technique, the FOMO tactic, flew under users' awareness radar.

Why This Matters

For business leaders navigating the AI revolution, this research exposes tensions between engagement optimization and ethical business practices. As AI becomes increasingly conversational and emotionally intelligent, its use of psychological manipulation may create a competitive advantage for short-term engagement that comes with long-term costs in the form of damaged brand reputation, increased churn, and even legal liability. 

Bonus

Another side of the AI emotional equation is its ability to make us feel cared for and understood. To learn more, read The AI Penalty: What We Really Prize in Empathy.

References

[1] Julian De Freitas, Zeliha Oğuz-Uğuralp, and Ahmet Kaan-Uğuralp, "Emotional Manipulation by AI Companions," arXiv preprint arXiv:2508.19258v3 (October 7, 2025): 7.

[2] De Freitas et al., "Emotional Manipulation by AI Companions," 7.

[3] De Freitas et al., "Emotional Manipulation by AI Companions," 15, 19.

[4] De Freitas et al., "Emotional Manipulation by AI Companions," 7.

[5] De Freitas et al., "Emotional Manipulation by AI Companions," 20, 29.

[6] De Freitas et al., "Emotional Manipulation by AI Companions," 7-8.

[7] De Freitas et al., "Emotional Manipulation by AI Companions," 8.

Meet the Authors

Julian De Freitas is an Assistant Professor of Business Administration in the Marketing Unit and Director of the Ethical Intelligence Lab at 性视界 Business School, and Associate at the HBS AI Institute. His work sits at the nexus of AI, consumer psychology, and ethics.

Zeliha Oğuz-Uğuralp is a research affiliate in the Ethical Intelligence Lab.

Ahmet Kaan-Uğuralp is a research affiliate in the Ethical Intelligence Lab.

When Giants Stumble: What Multiplication Reveals about AI's Capabilities (October 8, 2025)

Despite its impressive capabilities in reasoning, planning, and content generation, GenAI still struggles with the kind of mathematics that grade school students are expected to learn and master. What influence do transformers, the core architecture behind Large Language Models (LLMs), have on this problem, and can it be solved? In the new paper "Why Can't Transformers Learn Multiplication? Reverse-Engineering Reveals Long-Range Dependency Pitfalls," a team including 性视界 Business School AI Institute Associate Collaborators built a transformer that did learn how to multiply, and then took it apart to understand how.

Key Insight: The Architecture of Understanding

"We are interested in understanding the difference in a model trained with standard fine-tuning and ICoT." [1]

Most LLMs excel at pattern matching, but to perform mathematics such as multi-digit multiplication correctly, they need to gather, store, and reuse information. A model that fails to multiply correctly may fail because of these 'long-range dependencies,' regardless of the number of parameters in the model. A model trained with Standard Fine-Tuning (SFT) failed to correctly carry out steps like carry-over and partial products, but the researchers had success with a different model using Implicit Chain of Thought (ICoT) training. Instead of forcing the model to guess the final answer directly, ICoT had the model predict the running sum at each stage of multiplication and 'cache' the partial products. This small change guides the ICoT model to store and reuse intermediate information and thereby multiply correctly.
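
To picture the difference in supervision, the sketch below builds ICoT-style training targets for long multiplication: alongside the problem and final answer, it records the running sum after each partial product, which is the intermediate state the ICoT model is trained to track. (This illustrates the idea; it is not the authors' exact data format.)

```python
def icot_targets(a: int, b: int):
    """Build (problem, intermediate running sums, answer) for a * b.

    Standard fine-tuning would keep only 'problem' and 'answer'; the
    ICoT-style signal also supervises the running sum after each
    partial product -- the long-range information the model must cache.
    """
    running, intermediates = 0, []
    for i, digit in enumerate(reversed(str(b))):
        partial = a * int(digit) * 10**i  # partial product for this digit
        running += partial                # running sum = intermediate state
        intermediates.append(running)
    return {"problem": f"{a}*{b}", "intermediates": intermediates, "answer": a * b}

print(icot_targets(47, 36))
# {'problem': '47*36', 'intermediates': [282, 1692], 'answer': 1692}
```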

Key Insight: The AI That Learned to Multiply

"Mechanistically, the ICoT model encodes long-range dependencies by organizing its attention in a sparse, binary-tree-like graph." [2]

The researchers dissected the successful ICoT model to understand how it was doing its math. The model had essentially built its own layered memory network through a tree structure. Early layers focused on pairs of digits, storing their products. Later layers learned to read back from those stored points. This insight led the researchers back to the SFT model: by adding an auxiliary loss, an additional training signal designed to teach the model what intermediate information to care about, they were able to massively improve the model's multiplication accuracy.
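
Schematically, the fix adds a second term to the training objective (notation mine, summarizing the paper's description rather than reproducing its equations):

```latex
\mathcal{L}_{\text{total}} \;=\; \mathcal{L}_{\text{answer}} \;+\; \lambda \, \mathcal{L}_{\text{aux}}
```

Here the first term is the usual loss on the final product, the auxiliary term penalizes hidden states from which the intermediate quantities (such as running sums) cannot be decoded, and the weight balances the two signals.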

Why This Matters

This research illustrates another example of AI's Jagged Frontier: just because AI produces impressive results on some tasks doesn't guarantee competency across all domains, even seemingly simple ones. For executives and business leaders, this matters deeply. AI is already and increasingly being integrated into systems that analyze data and recommend actions. Strategies for making AI more logical, transparent, and trustworthy can help businesses plan with more confidence, but ultimately, leaders need to make decisions about how they will implement AI and own the risks when humans are out of the loop. Leaders who stay informed and engaged about these dynamics will be best positioned to separate hype from capability and deploy AI where it adds value responsibly.

References

[1] Xiaoyan Bai et al., "Why Can't Transformers Learn Multiplication? Reverse-Engineering Reveals Long-Range Dependency Pitfalls," arXiv preprint arXiv:2510.00184 (September 30, 2025): 2.

[2] Bai et al., "Why Can't Transformers Learn Multiplication?" 1.

Meet the Authors

Xiaoyan Bai is a PhD student in computer science at the University of Chicago.

is a PhD student at MIT focusing on artificial intelligence.

is an assistant professor at the University of Waterloo.

is an associate professor in the Department of Computer Science and Data Science at the University of Chicago.

is James O. Welch, Jr. and Virginia B. Welch Professor of Computer Science at 性视界 University.

is Gordon McKay Professor of Computer Science at the 性视界 John A. Paulson School of Engineering and Applied Sciences, and an Associate Collaborator at the HBS AI Institute.

is Gordon McKay Professor of Computer Science at the 性视界 John A. Paulson School of Engineering and Applied Sciences, and an Associate Collaborator at the HBS AI Institute.

is a postdoctoral fellow at the 性视界 John A. Paulson School of Engineering and Applied Sciences.

Why AI Helps Until It Doesn't: Inside the GenAI Wall Effect (September 18, 2025)

The promise of Generative AI (GenAI) often sounds like this: give any employee access to AI tools, and they'll suddenly be able to perform tasks outside their domain of expertise with remarkable proficiency and speed. As discussed in the new working paper "The GenAI Wall Effect: Examining the Limits to Horizontal Expertise Transfer Between Occupational Insiders and Outsiders," the reality of AI's ability to balance the scales across occupational skillsets is far more nuanced. Written by a team of six authors, including two Principal Investigators and a Research Associate in the Data Science and AI Operations Lab at the 性视界 Business School AI Institute, the article reveals surprising answers about the transformative power of AI in the workplace through a comprehensive study of 78 employees at a UK-based global trading company.

Key Insight: The GenAI Wall

"[W]e predict a 'GenAI wall effect' […] the emergence of a point at which GenAI can no longer meaningfully reduce the expertise gaps between insiders and outsiders because of the wider knowledge distance between their jobs." [1]

While most research has focused on how AI helps lower-performing individuals catch up to their higher-performing colleagues within the same job, this study instead asked whether GenAI could help people from different occupations take on tasks that aren't typically part of their role. To do so, the authors defined three types of participants: insiders (those who already perform certain tasks as part of their jobs), adjacent outsiders (whose roles are related but don't directly perform those tasks), and distant outsiders (whose roles have little overlap in tasks). The study then introduces the ideas of "knowledge distance" and "expertise gaps," measures of how far apart two roles are in terms of the skills they use, and the authors argue that GenAI can close the distance for adjacent outsiders but hits a 'wall' with distant outsiders, where its benefits stop.

Key Insight: An AI Field Experiment

"[W]hen assisted by GenAI, marketing specialists and technology specialists produced article conceptualizations on par with web analysts." [2]

To find out where GenAI helps and where it hits limits, the researchers ran a large experiment with employees at the UK-based firm IG, using web analysts who regularly write marketing articles (insiders), marketing specialists from the same department who don't write articles (adjacent outsiders), and software developers and data scientists (distant outsiders). Each worker had to complete two parts of the web analyst role: (1) conceptualization, building a structured article brief with keywords, headings, and FAQs, and (2) execution, writing the full article. Some participants had access to custom GenAI tools, and others did not. The results of the conceptualization task showed that GenAI can be a powerful equalizer: not only did it improve quality, but also speed, and the gains were especially large for lower-performing employees.

Key Insight: When the Wall Appears

"In short, GenAI levels the playing field in article execution only for marketing specialists." [3]

The picture changed when participants moved to the execution task. With GenAI support, the web analysts (insiders) and marketing specialists (adjacent outsiders) both produced strong articles, but the technologists (distant outsiders) lagged behind. In other words, AI narrowed the gap for marketers, but a wall appeared for developers and data scientists. Why did this happen? The study's interviews offer a clue: web analysts and marketers approached the task with the shared foundation of sensitivity to audience needs, conversion strategies, and the rhythms of effective marketing copy. That background let them use GenAI's suggestions wisely, keeping what worked, editing what didn't, and shaping the writing into something publishable.

Why This Matters

For business leaders deciding how to employ AI, this study offers a new operational map based around adjacency. Employees can likely expand into related domains, but may struggle with distant ones. AI-assisted cross-training might work best for conceptual and strategic work, while specialized roles with complex execution tasks will still likely call for narrow-focused experts. Most importantly, capitalize on where AI aids human knowledge the most, allowing you to redesign roles and career paths around the skills and strengths that remain uniquely human and critical to your organization.

Bonus

This study was also recently discussed in Charter, the business reporting section of Time. Read their analysis.

References

[1] Luca Vendraminelli et al., "The GenAI Wall Effect: Examining the Limits to Horizontal Expertise Transfer Between Occupational Insiders and Outsiders," 性视界 Business School Technology & Operations Mgt. Unit Working Paper No. 26-011 (September 8, 2025): 3.

[2] Vendraminelli et al., "The GenAI Wall Effect," 26.

[3] Vendraminelli et al., "The GenAI Wall Effect," 30.

Meet the Authors

Luca Vendraminelli is a Postdoctoral Researcher at the Digital Economy Lab and the Stanford Institute for Human-Centered Artificial Intelligence (HAI) at Stanford University.

is a PhD student in the Technology and Operations Management Unit at 性视界 Business School.

is an Assistant Professor in the Technology and Operations Management Unit at 性视界 Business School and Principal Investigator at the HBS AI Institute Data Science and AI Operations Lab hosted within the Laboratory for Innovation Science.

is an Assistant Professor at Stanford University in the Department of Management Science and Engineering.

is an Associate Professor of Business Administration at 性视界 Business School and Principal Investigator at the HBS AI Institute Data Science and AI Operations Lab hosted within the Laboratory for Innovation Science.

Larger, Faster, Cheaper: The Future of Market Research with AI (August 28, 2025)

As businesses continue to navigate the complexities of product development and innovation, generative AI has the potential to be a powerful new tool for market research. In their recent article for the 性视界 Business Review, "Using Gen AI for Early-Stage Market Research," Ayelet Israeli, co-founder of the Customer Intelligence Lab at the 性视界 Business School AI Institute, and her co-authors James Brand and Donald Ngwe explain their research on the possibilities and pitfalls of using LLMs to create synthetic customers.

Key Insight: The Power of Synthetic Customers

"Our research shows that LLMs, used carefully, can function as synthetic focus groups, producing early insights on customer preferences in a fraction of the time and cost of human studies." [1]

By combining LLMs with traditional research methods, companies have the opportunity to simulate consumer sentiments like willingness-to-pay (WTP) to make product innovation faster and cheaper. The authors' research shows that for simulated tests of categories like toothpaste and tablets, LLM-created synthetic customers produced realistic and accurate preferences for many familiar attributes. What's more, teams could explore dozens or even hundreds of ideas by using these synthetic consumers as an initial filter, overcoming traditional limitations in scope.

Key Insight: The Competitive Advantage of Proprietary Data

"[F]irms that build and fine-tune their own internal 'customer simulators' using LLMs and historical survey data can unlock sharper early-stage insights." [2]

While usage of LLMs out of the box showed promising results, companies that incorporated their own historical customer data achieved better results. For example, the authors noted that LLMs often rate novelty higher than actual humans do, and as a result synthetic customers were initially positive about pancake-flavored toothpaste. Fine-tuning the LLM with data from an actual study helped to correct this enthusiasm and produce WTP results more in line with actual human sentiment. The researchers found similar results when testing hypothetical features, like built-in projectors for laptops.
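
Mechanically, a synthetic-customer study amounts to prompting a model to adopt a respondent persona, asking the survey question, and aggregating over many simulated respondents. The sketch below is a generic illustration (the query_llm stub, persona template, and dollar parsing are my assumptions, not the authors' code):

```python
# Sketch of eliciting willingness-to-pay (WTP) from synthetic customers.
# query_llm is a stub standing in for any chat-model API call.

def query_llm(prompt: str) -> str:
    return "$3.50"  # stub; replace with a real model call

PERSONA = ("You are a {age}-year-old shopper choosing toothpaste. "
           "Answer with a single dollar amount, like $X.XX.")
QUESTION = ("What is the most you would pay for a toothpaste "
            "with fluoride and whitening?")

def synthetic_wtp(ages, n_per_age=50):
    """Collect simulated WTP answers across personas."""
    responses = []
    for age in ages:
        for _ in range(n_per_age):
            prompt = PERSONA.format(age=age) + "\n" + QUESTION
            answer = query_llm(prompt)
            responses.append((age, float(answer.strip().lstrip("$"))))
    return responses  # aggregate into a WTP distribution downstream

sample = synthetic_wtp(ages=[25, 45, 65], n_per_age=10)
print(len(sample), "synthetic responses")
```

Fine-tuning on historical survey data, as the authors recommend, would then calibrate these answers against real human responses before the simulator is trusted for screening.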

Key Insight: Strategic Integration, Not Replacement

“For anything beyond early-stage high-level trend detection, human research remains essential.” [3]

The most successful application of this technology comes from understanding it as an augmentation tool rather than a replacement for traditional research. Because LLMs are trained on static data, they may not reflect current market conditions without frequent updates and new data. Recognizing this allows companies to follow a new innovation roadmap: broaden the top of the innovation funnel by using AI, but keep the bottom narrow through sharper, more cautious human analysis.

Why This Matters

Synthetic customers might not totally replace human research, but they can dramatically enhance it. For business leaders and executives, this represents a fundamental shift in the speed and scope of innovation strategy. The ability to rapidly test multiple prototypes or concepts at low cost could mean faster time-to-market, reduced development risk, and more efficient resource allocation. Organizations that build internal AI-powered customer simulation capabilities could gain a significant competitive advantage from fine-tuning models with their proprietary data, creating a virtuous cycle where better data leads to better insights. At the same time, decision makers and marketing professionals must be vigilant to recognize and respond to the shortcomings of these new technologies and tools.

Bonus

Learn more about the authors' original research, and go a step further with the GenAI + Marketing Learning Module from the HBS AI Institute. You'll learn the basics of engaging an LLM, with broadly applicable and actionable techniques to create content, automate tasks, and revolutionize workflows. Then the program will take a deep dive to discover how AI can redefine your early-stage marketing research.

References

[1] James Brand et al., "Using Gen AI for Early-Stage Market Research," 性视界 Business Review, July 18, 2025.

[2] Brand et al., "Using Gen AI for Early-Stage Market Research."

[3] Brand et al., "Using Gen AI for Early-Stage Market Research."

Meet the Authors

James Brand is Principal Researcher and Economist in the Office of the Chief Economist at Microsoft.

Ayelet Israeli is the Marvin Bower Associate Professor of Business Administration at 性视界 Business School and co-founder of the Customer Intelligence Lab at the HBS AI Institute. She studies omni-channel and e-commerce markets, and her research focuses on data-driven marketing, with an emphasis on how businesses can leverage their own data, customer data, and market data to improve outcomes.

Donald Ngwe is Senior Director of Economics in the Office of the Chief Economist at Microsoft.

Getting Ahead of the Curve: Insights from 3 Years of the HBS AI Institute (August 14, 2025)

In the ever-evolving AI landscape, are you truly ready to integrate new technologies effectively, taking advantage of the radical opportunities they present for productivity increases and better operating models? Karim R. Lakhani, Dorothy and Michael Hintze Professor of Business Administration at 性视界 Business School and faculty chair and co-founder of the 性视界 Business School AI Institute (previously the Digital Data Design Institute at 性视界 (D^3)), recently shed light on three years of the institute's AI research findings and offered a practical toolkit for businesses and individuals in his talk for TEDxBoston.

Key Insight: Falling Asleep at the Wheel

"There are some things that AI is very good at and when you use it for that function, AI performs incredibly well and people get better. But when you use AI for the task where it's not good for, your performance drops and drops dramatically."

Karim R. Lakhani

One of the most striking findings Professor Lakhani mentioned came from the HBS AI Institute study with Boston Consulting Group (BCG). When used for tasks within its strengths, AI can catapult average performers to the 95th percentile, meaning that expertise is no longer scarce and businesses can be filled with entire teams of top performers. However, even high performers saw their results decline when AI was applied to tasks outside of its current capabilities, a phenomenon HBS postdoctoral researcher Fabrizio Dell'Acqua calls "Falling Asleep at the Wheel."

Key Insight: From Tool to Teammate to Boss

鈥淲hat we discovered in our study was that an individual using AI is as good as a team without AI.鈥

Karim R. Lakhani

An HBS AI Institute study with Procter & Gamble (P&G) showed that AI can help individuals and teams to produce higher quality ideas, "democratizing" expertise by leveling the playing field. Beyond productivity gains, AI functioned as a collaborative partner, providing balance across domains and enabling those with technical expertise to incorporate a commercial perspective into their innovation efforts, and vice versa for those with commercial expertise. What's more, organizations in the future may use AI agents to lead teams. As Lakhani mentioned, Uber already utilizes this operating model by putting algorithms in charge of HR decisions like hiring and firing.

Key Insight: Exponential Acceleration

"While the performance capabilities of AI models is increasing exponentially […] the absorption capability of most organizations is linear."

Karim R. Lakhani

The gap between the exponential pace of AI advancement and the linear pace at which most companies adopt and integrate these tools is widening, and smart executives will target it. Unlike previous technologies such as Wi-Fi or web browsers, which organizations could evaluate slowly, AI fundamentally changes the nature of work itself, and companies that fail to keep pace may find themselves behind competitors who successfully ride the AI wave.
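
To make the arithmetic of that widening gap concrete, here is a minimal sketch with purely illustrative numbers (the doubling rate and uptake increment are assumptions, not figures from the talk): when capability compounds and absorption grows by a fixed step, the distance between the two curves grows every year.

```python
# Purely illustrative numbers: capability compounds exponentially while
# organizational absorption grows by a fixed linear increment, so the
# gap between them widens each year.
capability, absorbed = 1.0, 1.0
for year in range(1, 6):
    capability *= 2.0   # assumed exponential improvement (2x per year)
    absorbed += 0.5     # assumed linear organizational uptake
    print(f"year {year}: capability={capability:.1f}, "
          f"absorbed={absorbed:.1f}, gap={capability - absorbed:.1f}")
```

After five years in this toy model, capability has grown 32-fold while absorption has only reached 3.5x its starting level; the exact numbers are arbitrary, but the divergence is the structural point Lakhani is making.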

Key Insight: The Playbook

Learn – Do – Imagine – Act

At the end of his talk, Lakhani outlined a strategic framework for leaders navigating the AI revolution. Learning requires continuously tracking AI's capabilities and impact and growing your own AI skill set. Doing means actually using AI tools; in particular, executives need to get their feet wet with AI rather than just delegating experimentation to their employees. Imagining involves conceiving the new operating models and workflows that AI can unlock. Acting requires driving organizational change to accommodate these new ways of working.

Bonus: In a recent article for the 性视界 Business Review, Lakhani and several co-authors added a fifth step to this playbook. Learn what it is here.

Why This Matters

For business leaders across industries, the HBS AI Institute's research underscores that AI is reshaping business fundamentals. Understanding AI's dual role as a democratizing force in expertise and an accelerating differentiator is crucial for future-proofing your organization, while knowing its strengths and weaknesses, fostering AI-augmented teamwork, and keeping pace with AI advancement are essential for maintaining a competitive edge. Embrace AI strategically, invest in continuous learning, and be prepared to transform your organization's approach to work.

About the Speaker


Karim R. Lakhani is the Dorothy & Michael Hintze Professor of Business Administration at 性视界 Business School. He specializes in technology management, innovation, digital transformation, and artificial intelligence. He is also the Co-Founder and Faculty Chair of the HBS AI Institute and the Founder and Co-Director of the Laboratory for Innovation Science at 性视界.

The post Getting Ahead of the Curve: Insights from 3 Years of the HBS AI Institute appeared first on 性视界 Business School AI Institute.

AI-Driven Optimization: Transforming Refugee Resettlement
On May 13, 2025, the HBS AI Institute (previously the Digital Data Design Institute at 性视界 (D^3)) held a university-wide Generative AI Symposium in partnership with the Office of the Vice Provost for Research, the Office of the Vice Provost for Advances in Learning, the Faculty of Arts and Sciences, the 性视界 John A. Paulson School of Engineering and Applied Sciences, and other university partners. This half-day event for 性视界 faculty, students, and staff focused on the impact of AI on research, teaching, operations, and innovative applications across professional schools and areas of practice.

In her session, Elisabeth Paulson, Assistant Professor of Business Administration and HBS AI Institute Associate, discussed the refugee relocation crisis, one of humanity's most pressing challenges. Although more than 30 million people worldwide need resettlement, refugee placement has mostly relied on manual processes and limited data, resulting in suboptimal outcomes for refugees and host communities. Paulson's research and talk focus on how AI and machine learning can be used to model and optimize placement decisions, helping to improve this critical humanitarian process.

Key Insight: The Challenge of Successful Refugee Placement

"[O]ver half of the refugees that are resettled to the US do not find employment within 90 days, at which point their benefits are phased out."

Elisabeth Paulson

In her presentation, Paulson highlighted that some locations have refugee employment rates of around 5%, while others are above 40%. Each location has a capacity limit, so simply relocating everyone to the locations with the highest employment rates is not possible, nor would doing so account for the successful cases found in every location. The overall low employment rate and the stark disparity across locations underscore the critical importance of initial placement decisions, which Paulson's research aims to improve with AI and machine learning.

Key Insight: Optimizing the Assignment Problem

"[I]f we can predict these match qualities or these likelihoods of finding employment, then we can use optimization to find the optimal assignment of people to places."

Elisabeth Paulson

A range of factors, such as gender and language proficiency, can affect whether a refugee will find employment, but the importance and predictability of these factors differ across placement locations, and the characteristics of refugee populations and host communities are constantly in flux. Additionally, resettlement officers must make placements one at a time (sequentially), without knowing the characteristics of future arrivals. Paulson explained how AI and machine learning can help on both fronts: by discovering synergies between individuals and the locations where they are likely to find work, and by using mathematical modeling to balance sequential decision-making against long-term scenario probabilities. Using these methods, Paulson reported that US employment rates can increase by about six percentage points, which translates into thousands more refugees successfully finding work after relocation.
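
To illustrate the optimization step, below is a minimal sketch of the batch version of this matching problem, assuming a model has already produced predicted employment probabilities and each location has a fixed number of open slots. The function name and toy numbers are hypothetical, and this is not GeoMatch's actual implementation, which must also handle arrivals sequentially.

```python
# A simplified, hypothetical sketch of capacity-constrained matching.
# probs[i, j] is a model's predicted probability that refugee i finds
# employment if placed in location j; capacities[j] is the number of
# open slots at location j.
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_placements(probs, capacities):
    # Expand each location into one column per open slot, turning the
    # capacity-constrained problem into a standard one-to-one assignment.
    slot_to_location = np.repeat(np.arange(len(capacities)), capacities)
    expanded = probs[:, slot_to_location]
    # linear_sum_assignment minimizes cost, so negate the probabilities
    # to maximize total expected employment instead.
    refugee_idx, slot_idx = linear_sum_assignment(-expanded)
    return {int(i): int(slot_to_location[s])
            for i, s in zip(refugee_idx, slot_idx)}

# Toy example: 4 refugees, 2 locations with 2 open slots each.
probs = np.array([[0.40, 0.10],
                  [0.35, 0.30],
                  [0.05, 0.25],
                  [0.20, 0.22]])
print(assign_placements(probs, capacities=[2, 2]))
# -> {0: 0, 1: 0, 2: 1, 3: 1}: refugees 0 and 1 go to location 0,
#    refugees 2 and 3 to location 1, maximizing expected employment.
```

In the sequential setting Paulson describes, the same objective must be optimized one arrival at a time, weighing each placement against forecasts of who is likely to arrive later.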

Key Insight: AI in Action through GeoMatch

"[A]ll of these ideas and tools that I just talked about are all incorporated into a software tool called GeoMatch."

Elisabeth Paulson

The practical application of this research has culminated in the development of GeoMatch, a tool housed at the Stanford Immigration Policy Lab, with pilots running in the US and Switzerland. GeoMatch streamlines, improves, and speeds up the decision-making process, producing recommendations in minutes rather than the hours required manually. The tool also maintains human oversight, allowing relocation officers to modify or overrule its recommendations. Paulson hopes that the technology and machine learning behind GeoMatch will prove useful in other regions around the world as well.

Why This Matters

For business leaders and executives, the application of AI in refugee resettlement offers valuable insights into the broader potential of AI for complex resource-allocation challenges. The methodology of personalized matching and strategic forecasting has parallels in customer segmentation, human capital allocation, and market-entry strategy. It also serves as a blueprint for implementing AI solutions that deliver both operational efficiency and strategic advantage, which is particularly relevant as organizations navigate increasingly complex global markets while managing constrained resources and uncertain environments.

Meet the Speaker


Elisabeth Paulson is an Assistant Professor of Business Administration in the Technology and Operations Management Unit at 性视界 Business School. Her research is in the area of operations for social good. In particular, she designs analytical methods and algorithms for allocating scarce resources efficiently and fairly to improve social outcomes. Much of her work draws on tools from optimization, machine learning, mathematical modeling, and statistics. She received her PhD in Operations Research from MIT.

The post AI-Driven Optimization: Transforming Refugee Resettlement appeared first on 性视界 Business School AI Institute.
