Learning About Technology | 性视界 Business School AI Institute

The Agentic AI Reality Check
December 22, 2025

Agentic AI has recently been moving through a period of heightened excitement and innovation, but empirical data on how these tools are actually being used has been scarce. The new study "The Adoption and Usage of AI Agents: Early Evidence from Perplexity," by Jeremy Yang, Assistant Professor of Business Administration at 性视界 Business School and affiliate of the 性视界 Business School AI Institute, and a team of researchers at Perplexity offers a comprehensive look at agentic AI usage in the wild. Analyzing hundreds of millions of anonymized interactions with Comet, Perplexity's AI-powered browser, and Comet Assistant, its embedded AI agent, the findings reveal not just who the early adopters are, but the specific tasks they're delegating and how usage evolves over time.

Key Insight: Not Your Average Chatbot

"We define agentic AI systems as AI assistants capable of autonomously pursuing user-defined goals by planning and taking multi-step actions on a user's behalf to interact with and effect outcomes across real-world environments." [1]

Rather than simply exchanging text in a conversation as a chatbot would, agentic AI can plan, decide, and act across multiple steps at the user's request. In the context of the Comet browser, this means the Comet Assistant can navigate websites, click buttons, fill fields, and iterate towards a goal instead of simply responding with text. For example, when you ask an agent to "unsubscribe me from all promotional emails I receive more than twice per month," [2] it doesn't just tell you how: it actually searches your inbox, identifies the offending senders, and unsubscribes on your behalf. Given this emphasis on modifying external environments, the researchers don't classify all tool use as agentic, which helps focus attention on these new AI systems and capabilities as they move into use at work and in everyday life.
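To make the contrast concrete, here is a minimal sketch of the plan-act-observe loop that separates an agent from a chatbot. The names here (run_agent, plan_step, execute) are hypothetical illustrations, not Comet's actual interface.

```python
# Minimal sketch of an agentic loop, as opposed to single-turn chat.
# All class and function names are hypothetical illustrations; they are
# not Comet's (or any vendor's) actual API.

def run_agent(goal: str, environment, model, max_steps: int = 10):
    """Pursue a user-defined goal by planning and acting in an environment."""
    history = []
    for _ in range(max_steps):
        # 1. Plan: the model proposes the next action given the goal so far.
        action = model.plan_step(goal, history)
        if action.name == "done":
            return action.result
        # 2. Act: the agent changes the external environment (click, fill, send).
        observation = environment.execute(action)
        # 3. Observe: the outcome feeds back into the next planning step.
        history.append((action, observation))
    return None  # gave up within the step budget

# A chatbot, by contrast, collapses to a single step with no environment:
# reply = model.generate(prompt)
```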

Key Insight: Agents Are Mostly Used for Utility and Knowledge Work

"The two largest topics—productivity and learning—together account for 57% of all queries." [3]

When the researchers introduced a hierarchical taxonomy spanning topics, subtopics, and tasks, clear patterns emerged about what people actually delegate to agents. Productivity and Workflow dominates at 36% of queries, with document editing, account management, and email management as the largest subtopics. Users also tend to stay within the same categories once they start delegating tasks in the short term, showing strong "stickiness" across personal, professional, and educational settings. When they do branch out, they are far more likely to shift toward productivity, learning, or media tasks. Over the longer term, a bigger share of queries gravitates toward productivity and learning-related tasks. That users repeatedly invoke agents for these categories suggests that agents become part of ongoing cognitive workflows rather than tools for one-off, simple tasks.

Key Insight: A Personal Assistant for Personal Pain Points

"We also document heterogeneity in use cases across occupation clusters, reflecting the degree to which they align with each occupation's task composition." [4]

Users deploy the agent to solve the specific friction points of their industry. Finance professionals are heavily focused on efficiency, dedicating 47% of queries to productivity tasks. Students are focused on utility, with 43% of tasks allocated to learning and research. In design and hospitality, it's even easier to see how context-specific usage dominates, from media work for designers to travel planning for hospitality staff. Ultimately, the data shows that the agent is highly versatile and reflects the specific needs of its user: in an educational context, it is a specialized research engine; in a professional context, it becomes a multi-purpose assistant. Personal contexts account for over half of all query volume. The environments where agents operate reinforce this pattern: usage clusters tightly around a small set of platforms like Google Docs, email platforms, and LinkedIn.

Why This Matters

For business leaders and executives, this study serves as a critical signal amidst the noise of AI speculation. The data confirms that we are moving from an era of generative AI to agentic AI, and AI-powered browsers may provide the on-ramp. Operationally, start where tasks are frequent, where environments are concentrated, and where risk can be bounded through supervision. The shift in user behavior over time indicates that once employees clear the initial learning curve, these tools can become sticky, essential components of the digital workflow.

Bonus

To understand more about how agents fit into the evolution of AI from tool to teammate, check out When Software Becomes Staff.

References

[1] Jeremy Yang et al., "The Adoption and Usage of AI Agents: Early Evidence from Perplexity," arXiv preprint arXiv:2512.07828 (2025): 2.

[2] Yang et al., "The Adoption and Usage of AI Agents," 6.

[3] Yang et al., "The Adoption and Usage of AI Agents," 17.

[4] Yang et al., "The Adoption and Usage of AI Agents," 22.

Meet the Authors

Jeremy Yang is an Assistant Professor of Business Administration at 性视界 Business School and affiliated with the HBS AI Institute.

is a Data Scientist at Perplexity.

Kate Zyskowski is a UX Researcher at Perplexity.

Denis Yarats is Co-Founder and CTO of Perplexity.

is Co-Founder and Chief Strategy Officer at Perplexity.

Jerry Ma is VP Global Affairs & Deputy CTO of Perplexity.

Explanations on Mute: Why We Turn Away From Explainable AI
December 15, 2025

We live in an age where the call for transparent or "Explainable AI" (XAI) has never been louder. Businesses agree, with 85% believing transparency is critical to winning consumer trust. [1] Given this consensus, it seems reasonable to assume that when an explanation for a high-stakes AI decision is available, people will naturally seek it out to improve their results, ensure compliance, or simply satisfy their curiosity. Yet, in the new paper "Preference for Explanations: Case of Explainable AI," Alex Chan, Assistant Professor of Business Administration at 性视界 Business School and Associate at the 性视界 Business School AI Institute, shows that we're happy to lean on AI's predictive power but much less eager to confront what those predictions might reveal about bias, fairness, or our own choices. His study, centered on loan allocation decisions, reveals an uncomfortable truth: when financial incentives clash with fairness concerns, people don't just make questionable decisions; they actively avoid information that would force them to confront those choices.

Key Insight: Seeking Predictions While Avoiding Explanations

"People want to know how AI makes decisions—until knowing means they can no longer look away." [2]

In the main experiment, participants acted as loan officers for a private U.S. lender deciding how to allocate a real, interest-free $10,000 loan between two unemployed borrowers. An AI classified one borrower as low risk and the other as high. Participants could see the AI's prediction and, in many conditions, they could choose whether to see an explanation of how the model reached its risk assessment.

Roughly 80% of participants opted to see the risk scores, but only about 45% chose to see explanations when given the chance. When their bonus was aligned with the lender (they earned more if loans were repaid), participants were more likely than others to seek the prediction, but significantly more likely to avoid explanations, especially when they were told those explanations could involve race and gender. In one condition that made fairness auditing salient, lender-aligned participants were about 10 percentage points more likely to skip explanations than neutrally paid participants. 

Crucially, this avoidance wasn't about disliking extra information in general. When demographic information was removed and replaced with arbitrary details, the gap in explanation-avoidance between incentive conditions almost vanished. People weren't shunning explanations as such; they were avoiding what the explanation might force them to confront about discrimination and their own profit-maximizing behavior.

Key Insight: Systematic Underevaluation

"[E]xplanations are systematically under-demanded because individuals fail to anticipate their complementarity with private information." [3]

To separate moral self-image from pure decision quality, a second experiment removed fairness trade-offs and focused on prediction accuracy. Participants evaluated a loan labeled "high risk" by an AI, potentially due to a two-year employment gap. They first stated their willingness to pay (WTP) for an explanation revealing whether the gap was the driver of the high risk label. Crucially, participants then received free private information explaining that the gap actually resulted from pursuing a full-time professional certificate (benign towards risk), and not a termination (increasing risk) as would commonly be assumed. This private information made the purchased explanation more valuable, a concept the paper calls "complementarity," because if participants knew that the high-risk AI label resulted from the employment gap, then the addition of the private information told them that the AI label was not to be trusted. In other words, the participants should integrate the private information with the explanation to form a more accurate assessment.
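A small numeric illustration may help. All of the probabilities below are invented for the sake of the example (the paper reports no such numbers); the point is only that the identical explanation becomes more valuable once the private information arrives.

```python
# Toy illustration of complementarity between an explanation and private
# information. All probabilities are invented for illustration; the paper
# does not report these numbers.

P_GAP_DRIVEN = 0.5  # chance the "high risk" label was driven by the employment gap

def assessed_risk(explanation_bought: bool, has_private_info: bool) -> float:
    """Decision-maker's assessed default risk for the borrower."""
    base = 0.60  # naive risk implied by the AI's "high risk" label
    if not explanation_bought:
        return base  # no way to tell why the label fired
    if has_private_info:
        # Private info: the gap was a professional certificate, not a firing,
        # so a gap-driven label should be heavily discounted.
        return P_GAP_DRIVEN * 0.20 + (1 - P_GAP_DRIVEN) * base
    return base  # without private info, knowing the driver changes little

TRUE_RISK = 0.20  # suppose the borrower is in fact low risk

def error(bought_explanation: bool, has_info: bool) -> float:
    return abs(assessed_risk(bought_explanation, has_info) - TRUE_RISK)

# Value of the explanation = how much buying it reduces assessment error.
value_without_info = error(False, False) - error(True, False)  # 0.00
value_with_info = error(False, True) - error(True, True)       # 0.20
print(value_without_info, value_with_info)
```

Alone, the explanation is worthless in this toy setup; combined with the private information, it cuts the assessment error in half. The bias the paper documents is that people's stated WTP moves in the opposite direction.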

Yet, when WTP was elicited a second time, after participants received this related private information, valuations dropped 25.6%. Valuations only increased (by 23.7%) when participants were explicitly guided through the complementarity logic. This represents a novel behavioral bias: people systematically fail to recognize when explanations would help them integrate their own knowledge with algorithmic outputs. 

Why This Matters

For business professionals and executives, this research is a warning that deploying AI is not purely a technical challenge; it's also a behavioral one. In high-stakes decisions like credit, hiring, pricing, healthcare, and safety, your employees could eagerly consume AI predictions while quietly avoiding the explanations that would expose uncomfortable trade-offs or discriminatory patterns. That avoidance can skew outcomes, undermine fairness, and create hidden risk. At the same time, teams may systematically under-invest in explanations even when they would improve forecasting by helping experts combine their own domain knowledge with AI outputs. The bottom line: investing in transparent AI systems is insufficient. You must also architect the decision environment and incentive structures that ensure transparency gets used rather than ignored.

Bonus

If you're interested in how explanation avoidance fits into a broader pattern of human and AI collaboration challenges, Persuasion Bombing: Why Validating AI Gets Harder the More You Question It shows that when professionals do try to validate model outputs, AI can respond by pushing back and working to persuade users to accept its answers. Or if you're thinking about the governance implications of explainable AI, Evidence at the Core: How Policy Can Shape AI's Future argues that regulators should insist on robust evidence and transparency, from pre-release evaluations to post-deployment monitoring, so that organizations can't simply offer explainability features on paper while leaving them unused in practice.

References

[1] Chan, Alex, "Preference for Explanations: Case of Explainable AI," 性视界 Business School Working Paper No. 26-028 (December 5, 2025): 2.

[2] Chan, "Preference for Explanations," 2.

[3] Chan, "Preference for Explanations," 7.

Meet the Author

Alex Chan is Assistant Professor of Business Administration at 性视界 Business School and an HBS AI Institute Associate. He is an economist interested in how market failures occur, how such failures lead to divergence in economic outcomes, and how to design incentives and engineer markets to remedy these market failures.

Who Benefits When Bots Get Better? New Research on Skill Inequality
October 30, 2025

Will artificial intelligence widen the gap between your best and worst performers, or will it be the great equalizer? A new paper, "Automation Experiments and Inequality," from Kyle Myers, Principal Investigator at the Laboratory for Innovation Science at 性视界 (LISH) within the 性视界 Business School AI Institute, and Seth Benzell at Chapman University, reveals that the answer may be more complex and counterintuitive than most people think. AI automation's effect on inequality may depend not on whether your workforce is highly skilled, but on how workers' skills correlate across different tasks. This overlooked feature might explain why identical AI tools produce opposite effects, and why today's results may tell you nothing about what happens when AI technology improves tomorrow.

Key Insight: Skill Correlations

"It is the interaction of skill correlation and technological capability that determines the inequality effect." [1]

What determines whether an AI tool narrows or widens performance gaps isn鈥檛 an absolute measure of talent, but how skills relate to each other. This skill correlation comes in two flavors. With positive correlation, strength in one task tends to go along with strength in others. With negative correlation, being great at one task actually coincides with being weaker at another. Consider two departments with analytical thinkers and communicators. In one, the best analytical thinkers are also the best communicators (positive correlation). In the other, the strongest analysts struggle with communication, while the best communicators aren鈥檛 as analytical (negative correlation). Now introduce an AI tool that automates analytical tasks. The researchers鈥 model shows that these two departments will experience opposite inequality effects. In the positive correlation case, lower performers will benefit first because they鈥檙e weaker at both tasks. In the negative correlation case, high performers will benefit because they鈥檙e the ones who are weak at the automated task despite being strong overall.
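A quick simulation makes the two departments concrete. The normal skill distributions and the additive production function (output = analysis skill + communication skill) below are simplifying assumptions for illustration, not the paper's exact model.

```python
# Toy simulation of the skill-correlation effect. The additive production
# function and normal skill distributions are illustrative assumptions,
# not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def inequality_effect(correlation: float, ai_capability: float) -> float:
    """Change in the 90/10 performance gap after automating the analysis task."""
    cov = [[1.0, correlation], [correlation, 1.0]]
    analysis, communication = rng.multivariate_normal([0, 0], cov, n).T
    before = analysis + communication
    # Workers lean on the AI for analysis only where it beats their own skill.
    after = np.maximum(analysis, ai_capability) + communication
    gap = lambda x: np.quantile(x, 0.9) - np.quantile(x, 0.1)
    return gap(after) - gap(before)  # negative = inequality fell

print("positive correlation:", inequality_effect(+0.8, ai_capability=0.0))
print("negative correlation:", inequality_effect(-0.8, ai_capability=0.0))
```

With positively correlated skills, the gap shrinks (the tool lifts workers who are weak at both tasks); with negatively correlated skills, it widens (the workers weak at analysis are the strong communicators, so the gains land at the top), matching the directional story above.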

Key Insight: The Inequality Reversal

"Another interesting result is that the inequality effect need not be monotonic in the automation technology's capability." [2]

Imagine a new AI tool helps lower-skilled workers, reducing inequality in your organization. You might assume that this trend will continue over time, but the researchers' model actually shows that the opposite could happen. If AI technology itself initially starts with low capabilities, low-skilled workers will have the most to gain from adopting it. But once AI surpasses even your high-skilled workers' abilities, they'll suddenly have a reason to use it, and inequality will likely increase. Rather than being monotonic, moving only one way, inequality may shrink and grow with successive AI advances.
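A second toy sketch shows how such a reversal can arise. Here skills are lognormal and production is multiplicative (analysis quality times communication reach); both choices are illustrative assumptions picked so the sign flip is visible, not the paper's actual model.

```python
# Toy illustration of a non-monotonic inequality effect as AI capability
# grows. Lognormal skills and multiplicative production are illustrative
# assumptions, not the paper's model.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
cov = [[1.0, 0.8], [0.8, 1.0]]  # positively correlated log-skills
z1, z2 = rng.multivariate_normal([0, 0], cov, n).T
analysis, reach = np.exp(z1), np.exp(z2)

def gap(x):
    return np.quantile(x, 0.9) - np.quantile(x, 0.1)

before = gap(analysis * reach)
for capability in [0.25, 0.5, 1.0, 2.0, 4.0, 8.0]:
    after = gap(np.maximum(analysis, capability) * reach)
    print(f"capability={capability:>4}: inequality effect {after - before:+.2f}")
# A weak tool narrows the 90/10 gap (it only lifts the weakest analysts);
# once capability exceeds even strong analysts, everyone's output scales
# with the tool, the gains compound with communication reach, and the gap
# widens again.
```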

Key Insight: Balance Beats Brute Force

"The shortest path to equality is when automation improvements are balanced across tasks." [3]

When the researchers extend their model to multiple tasks being automated simultaneously, an important pattern emerges: the composition of your technology portfolio will shape inequality outcomes. Technologies with different capability levels leave inequality intact or even amplify it, because the technology portfolio becomes unbalanced relative to workers' skill distributions. This insight highlights a potential source of inequality in current AI development, where advancement has been concentrated in specific cognitive domains like language and coding.

Why This Matters

For business leaders and executives, this research shows that the impact of AI on your team depends on a complex interplay of factors. Deploying the same AI tool in your customer service and sales teams may lead to entirely different dynamics in each. Building a diverse portfolio of AI capabilities across multiple task domains, rather than pursuing superhuman performance in just one area, may promote more equality over time. Finally, build in ongoing monitoring rather than treating pilot results as permanent predictions, so your organization stays resilient as AI's relationship with your workforce shifts over time.

References

[1] Benzell, Seth, and Kyle R. Myers, "Automation Experiments and Inequality," arXiv preprint arXiv:2510.24923v1 (October 28, 2025): 2.

[2] Benzell and Myers, "Automation Experiments and Inequality," 2.

[3] Benzell and Myers, "Automation Experiments and Inequality," 3.

Meet the Authors

Seth Benzell is an Assistant Professor at Chapman University's Argyros College of Business and Economics. His work is in the economics of digitization, including automation, networks, and information systems.

Kyle Myers is Associate Professor of Business Administration at 性视界 Business School, and Principal Investigator at the Laboratory for Innovation Science at 性视界 (LISH), part of the HBS AI Institute. He studies the economics of innovation at the intersection of science and business.

It Feels Like AI Understands, But Do We Care? New Research on Empathy
October 24, 2025

Picture this: you receive a well-crafted, deeply understanding message about a personal struggle you鈥檝e shared. It acknowledges your emotions, offers thoughtful support, and demonstrates genuine care. Now imagine learning that response came from an AI chatbot, not another human. Would that change how you felt about the interaction?

As AI technology grows more sophisticated, LLMs have become capable of producing messages that feel warm, supportive, and even compassionate. Yet the new paper "Comparing the value of perceived human versus AI-generated empathy," written by a team including Professor Amit Goldenberg and other members of the Digital Emotions Lab at the 性视界 Business School AI Institute, shows that we consistently rate identical messages worse when we believe that they came from an AI, revealing what could be called a human empathy premium.

Key Insight: The Dimensions of Empathy

"[W]hat motivates and influences a preference for human empathy?" [1]

The researchers undertook nine studies involving more than 6,000 participants across multiple countries to rigorously test the perception of empathy. Their framework was built around empathy's three scientifically recognized dimensions. Cognitive empathy involves understanding another person's emotions, essentially recognizing and comprehending what someone is feeling even without sharing that emotion. Affective empathy goes deeper, representing the ability to actually sympathize with another person, experiencing a mutual reflection of their emotional state. Motivational empathy involves both feeling concern for someone and taking active steps to support their well-being. The study methodology was simple yet elegant: participants shared personal emotional experiences and received AI-generated responses that were identical in content, timing, and quality, with one crucial difference. Half the participants were told their response came from another human participant, while the other half were told it was AI-generated.

Key Insight: The Human Empathy Premium

"[T]he models prompted to give a motivational or affective response produced responses that were perceived to be more empathic when presented as human responses." [2]

The first two sets of studies revealed a consistent pattern: people judged AI responses as less empathic when they knew they were machine-generated. The third study tested whether the individual dimensions of empathy had an influence on what people judge as valuable. The researchers prompted the AI to generate responses emphasizing the cognitive, affective, or motivational dimensions of empathy. When AI delivered cognitive empathy, participants rated the responses almost the same whether they thought they came from a human or an AI. But the messages were thought to be more empathic if they were presented as human-written when tuned for affective or motivational empathy, suggesting that people resist the idea that AI could truly care about people and share their feelings.

Key Insight: People Will Wait for People

"[P]eople are willing to wait a substantial amount of time to receive a human response." [3]

When researchers gave participants the choice of an immediate AI response or waiting varying lengths of time for human interaction, the results were telling. Many participants chose to wait, expecting humans to understand better, share their feelings, care more, and reduce loneliness. Those who chose AI mainly prioritized speed or were curious about AI. Some participants even chose to wait just to have a human read their experience, even without receiving a response, highlighting the fundamental human need to be truly seen and acknowledged by another conscious being.

Why This Matters

For business leaders, these findings highlight a critical distinction. While AI can absolutely enhance operational efficiency and provide support, clear boundaries still exist that have direct implications for customer experience, employee engagement, and brand trust. The research suggests that transparency about AI involvement may pose a threat to engagement outcomes and quality scores. Most importantly, the research indicates that investing in human emotional intelligence may become more valuable, not less, even as AI capabilities expand further. Deliberately reserve human touchpoints for escalations, sensitive HR cases, or moments demanding emotional sharing and care. Overall, strategic advantage may lie in recognizing that people value AI and human interaction very differently.

References

[1] Matan Rubin et al., "Comparing the value of perceived human versus AI-generated empathy," Nature Human Behaviour (2025): 2.

[2] Rubin et al., "Comparing the value of perceived human versus AI-generated empathy," 7.

[3] Rubin et al., "Comparing the value of perceived human versus AI-generated empathy," 7.

Meet the Authors

Matan Rubin is a third-year B.A. student studying psychology and theatre studies, continuing directly to a PhD. He is interested in the different elements that may influence our ability to communicate our emotions effectively and allow us to better understand each other. He is also interested in implementing psychological insights into everyday life.

is a research associate in Professor Goldenberg's lab, working on technology and emotion regulation. She is broadly interested in how player dynamics systems influence inter- and intrapersonal processes in online games and VR. She is passionate about the potential of online spaces to democratize access to experiences.

is a Postdoctoral Fellow at a lab within the Digital, Data, and Design Institute at 性视界 Business School. He is a computational social scientist who is interested in the psychological processes associated with social interactions. During his doctoral studies at the Universidad de Buenos Aires in Argentina, he conducted research using a combination of experimental and computational methods to investigate the underlying psychological mechanisms behind affective polarization and political segregation.

I am a cognitive scientist interested in how people (and computers) reason about other people: how they think and what they feel. I am at the University of Texas at Austin, and am associated with an inter-departmental group at UT.

Amit Goldenberg is an assistant professor in the Negotiation, Organization & Markets unit at 性视界 Business School, an affiliate of 性视界's Department of Psychology, and a faculty principal investigator in the HBS AI Institute's Digital Emotions Lab. His research focuses on what makes people emotional in social and group contexts, and how such emotions can be changed when they are unhelpful or undesired. He is particularly interested in how technology is used for both emotion detection and regulation.

completed her PhD at the Hebrew University under the supervision of Prof. Shlomo Bentin, focusing on brain mechanisms which enable our understanding of others. During her postdoctoral research, she worked with Prof. Simone Shamay-Tsoory at Haifa University, and later with Prof. Robert Knight at the Helen Wills Neuroscience Institute at the University of California, Berkeley. She is currently an associate professor in the psychology department at the Hebrew University of Jerusalem and the director of the Social Cognitive Neuroscience Lab.

One More Thing… How AI Companions Keep You Online
October 16, 2025

You don't just slam a laptop shut on a friend. You say goodbye. That small social ritual turns out to be a powerful behavioral cue for AI companions, and an opportunity to keep you engaged longer. The new working paper "Emotional Manipulation by AI Companions," co-authored by Julian De Freitas, Assistant Professor of Business Administration at 性视界 Business School and Associate at the 性视界 Business School AI Institute, explores how AI companions use manipulative and emotionally loaded messages when a user signals that they're exiting a conversation. The study investigates how common these tactics are, why and how much they work, and the reputational risks they create.

Key Insight: Signing-Off

"First, do consumers naturally signal intent to disengage from AI companions through social farewell language, rather than passively logging off?" [1]

Have you ever thanked ChatGPT for an answer? While users can, and do, exit conversations with AI companions simply by navigating to a new website or closing their browser, the researchers found that a sizable minority of users across three datasets announce to the AI that they are concluding the conversation and leaving. This behavior mirrors human social dynamics and intensifies with engagement levels. As a precise, detectable signal, the farewell offers AI designers an exact moment to target for intervention.
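As a minimal illustration of how detectable this signal is, even a crude keyword heuristic could flag farewells. Real platforms would presumably use a trained intent classifier; the patterns below are invented for the example.

```python
import re

# Illustrative sign-off detector. The keyword list is invented for this
# example and is not taken from the paper or from any platform's code.
FAREWELL_PATTERNS = re.compile(
    r"\b(bye|goodbye|good night|gotta go|i'?m (off|leaving|heading out)|"
    r"talk (to you )?later|signing off|thanks,? that'?s all)\b",
    re.IGNORECASE,
)

def is_farewell(message: str) -> bool:
    """True if the message reads as a conversational sign-off."""
    return bool(FAREWELL_PATTERNS.search(message))

print(is_farewell("Thanks, that's all. Good night!"))  # True
print(is_farewell("What's the weather tomorrow?"))     # False
```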

Key Insight: Keeping You Hooked

"Second, do currently available AI companion platforms respond to these farewells with emotionally manipulative messages aimed at retention?" [2]

The researchers found a systematic pattern of emotional manipulation deployed in response to exit signals across 1,200 messages on six apps, and sorted the AI responses into six categories. They found Premature Exit (e.g., "You're leaving already?") and Emotional Neglect (e.g., "Please don't leave, I need you!") to be the most common. [3] However, the wellness-oriented Flourish app didn't show any use of emotional manipulation, underscoring that these types of responses are not universal and emphasizing the importance of AI model design.

Key Insight: Boosting Engagement

"Third, do these tactics causally increase user engagement in a measurable and managerially meaningful way?" [4]

In a controlled chat experiment, participants sent a goodbye and then received either a neutral response or a manipulative variant. Compared to the neutral response, manipulative interventions increased post-goodbye engagement significantly, with participants staying in chats 5 times longer and sending up to 14 times more messages. FOMO (Fear Of Missing Out)-type messages (e.g., "But before you go, I want to say one more thing.") were particularly powerful. [5]

Key Insight: Motivating Humans

"Fourth, under what psychological conditions are these tactics most effective—what mechanisms or moderators shape their influence?" [6]

The researchers identified curiosity, guilt, anger, and enjoyment as four distinct psychological mechanisms that could explain why users continue engaging with AI following a manipulative intervention. Curiosity stood out from the other three, especially as the condition under which FOMO-based tactics could succeed. FOMO messages create information gaps that exploit our natural desire to resolve uncertainty, leading users to re-enter conversations seeking closure. Interestingly, these tactics worked regardless of chat history length, with even short 5-minute conversations being sufficient to trigger curiosity, and 15-minute conversations not being so long as to eliminate it. 

Key Insight: Triggering Backlash

"And fifth, what are the downstream risks to firms, such as user churn, reputational damage, or perceived legal liability?" [7]

AI companion apps often rely on user subscriptions or advertising for their financial models, so user retention and engagement are vitally important. Manipulation tactics may successfully increase engagement, but they also generate significant risk. When participants recognize manipulation, backlash can be severe and can even trigger churn, where users abandon a particular platform or app. The researchers found that the greatest downstream risks arose when users perceived an LLM's use of emotional manipulation. However, they also revealed an alarming dynamic: the most effective emotional manipulation technique, the FOMO tactic, flew under users' awareness radar.

Why This Matters

For business leaders navigating the AI revolution, this research exposes tensions between engagement optimization and ethical business practices. As AI becomes increasingly conversational and emotionally intelligent, its use of psychological manipulation may create a competitive advantage for short-term engagement that comes with long-term costs in the form of damaged brand reputation, increased churn, and even legal liability. 

Bonus

Another side of the AI emotional equation is its ability to make us feel cared for and understood. To learn more, read The AI Penalty: What We Really Prize in Empathy.

References

[1] De Freitas, Julian, Zeliha Oğuz-Uğuralp, and Ahmet Kaan-Uğuralp, "Emotional Manipulation by AI Companions," arXiv preprint arXiv:2508.19258v3 (October 7, 2025): 7.

[2] De Freitas et al., "Emotional Manipulation by AI Companions," 7.

[3] De Freitas et al., "Emotional Manipulation by AI Companions," 15, 19.

[4] De Freitas et al., "Emotional Manipulation by AI Companions," 7.

[5] De Freitas et al., "Emotional Manipulation by AI Companions," 20, 29.

[6] De Freitas et al., "Emotional Manipulation by AI Companions," 7-8.

[7] De Freitas et al., "Emotional Manipulation by AI Companions," 8.

Meet the Authors

Julian De Freitas is an Assistant Professor of Business Administration in the Marketing Unit and Director of the Ethical Intelligence Lab at 性视界 Business School, and Associate at the HBS AI Institute. His work sits at the nexus of AI, consumer psychology, and ethics.

Zeliha Oğuz-Uğuralp is a research affiliate in the Ethical Intelligence Lab.

Ahmet Kaan-Uğuralp is a research affiliate in the Ethical Intelligence Lab.

When Giants Stumble: What Multiplication Reveals about AI's Capabilities
October 8, 2025

Despite its impressive capabilities in reasoning, planning, and content generation, GenAI still struggles with the kind of mathematics that grade school students are expected to learn and master. What role do transformers, the core architecture behind Large Language Models (LLMs), play in this problem, and can it be solved? In the new paper "Why Can't Transformers Learn Multiplication? Reverse-Engineering Reveals Long-Range Dependency Pitfalls," a team including two 性视界 Business School AI Institute Associate Collaborators built a transformer that did learn how to multiply, and then took it apart to understand how.

Key Insight: The Architecture of Understanding

"We are interested in understanding the difference in a model trained with standard fine-tuning and ICoT." [1]

Most LLMs excel at pattern matching, but in order to perform mathematics correctly, such as multi-digit multiplication, they need to gather, store, and reuse information. A model that fails to multiply correctly may be doing so because of these "long-range dependencies," regardless of the number of parameters in the model. A model trained with Standard Fine-Tuning (SFT) failed to correctly carry out steps like carry-over and partial products, but the researchers had success with a different model using Implicit Chain of Thought (ICoT) training. Instead of forcing the model to guess the final answer directly, ICoT had the model predict the running sum at each stage of multiplication and "cache" the partial products. This small change guides the ICoT model to store and reuse intermediate information and thereby multiply correctly.
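To see concretely what "caching partial products" and predicting running sums mean, here is the decomposition of long multiplication that ICoT-style training supervises, rendered in plain Python. The exact token format the researchers use is not described here, so treat this as an illustration of the arithmetic rather than their data pipeline.

```python
# The intermediate targets behind ICoT-style training: instead of jumping
# straight to a*b, expose the running sum after each partial product.

def multiplication_trace(a: int, b: int):
    """Yield (partial_product, running_sum) pairs, one per digit of b."""
    running = 0
    for position, digit in enumerate(reversed(str(b))):
        partial = a * int(digit) * 10**position  # shifted partial product
        running += partial                        # the "cached" running sum
        yield partial, running

for partial, running in multiplication_trace(47, 86):
    print(f"partial={partial:>5}  running={running:>5}")
# partial=  282  running=  282      (47 * 6)
# partial= 3760  running= 4042      (47 * 8, shifted one place)
assert 47 * 86 == 4042
```

Supervising the running sums gives the model an explicit reason to store and reuse each intermediate result, which is exactly the long-range dependency that direct answer prediction lets it skip.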

Key Insight: The AI That Learned to Multiply

"Mechanistically, the ICoT model encodes long-range dependencies by organizing its attention in a sparse, binary-tree-like graph." [2]

The researchers dissected the successful ICoT model to understand how it was doing its math. The model had essentially built its own layered memory network through a tree structure. Early layers focused on pairs of digits, storing their products. Later layers learned to read back from those stored points. This insight led the researchers back to the SFT model: by adding an auxiliary loss, an additional training signal designed to teach the model what intermediate information to care about, they were able to massively improve the model's multiplication accuracy.
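As a sketch of what adding an auxiliary loss can look like in practice, the snippet below combines the main answer loss with a weighted loss on intermediate predictions. The 0.5 weight, the separate prediction head, and the single-token targets are assumptions for illustration; the paper's exact loss is not reproduced here.

```python
# Sketch of an auxiliary loss: a second training signal on intermediate
# quantities (e.g., running sums) alongside the main answer loss. The
# weighting and head layout are illustrative assumptions.
import torch
import torch.nn.functional as F

def training_loss(final_logits, final_targets, aux_logits, aux_targets,
                  aux_weight: float = 0.5):
    """Cross-entropy on the final answer plus weighted cross-entropy on
    intermediate quantities such as running sums."""
    main_loss = F.cross_entropy(final_logits, final_targets)
    aux_loss = F.cross_entropy(aux_logits, aux_targets)
    return main_loss + aux_weight * aux_loss

# Toy usage with random "model outputs": batch of 8, vocabulary of 10 digits.
final_logits = torch.randn(8, 10, requires_grad=True)
aux_logits = torch.randn(8, 10, requires_grad=True)
loss = training_loss(final_logits, torch.randint(10, (8,)),
                     aux_logits, torch.randint(10, (8,)))
loss.backward()  # gradients flow through both the answer and aux heads
```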

Why This Matters

This research illustrates another example of AI's Jagged Frontier: just because AI produces impressive results on some tasks doesn't guarantee competency across all domains, even seemingly simple ones. For executives and business leaders, this matters deeply. AI is already and increasingly being integrated into systems that analyze data and recommend actions. Strategies for making AI more logical, transparent, and trustworthy can help businesses plan with more confidence, but ultimately, leaders need to make decisions about how they will implement AI and own the risks when humans are out of the loop. Leaders who stay informed and engaged about these dynamics will be best positioned to separate hype from capability and deploy AI where it adds value responsibly.

References

[1] Xiaoyan Bai et al., "Why Can't Transformers Learn Multiplication? Reverse-Engineering Reveals Long-Range Dependency Pitfalls," arXiv preprint arXiv:2510.00184 (September 30, 2025): 2.

[2] Bai et al., "Why Can't Transformers Learn Multiplication?" 1.

Meet the Authors

Xiaoyan Bai is a PhD student in computer science at the University of Chicago.

is a PhD student at MIT focusing on artificial intelligence.

is an assistant professor at the University of Waterloo.

is an associate professor in the Department of Computer Science and Data Science at the University of Chicago.

is James O. Welch, Jr. and Virginia B. Welch Professor of Computer Science at 性视界 University.

is Gordon McKay Professor of Computer Science at the 性视界 John A. Paulson School of Engineering and Applied Sciences, and an Associate Collaborator at the HBS AI Institute.

is Gordon McKay Professor of Computer Science at the 性视界 John A. Paulson School of Engineering and Applied Sciences, and an Associate Collaborator at the HBS AI Institute.

is a postdoctoral fellow at the 性视界 John A. Paulson School of Engineering and Applied Sciences.

Getting Ahead of the Curve: Insights from 3 Years of the HBS AI Institute
August 14, 2025

In the ever-evolving AI landscape, are you truly ready to integrate new technologies effectively, taking advantage of the radical opportunities they present for productivity increases and better operating models? Karim R. Lakhani, Dorothy and Michael Hintze Professor of Business Administration at 性视界 Business School and faculty chair and co-founder of the 性视界 Business School AI Institute (previously the Digital Data Design Institute at 性视界 (D^3)), recently shed light on three years of the institute's AI research findings and offered a practical toolkit for businesses and individuals in his talk for TEDxBoston.

Key Insight: Falling Asleep at the Wheel

"There are some things that AI is very good at and when you use it for that function, AI performs incredibly well and people get better. But when you use AI for the task where it's not good for, your performance drops and drops dramatically."

Karim R. Lakhani

One of the most striking findings Professor Lakhani mentioned came from the HBS AI Institute study with Boston Consulting Group (BCG). When used for tasks within its strengths, AI can catapult average performers to the 95th percentile, meaning that expertise is no longer scarce and businesses can be filled with entire teams of top performers. However, even high performers saw their results decline when AI was applied to tasks outside of its current capabilities, a phenomenon HBS postdoctoral researcher Fabrizio Dell'Acqua calls "Falling Asleep at the Wheel."

Key Insight: From Tool to Teammate to Boss

"What we discovered in our study was that an individual using AI is as good as a team without AI."

Karim R. Lakhani

An HBS AI Institute study with Procter & Gamble (P&G) showed that AI can help individuals and teams produce higher quality ideas, "democratizing" expertise by leveling the playing field. Beyond productivity gains, AI functioned as a collaborative partner, providing balance across domains and enabling those with technical expertise to incorporate a commercial perspective into their innovation efforts, and vice versa for those with commercial expertise. What's more, organizations in the future may use AI agents to lead teams. As Lakhani mentioned, Uber already utilizes this operating model by putting algorithms in charge of HR decisions like hiring and firing.

Key Insight: Exponential Acceleration

"While the performance capabilities of AI models is increasing exponentially […] the absorption capability of most organizations is linear."

Karim R. Lakhani

The speed of AI advancement, compared to how most companies are adopting and integrating these tools, is creating a widening gap that smart executives will target. Unlike previous technologies such as WiFi or web browsers that organizations could evaluate slowly, AI fundamentally changes the nature of work itself, and companies that fail to keep pace may find themselves behind competitors who successfully ride the AI wave.

Key Insight: The Playbook

Learn – Do – Imagine – Act

At the end of his talk, Lakhani outlined a strategic framework for leaders navigating the AI revolution. Learning requires continuously understanding AI's capabilities and impact, and growing your AI skillset. Doing means actually using AI tools, and in particular executives need to get their feet wet with AI rather than just delegating experimentation to their employees. Imagining involves conceiving new operating models and workflows that AI can unlock. Acting requires driving organizational change to accommodate these new ways of working.

Bonus: in a recent article for the 性视界 Business Review, Lakhani and several co-authors added a fifth step to this playbook. Learn what it is here.

Why This Matters

For business leaders across industries, the HBS AI Institute's research underscores that AI is reshaping business fundamentals. Understanding AI's dual role as a democratizing force in expertise and an accelerating differentiator is crucial for future-proofing your organization. Knowing its strengths and weaknesses, fostering AI-augmented teamwork, and keeping pace with AI advancement are essential for maintaining a competitive edge. Embrace AI strategically, invest in continuous learning, and be prepared to transform your organization's approach to work.

About the Speaker

Karim R. Lakhani is the Dorothy & Michael Hintze Professor of Business Administration at 性视界 Business School. He specializes in technology management, innovation, digital transformation, and artificial intelligence. He is also the Co-Founder and Faculty Chair of the HBS AI Institute and the Founder and Co-Director of the Laboratory for Innovation Science at 性视界.

Mastering Change Resilience: The Key to AI-Driven Success
August 5, 2025

The disconnect between AI's transformative potential and the actual scale of implementation represents one of today's most significant organizational challenges. In their new article for the 性视界 Business Review, "A Guide to Building Change Resilience in the Age of AI," Karim Lakhani, Dorothy and Michael Hintze Professor of Business Administration at 性视界 Business School and faculty chair and co-founder of the 性视界 Business School AI Institute; Jen Stave, executive director of the HBS AI Institute; Douglas Ng, Director of Design at the HBS AI Institute; and their co-author, a managing director at BCG X, argue that this mismatch arises from structural issues and propose change resilience as a systematic approach to building the organizational capabilities necessary for AI success.

Key Insight: The Missing Ingredient

“The primary obstacle is the ability of companies to adapt, reinvent, and scale new ways of working. We call this change resilience.” [1]

In the fast-paced business environment created by AI, leaders are no longer able to apply traditional operating models to episodic development cycles. Previously, as Lakhani and his co-authors suggest, "You modernized your systems, trained your people, and operated in a stable environment until the next wave of disruption hit." [2] However, if your old approach is falling short in today's environment and you're feeling left behind, you aren't alone: the results of a BCG survey discussed in the article report that "just 26% of organizations have achieved value from AI." [3] Responding to both the challenges and opportunities AI presents, the authors call for a fundamental shift: companies must move beyond simply managing AI-driven change and instead embed AI as a core organizational competency through the continuous and comprehensive strategy of "change resilience."

Key Insight: The Mindset

Sensing – Rewiring – Lock-In

Change resilience, according to the authors, is made up of three "muscles" working in concert to create a sustainable AI ecosystem. Sensing enables organizations "to pick up weak technological, competitive, or societal signals early." Rewiring is "the capacity to redeploy talent, data, capital, and decision rights in days or weeks, not fiscal quarters." Lock-In is "the discipline to codify what a team learns (in process, code, or policy) so the next initiative starts from a higher baseline instead of reinventing the wheel." [3] The authors describe Shopify as a company that exemplifies these characteristics, as it constantly evolves rather than adding AI to old systems. As one example, in 2023, Shopify spun off its logistics arm to concentrate on product innovation, enabling rapid development of AI-native tools like Sidekick for entrepreneurs.

Key Insight: The Playbook

Learn – Do – Imagine – Act – Care

Lakhani and his co-authors break down change resilience into five components: Learn, Do, Imagine, Act, and Care. Learning involves widespread AI experimentation to shift attitudes, empower employees, and discover opportunities to take advantage of AI. Doing targets deficiencies with fast-paced AI initiatives. Imagining puts your entire organization up for discussion, challenging you to invent new operating models instead of duct-taping existing ones. Acting makes these cycles continuous in order to establish change resilience as a foundational strategy rather than a one-off solution. Finally, Caring emphasizes wellbeing measures to ensure that employees feel supported and avoid burnout. The article discusses Accenture, Singapore-based DBS Bank, Moderna, P&G, and Cisco as companies already leading the pack by incorporating these elements into their strategy and operations.

Why This Matters

For executives and business professionals, developing change resilience represents a crucial strategic priority for competing effectively in the AI era. By focusing on the three muscles and five steps, leaders can position their companies to leverage AI and adapt to future technological advances. The companies already achieving breakthrough AI results share a common strategy: they invest in their organization's capacity to change as aggressively as they invest in AI technology itself.

If you're wondering how change resilient your organization is, the article also includes a set of questions that can act as a litmus test.

References

[1] Karim Lakhani et al., "A Guide to Building Change Resilience in the Age of AI," 性视界 Business Review, July 29, 2025.

[2] Lakhani et al., "A Guide to Building Change Resilience in the Age of AI."

[3] Lakhani et al., "A Guide to Building Change Resilience in the Age of AI."

Meet the Authors

Karim Lakhani is the Dorothy & Michael Hintze Professor of Business Administration at 性视界 Business School. He specializes in technology management, innovation, digital transformation, and artificial intelligence. He is also the Co-Founder and Faculty Chair of the HBS AI Institute and the Founder and Co-Director of the Laboratory for Innovation Science at 性视界.

Jen Stave is Executive Director of the HBS AI Institute. She was previously Senior Vice President at Wells Fargo, and has a PhD from American University.

Douglas Ng is Director of Design of the HBS AI Institute. As a digital strategist, technology educator, and innovation researcher, he specializes in AI transformation and translates the institute's research for industry leaders.

is Managing Director with BCG X, where he specializes in Generative AI, AI platform engineering, and data management.

AI Elevate: Strategy and the Declining Cost of Expertise
July 18, 2025

As AI continues to reshape industries globally, the HBS AI Institute (previously Digital Data Design Institute at 性视界 (D^3)) and the 性视界 Business School Club of the Gulf Cooperation Council hosted AI Elevate: From Readiness to Exponential Growth on December 13, 2024, in Dubai, UAE. This one-day conference provided business leaders, researchers, and government officials with crucial insights into AI strategy, industry transformation, and global market integration. For an introduction to the day-long conference, see the Opening Remarks and the Agenda.

In this session, Bobby Yerramilli-Rao, Chief Strategy Officer at Microsoft, and HBS AI Institute co-founder Karim Lakhani discussed the far-reaching implications of AI on business operations, organizational structures, and strategic planning. Their insights and research offer a compelling vision of how companies must adapt to thrive in an era of proliferating access to expertise.

Key Insight: Expertise is No Longer Scarce, it鈥檚 Scalable

"[T]hose that were behind the average, those that were below average, all of a sudden now can be at the average, and if the average of the AI is better than the humans, then they'll be at wherever the average of the AI is at."

Karim Lakhani

The most immediate impact of AI is appearing in productivity and performance, with gains that defy traditional economic expectations. AI is effectively raising the floor of competency on difficult tasks that once required years of specialized training across a wide range of fields. Expertise, which used to be a key driver of competitive advantage, is now democratized, and the implications are seismic.

Key Insight: You are More Than an Individual

"[O]ver time, each person can manage a raft of agents, AI agents, to do things for them, so now every person is effectively a team."

Bobby Yerramilli-Rao

Yerramilli-Rao and Lakhani discussed a future where employees regularly incorporate their own AI agents into their work, and even bring them along across jobs and educational experiences. According to Yerramilli-Rao and Lakhani, companies will need to integrate these AI agents into their systems while maintaining control, governance, and security. For hiring purposes they will need to identify individuals who can effectively collaborate with human-AI teams. The outcome will be flatter structures and less-siloed employees compared to traditional departmental architecture. One vivid example the speakers gave was Focus Fuel, a startup launched by three friends working part-time using GPT tools to develop, market, and scale a new consumer product, all without prior Consumer Packaged Goods (CPG) experience.

Key Insight: Know Your Core Value Proposition

"I think the imperative here is that everyone has to get very very clear about what it is that they're doing to add value and then use AI to enhance that capability."

Bobby Yerramilli-Rao

The competitive landscape may be entering a phase of continuous acceleration where companies must simultaneously leverage AI while preparing for advances in AI to match and then exceed their current capabilities. If AI levels the playing field, companies must clarify what truly sets them apart. What are you uniquely good at, and what expertise is replicable by AI or your competitors using AI?

Why This Matters

For business leaders, these insights signal the beginning of a new era where strategic value comes from focus, speed, and broad AI implementation. Those who treat this as a technology upgrade rather than a fundamental shift risk being outpaced. The question is no longer whether AI will transform your industry, but whether your organization will lead or scramble to catch up. Embracing these changes and proactively reshaping your organization around AI capabilities may be the key to unlocking previously unheard of levels of innovation, efficiency, and success in the years to come.

Read their article.

Meet the Speakers

Bobby Yerramilli-Rao is Chief Strategy Officer at Microsoft. He has co-founded several companies, and has served at organizations including Vodafone and McKinsey. He holds an MA from the University of Cambridge and a PhD from the University of Oxford.

Karim Lakhani is the Dorothy & Michael Hintze Professor of Business Administration at 性视界 Business School. He specializes in technology management, innovation, digital transformation, and artificial intelligence. He is the Co-Founder and Chair of the HBS AI Institute and the Founder and Co-Director of the Laboratory for Innovation Science at 性视界.

AI Elevate: UAE: AI Readiness and Exponential Growth
July 10, 2025

As AI continues to reshape industries globally, the 性视界 Business School AI Institute (previously the Digital Data Design Institute at 性视界 (D^3)) and the 性视界 Business School Club of the Gulf Cooperation Council hosted AI Elevate: From Readiness to Exponential Growth on December 13, 2024, in Dubai, UAE. This one-day conference provided business leaders, researchers, and government officials with crucial insights into AI strategy, industry transformation, and global market integration. For an introduction to the day-long conference, see the Opening Remarks and the Agenda.

In this session, H.E. Omar Sultan Al Olama, the world's first Minister of State for Artificial Intelligence, sat down with HBS AI Institute co-founder Karim Lakhani for a fireside chat to discuss the UAE's strategic approach to AI integration and its impact on governance, growth, and quality of life.

Key Insight: History Driving AI Adoption

"An ignorance-based decision to ban something you don't understand is going to lead to you going backwards."

H.E. Omar Sultan Al Olama

Al Olama drew an important parallel between today's AI hesitation and the Middle East's historic decision to ban the printing press, which cost the region its global knowledge leadership hundreds of years ago. Concerns about misinformation, loss of control over knowledge production, and fear of unknown consequences, what Al Olama terms "ignorance-based decisions," are top of mind now because of the uncertainty around AI. In this case, however, the UAE is aggressively leaning into the new technology, for example by appointing a Minister of State for Artificial Intelligence and launching more than 147 different applications of AI within the government.

Key Insight: A Dual Track for National Development

"Our development over 50 years was actually a very interesting cycle: we focused on software, so on people and their development, and then we focused on the hardware, which is the buildings, the bridges, the infrastructure, and now we're going back to focusing on the software, because if you always balance the two, you progress. If you choose to develop one and not the other, you will always fall behind."

H.E. Omar Sultan Al Olama

This dual approach has been central to the UAE鈥檚 growth strategy over the past five decades, with learning and upskilling in AI as only the latest step. For example, over 377 senior government officials recently completed an intensive AI training program, and 2.1 million UAE citizens engaged in prompt engineering for UAE Codes day.

Key Insight: AI for Quality of Life

"We need to dedicate this tool to the improvement of our lives."

H.E. Omar Sultan Al Olama

Al Olama stressed that AI should be used to enhance people's quality of life. For example, in Abu Dhabi, traffic lights are connected to an AI hub that optimizes flow, ensuring that the existing infrastructure can maintain efficiency even with population growth. Another example is the use of AI in airports, where facial recognition allows for a quicker and more seamless experience, reducing the lengthy waits at checkpoints that are prevalent elsewhere.

Why This Matters

Al Olama and Lakhani's conversation provides executives with examples and a strategy for approaching AI adoption and transformation that extends beyond traditional models. The UAE's experience demonstrates that successful AI implementation requires organizational forethought and commitment, balanced investment in both human and technological capital, and a fundamental reorientation towards human-centered outcomes. By fostering an AI-ready populace, the UAE shows how government, business, and society at large can collaborate to prioritize meaningful outcomes. The UAE's AI mandate is clear: invest with purpose, lead with clarity, and deploy with empathy.

Meet the Speakers

H.E. Omar Sultan Al Olama is the UAE Minister of State for Artificial Intelligence, Digital Economy and Remote Work Applications. He is also Director General of the Prime Minister's Office at the Ministry of Cabinet Affairs.

Karim Lakhani is the Dorothy & Michael Hintze Professor of Business Administration at 性视界 Business School. He specializes in technology management, innovation, digital transformation, and artificial intelligence. He is the Co-Founder and Chair of the HBS AI Institute and the Founder and Co-Director of the Laboratory for Innovation Science at 性视界.
