AI & Future of Business Technology Archives | HBS AI Institute /communities-of-practice/ai-future-business-technology/ The HBS AI Institute catalyzes new knowledge to invent a better future by solving ambitious challenges.

The AI Penalty: What We Really Prize in Empathy /the-ai-penalty-what-we-really-prize-in-empathy/ Thu, 02 Oct 2025

The post The AI Penalty: What We Really Prize in Empathy appeared first on HBS AI Institute.

Have you ever received a response from ChatGPT that seems to get you almost too well? A recent preprint review of research, “AI-Generated Empathy: Opportunities, limits, and future directions,” written by a team including Amit Goldenberg, faculty principal investigator in the Digital Emotions Lab at the Digital Data Design Institute at Harvard (D^3), suggests that short interactions with AI might actually be better at making us feel understood and cared for than our fellow humans, at least until we discover that we’re talking to a machine. These findings challenge our fundamental assumptions about empathy, emotional support, and what it means to truly connect in an increasingly digital world.

Key Insight: A New Perspective on Understanding Empathy

“Empathy is in the mind of the beholder.” [1]

Earlier psychological research has focused primarily on the empathizer, studying what makes them more or less empathic, their biases, and their capacity for emotional connection. But the rise of synthetic AI-powered conversation partners flips the perspective, pivoting from the writer (the empathizer) to the receiver (the empathized). While we can’t meaningfully ask whether an AI truly ‘cares’ or ‘shares feelings,’ the focus shifts to the recipient’s perception of empathy, whether that person experiences feeling heard, cared for, and understood.

Key Insight: AI’s Surprising Advantage

“Generally speaking, people find text generated by modern LLMs to be more empathic than text written by humans.” [2]

Across diverse contexts like crowdsourced workers, crisis-line supporters, and even medical doctors, AI-generated messages often outperform human-written ones on perceived empathy. Why might that be? AI can consistently produce structured, attentive, and validating language. AI doesn’t get tired or stop trying, and its phrasing can be optimized for clarity and warmth. In short, the authors identify an “AI Advantage” in the ability to generate more consistently empathetic responses than humans can.

Key Insight: Belief Beats Content

“However, as soon as people believe (accurately or not) they are interacting with an AI, they downgrade the value of the text—something that we call the ‘AI Penalty’.” [3]

The flip side is stark: label the very same message as AI, and ratings drop. Termed the “AI Penalty” by the authors, it is strongest on the dimensions of “feeling with” and “caring,” precisely where people expect a human’s emotional labor and intention. The penalty also emerges when people suspect AI involvement in a message they otherwise believed was human. Taken together, the AI Advantage and AI Penalty suggest that people’s cognitive understanding of AI capabilities conflicts with their emotional preferences for human connection.

Why This Matters

For business leaders and executives, understanding these insights is critical for informed decision-making about customer experience, employee well-being, and technology implementation. Companies might consider hybrid approaches where AI augments human empathy rather than replacing it, such as providing real-time coaching to customer service representatives or helping employees craft more supportive communications. Perhaps most importantly, this research highlights the need for leaders to understand the psychological complexity of human-AI interactions. As AI becomes more sophisticated at mimicking human emotional intelligence, success might not just depend on technical capabilities and deployment, but on navigating the complicated ways that people perceive, value, and respond to digital communications.

References

[1] Desmond C. Ong et al., “AI-Generated Empathy: Opportunities, limits, and future directions.” PsyArXiv Preprint (September 23, 2025): 4. Preprint DOI: .

[2] Ong et al., “AI-Generated Empathy”: 5.

[3] Ong et al., “AI-Generated Empathy”: 7.

Meet the Authors

Desmond C. Ong is an Assistant Professor of Psychology at the University of Texas at Austin.

Amit Goldenberg is Assistant Professor of Business Administration at Harvard Business School and faculty principal investigator in the Digital Emotions Lab at the Digital Data Design Institute at Harvard (D^3).

is a Professor in the Department of Psychology at the University of Toronto, with a cross-appointment as Professor in the Department of Marketing at the Rotman School of Management.

is an Associate Professor of Psychology at the Hebrew University of Jerusalem.


Why AI Helps Until It Doesn’t: Inside the GenAI Wall Effect /why-ai-helps-until-it-doesnt-inside-the-genai-wall-effect/ Thu, 18 Sep 2025


The promise of Generative AI (GenAI) often sounds like this: give any employee access to AI tools, and they’ll suddenly be able to perform tasks outside their domain of expertise with remarkable proficiency and speed. As discussed in the new working paper “The GenAI Wall Effect: Examining the Limits to Horizontal Expertise Transfer Between Occupational Insiders and Outsiders,” the reality of AI’s ability to balance the scales across occupational skillsets is far more nuanced. Written by a team of six authors, including two Principal Investigators and a Research Associate in the Data Science and AI Operations Lab at the Digital Data Design Institute at Harvard (D^3), the article reveals surprising answers about the transformative power of AI in the workplace through a comprehensive study of 78 employees at a UK-based global trading company.

Key Insight: The GenAI Wall

“[W]e predict a ‘GenAI wall effect’ […] the emergence of a point at which GenAI can no longer meaningfully reduce the expertise gaps between insiders and outsiders because of the wider knowledge distance between their jobs.” [1]

While most research has focused on how AI helps lower-performing individuals catch up to their higher-performing colleagues within the same job, this study instead asked whether GenAI could help people from different occupations take on tasks that aren’t typically part of their role. To do so, the authors defined three types of participants: insiders (those who already perform certain tasks as part of their jobs), adjacent outsiders (whose roles are related but who don’t directly perform those tasks), and distant outsiders (whose roles have little overlap in tasks). The study then introduces the ideas of “knowledge distance” and “expertise gaps,” measures of how far apart two roles are in terms of the skills they use, and the authors claim that GenAI can close the distance for adjacent outsiders but hits a ‘wall’ with distant outsiders, where its benefits stop.

Key Insight: An AI Field Experiment

“[W]hen assisted by GenAI, marketing specialists and technology specialists produced article conceptualizations on par with web analysts.” [2]

To find out where GenAI helps and where it hits limits, the researchers ran a large experiment with employees at the UK-based firm IG, using web analysts who regularly write marketing articles (insiders), marketing specialists from the same department who don’t write articles (adjacent outsiders), and software developers and data scientists (distant outsiders). Each worker had to complete two parts of the web analyst role: (1) conceptualization, building a structured article brief with keywords, headings, and FAQs, and (2) execution, writing the full article. Some participants had access to custom GenAI tools, and others did not. The results of the conceptualization task showed that GenAI can be a powerful equalizer: not only did it improve quality, but also speed, and the gains were especially large for lower-performing employees.

Key Insight: When the Wall Appears

“In short, GenAI levels the playing field in article execution only for marketing specialists.” [3]

The picture changed when participants moved to the execution task. With GenAI support, the web analysts (insiders) and marketing specialists (adjacent outsiders) both produced strong articles, but the technologists (distant outsiders) lagged behind. In other words, AI narrowed the gap for marketers, but a wall appeared for developers and data scientists. Why did this happen? The study’s interviews offer a clue: web analysts and marketers approached the task with the shared foundation of sensitivity to audience needs, conversion strategies, and the rhythms of effective marketing copy. That background let them use GenAI’s suggestions wisely, keeping what worked, editing what didn’t, and shaping the writing into something publishable.

Why This Matters

For business leaders deciding how to employ AI, this study offers a new operational map based around adjacency. Employees can likely expand into related domains, but may struggle with distant ones. AI-assisted cross-training might work best for conceptual and strategic work, while specialized roles with complex execution tasks will still likely call for narrow-focused experts. Most importantly, capitalize on where AI aids human knowledge the most, allowing you to redesign roles and career paths around the skills and strengths that remain uniquely human and critical to your organization.

Bonus

This study was also recently discussed in Charter, the business reporting section of Time. Read their analysis.

References

[1] Luca Vendraminelli et al., “The GenAI Wall Effect: Examining the Limits to Horizontal Expertise Transfer Between Occupational Insiders and Outsiders,” Harvard Business School Technology & Operations Mgt. Unit Working Paper No. 26-011, Harvard Business School Working Paper No. 26-011 (September 08, 2025): 3, .

[2] Vendraminelli et al., “The GenAI Wall Effect,” 26.

[3] Vendraminelli et al., “The GenAI Wall Effect,” 30.

Meet the Authors

Luca Vendraminelli is a Postdoctoral Researcher at the Digital Economy Lab and the Stanford Institute for Human-Centered Artificial Intelligence (HAI) at Stanford University.

is a PhD student in the Technology and Operations Management Unit at Harvard Business School.

is an Assistant Professor in the Technology and Operations Management Unit at Harvard Business School and Principal Investigator at the D^3 Data Science and AI Operations Lab hosted within the Laboratory for Innovation Science.

is an Assistant Professor at Stanford University in the Department of Management Science and Engineering.

is an Associate Professor of Business Administration at Harvard Business School and Principal Investigator at the D^3 Data Science and AI Operations Lab hosted within the Laboratory for Innovation Science.


Larger, Faster, Cheaper: The Future of Market Research with AI /larger-faster-cheaper-the-future-of-market-research-with-ai/ Thu, 28 Aug 2025


As businesses continue to navigate the complexities of product development and innovation, generative AI has the potential to be a powerful new tool for market research. In their recent article for the Harvard Business Review, “Using Gen AI for Early-Stage Market Research,” Ayelet Israeli, co-founder of the Customer Intelligence Lab at the Digital Data Design (D^3) Institute at Harvard, and her co-authors James Brand and Donald Ngwe explain their research on the possibilities and pitfalls of using LLMs to create synthetic customers.

Key Insight: The Power of Synthetic Customers

“Our research shows that LLMs, used carefully, can function as synthetic focus groups, producing early insights on customer preferences in a fraction of the time and cost of human studies.” [1]

By combining LLMs with traditional research methods, companies have the opportunity to simulate consumer sentiments like willingness-to-pay (WTP) to make product innovation faster and cheaper. The authors’ research shows that for simulated tests of categories like toothpaste and tablets, LLM-created synthetic customers produced realistic and accurate preferences for many familiar attributes. What’s more, teams could explore dozens or even hundreds of ideas by using these synthetic consumers as an initial filter, overcoming traditional limitations in scope.
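As a rough illustration of the idea only (not the authors’ actual methodology), a synthetic-customer pipeline needs little more than a persona-framed prompt, a way to parse a price from the model’s free-text reply, and a robust aggregate across many simulated respondents. All function names below are hypothetical:

```python
import re
from statistics import median

def persona_prompt(persona: str, product: str) -> str:
    """Frame the LLM as a synthetic customer and elicit willingness-to-pay."""
    return (
        f"You are a customer: {persona}. "
        "For the product described below, answer with only the highest "
        f"price in dollars you would pay.\nProduct: {product}"
    )

def parse_wtp(reply: str) -> float:
    """Pull the first dollar amount out of a free-text model reply."""
    match = re.search(r"\$?(\d+(?:\.\d+)?)", reply)
    if match is None:
        raise ValueError(f"no price found in reply: {reply!r}")
    return float(match.group(1))

def estimate_wtp(replies: list[str]) -> float:
    """Aggregate many synthetic-customer replies into one WTP estimate.

    The median is more robust than the mean to the occasional
    off-the-wall reply (e.g. the pancake-toothpaste enthusiasm the
    authors describe).
    """
    return median(parse_wtp(r) for r in replies)
```

In practice each reply would come from an API call seeded with a different persona drawn from (ideally proprietary) customer data; the aggregation step is where outlier enthusiasm gets damped.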

Key Insight: The Competitive Advantage of Proprietary Data

“[F]irms that build and fine-tune their own internal ‘customer simulators’ using LLMs and historical survey data can unlock sharper early-stage insights.” [2]

While out-of-the-box LLM usage showed promising results, companies that incorporated their own historical customer data achieved better ones. For example, the authors noted that LLMs often rate novelty higher than actual humans do; as a result, synthetic customers were initially positive about pancake-flavored toothpaste. Fine-tuning the LLM with data from an actual study helped correct this enthusiasm and produced WTP results more in line with actual human sentiment. The researchers found similar results when testing hypothetical features, like built-in projectors for laptops.

Key Insight: Strategic Integration, Not Replacement

“For anything beyond early-stage high-level trend detection, human research remains essential.” [3]

The most successful application of this technology comes from understanding it as an augmentation tool rather than a replacement for traditional research. Because LLMs are trained on static data, they may not reflect current market conditions without frequent updates and new data. The authors therefore propose a new innovation roadmap: broaden the top of the innovation funnel by using AI, but keep the bottom narrow through sharper, more cautious human analysis.

Why This Matters

Synthetic customers might not totally replace human research, but they can dramatically enhance it. For business leaders and executives, this represents a fundamental shift in the speed and scope of innovation strategy. The ability to rapidly test multiple prototypes or concepts at low cost could mean faster time-to-market, reduced development risk, and more efficient resource allocation. Organizations that build internal AI-powered customer simulation capabilities could gain a significant competitive advantage from fine-tuning models with their proprietary data, creating a virtuous cycle where better data leads to better insights. At the same time, decision makers and marketing professionals must be vigilant to recognize and respond to the shortcomings of these new technologies and tools.

Bonus

Learn more about the authors’ original research, and go a step further with the GenAI + Marketing Learning Module from D^3. You’ll learn the basics of engaging an LLM, with broadly applicable and actionable techniques to create content, automate tasks, and revolutionize workflows. Then the program takes a deep dive into how AI can redefine your early-stage marketing research.

References

[1] James Brand et al., “Using Gen AI for Early-Stage Market Research,” ӽ Business Review, July 18, 2025, .

[2] Brand et al., “Using Gen AI for Early-Stage Market Research.”

[3] Brand et al., “Using Gen AI for Early-Stage Market Research.”

Meet the Authors

James Brand is Principal Researcher and Economist in the Office of the Chief Economist at Microsoft.


Ayelet Israeli is the Marvin Bower Associate Professor of Business Administration at Harvard Business School and co-founder of the Customer Intelligence Lab at the Digital Data Design (D^3) Institute at Harvard. She studies omni-channel and e-commerce markets, and her research focuses on data-driven marketing, with an emphasis on how businesses can leverage their own data, customer data, and market data to improve outcomes.

Donald Ngwe is Senior Director of Economics in the Office of the Chief Economist at Microsoft.


When Software Becomes Staff /when-software-becomes-staff/ Mon, 25 Aug 2025


If AI can accept light supervision and then be off and running, what does it mean for how leaders and organizations design work, govern risk, and account for value? Drawing on perspectives from Jen Stave, Executive Director of the Digital Data Design (D^3) Institute at Harvard, Columbia Business School’s Stephan Meier, and Salesforce CEO Marc Benioff, a recent New York Times Shop Talk article briefly explores the rise and implications of AI agents that can act like teammates or supervisees.

Key Insight: Agentic AI as Managed Teammates

“Like a human employee, these tools would work independently with a bit of management.”

Jen Stave

Agentic tools are moving beyond chatbots and image generation. Unlike traditional automation that follows rigid scripts, AI agents function more like human employees: capable of independent decision-making after being given high-level goals and objectives.

Key Insight: An Uncertain Future

“How the fruits of digital labor will be treated in economic terms is still unsettled.”

Jen Stave

On one hand, the impact of AI is already here and being measured, as evidenced by how the use of AI agents at Salesforce led to a 17% customer service cost reduction over nine months. But the article also raises a range of undecided questions related to economic capture, quality and accountability, and the right balance between human and AI worker numbers.

Why This Matters

For forward-thinking executives, increasingly the question isn’t whether to adopt agentic AI, but how to operationalize it productively and responsibly. While the efficiency gains are compelling, success requires thoughtful integration by leaders who are ready to address challenges of workforce transition, quality control, and ROI measurement.

Bonus

To read more about agentic AI and digital labor, read the Harvard Business Review article co-authored by Jen Stave.


Smarter Memories, Stronger Agents: How Selective Recall Boosts LLM Performance /smarter-memories-stronger-agents-how-selective-recall-boosts-llm-performance/ Thu, 21 Aug 2025


One of AI agents’ most powerful tools is memory: the ability to learn from the past, adapt to new situations, and improve over time. But as organizations and professionals increasingly deploy AI agents for complex and long-term tasks, an important question emerges: how can we ensure that these systems learn from experience without getting trapped by their past mistakes? In the new paper “How Memory Management Impacts LLM Agents: An Empirical Study of Experience-Following Behavior,” Himabindu Lakkaraju, Assistant Professor of Business Administration at Harvard Business School and PI in the Trustworthy AI Lab at the Digital Data Design (D^3) Institute at Harvard, and several co-authors delve into the critical role of memory management in LLM agents. Their paper sheds light on how strategic addition and deletion of experiences can impact the long-term performance of AI agents and, critically, how the absence or mismanagement of these measures can actually make agents worse.

Key Insight: Accelerate or Anchor?

“[A] high ‘input similarity’ between the current task query and the one from the retrieved record often yields a high ‘output similarity’ between their corresponding (output) executions.” [1]

The study identifies a foundational behavioral pattern: when an agent’s current task closely resembles a stored memory, the outputs tend to closely match as well. This “experience-following” correlation mirrors how humans often rely on familiar patterns, and it can accelerate learning when the stored example is correct. However, it’s also not without risks. If erroneous or low-quality experiences are stored in memory, they can be applied to future tasks, thereby decreasing the agent’s overall performance. This means that the quality of stored examples is paramount, as bad memories don’t just linger, they can create a propagating error feedback loop.
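The experience-following pattern rests on nearest-neighbor retrieval over stored tasks: the agent pulls the memory whose task most resembles the current query, and its output then tends to follow that record, for better or worse. A minimal sketch, with illustrative field names and plain cosine similarity standing in for whatever retriever a production agent would use:

```python
import math

def cosine(u: list[float], v: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve(memory: list[dict], query_vec: list[float]) -> dict:
    """Return the stored experience closest to the current task query.

    High input similarity tends to yield high output similarity,
    which accelerates learning when the record is good and
    propagates errors when it is not.
    """
    return max(memory, key=lambda rec: cosine(rec["task_vec"], query_vec))
```

The sketch makes the risk concrete: nothing in the retrieval step checks whether the closest memory was a *good* experience, which is why the curation mechanisms below matter.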

Key Insight: Selective Addition

“[S]imply storing every experience leads to significantly worse outcomes.” [2]

If the experience-following property shows why quality matters in LLM agents, selective addition shows how to control it. A clear finding from the study is that indiscriminate memory growth actually hurts performance. In tests with three different agents, covering electronic health records (EHRs), the LLM-based autonomous driving agent AgentDriver, and a network security agent, storing every task and output (“add-all”) performed worse than using no memory addition at all. By contrast, applying strict evaluation criteria and filtering before storage led to an average 10% performance boost, so memory improvement is less about hoarding information than curating a high-quality knowledge base.
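A selective-addition policy can be sketched as a quality gate in front of the memory store. This is an illustration of the general idea, not the paper’s implementation; the `score` argument stands in for whatever evaluation signal is available (a verifier, user feedback, task success):

```python
def add_if_valuable(memory: list[dict], task: str, output: str,
                    score: float, threshold: float = 0.8) -> bool:
    """Store an experience only if it passes a quality gate.

    Contrast with the 'add-all' policy, which the study found
    performs worse than adding nothing at all. Returns True if
    the experience was stored.
    """
    if score < threshold:
        return False  # low-quality experience: never enters memory
    memory.append({
        "task": task,
        "output": output,
        "uses": 0,          # retrieval count, used later for pruning
        "low_sim_uses": 0,  # retrievals that produced poor output
    })
    return True
```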

Key Insight: Improvement through Deletion

“History-based deletion consistently removes poor demonstrations with low output similarity, thereby improving long-term performance.” [3]

Even with careful addition, not all stored experiences are equally useful over time. Some look similar to new tasks (“high input similarity”), but consistently produce poor output (“low output similarity”). The authors term this “misaligned experience replay,” and show that pruning these entries improves long-term outcomes. Removing experiences with repeatedly low utility (“history-based deletion”) offered the best boost to performance while effectively and efficiently maintaining memory size. From a strategic perspective, this practice mirrors audits of playbooks, datasets, and best practices to ensure that institutional knowledge remains in top shape.
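History-based deletion can be sketched as a periodic pruning pass over usage statistics: records that have been retrieved often but repeatedly produced low output similarity get dropped. Field names and thresholds below are illustrative assumptions, not the paper’s:

```python
def prune_history_based(memory: list[dict], min_uses: int = 3,
                        max_low_ratio: float = 0.5) -> list[dict]:
    """Drop memories with a track record of misaligned experience replay.

    A record is deleted once it has been retrieved at least `min_uses`
    times and more than `max_low_ratio` of those retrievals produced
    low output similarity. Rarely-used records are kept, since there
    is not yet enough history to judge them.
    """
    kept = []
    for rec in memory:
        if rec["uses"] >= min_uses and rec["low_sim_uses"] / rec["uses"] > max_low_ratio:
            continue  # repeatedly low utility: delete
        kept.append(rec)
    return kept
```

Run after each batch of tasks, this keeps memory size bounded while steadily removing the "bad memories" that would otherwise feed the error loop described above.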

Why This Matters

The results from this research should give business leaders important context for thinking about how to choose and deploy AI agents: more data isn’t automatically better, and AI’s “experience” can actually be a liability, entrenching errors and bloating infrastructure. Disciplined curation, by selectively adding high-value experiences and strategically deleting low-value or misaligned ones, yields not only better accuracy but also more efficient, adaptable systems. In a world where executives may be involved in decision-making around LLM agents for their organizations, it’s important to have a blueprint for keeping AI agents sharp, reliable, and resilient, just like they plan for the training and advancement of their human employees. By understanding and investing in the processes that keep your AI’s memory in top shape, your business will be equipped for tomorrow’s challenges.

References

[1] Zidi Xiong et al., “How Memory Management Impacts LLM Agents: An Empirical Study of Experience-Following Behavior,” arXiv preprint arXiv:2505.16067v1 (May 21, 2025): 2.

[2] Xiong et al., “How Memory Management Impacts LLM Agents,” 5.

[3] Xiong et al., “How Memory Management Impacts LLM Agents,” 9.

Meet the Authors

Zidi Xiong is a PhD student in computer science at Harvard University, advised by Himabindu Lakkaraju.

is a PhD student in computer science at Michigan State University.

is a PhD student in computer science engineering at University of Minnesota – Twin Cities.

is a PhD student in computer science and engineering at Michigan State University.

is a University Foundation Professor in the computer science and engineering department at Michigan State University.

Himabindu Lakkaraju is an Assistant Professor of Business Administration at Harvard Business School and PI in D^3’s Trustworthy AI Lab. She is also a faculty affiliate in the Department of Computer Science at Harvard University, the Harvard Data Science Initiative, the Center for Research on Computation and Society, and the Laboratory of Innovation Science at Harvard. Professor Lakkaraju’s research focuses on the algorithmic, practical, and ethical implications of deploying AI models in domains involving high-stakes decisions such as healthcare, business, and policy.

is Assistant Professor of Computer Science at University of Georgia.


Getting Ahead of the Curve: Insights from 3 Years of the Digital Data Design (D^3) Institute at Harvard /getting-ahead-of-the-curve-insights-from-3-years-of-the-digital-data-design-d3-institute-at-harvard/ Thu, 14 Aug 2025


In the ever-evolving AI landscape, are you truly ready to integrate new technologies effectively, taking advantage of the radical opportunities they present for productivity increases and better operating models? Karim R. Lakhani, Dorothy and Michael Hintze Professor of Business Administration at Harvard Business School and faculty chair and co-founder of the Digital Data Design (D^3) Institute at Harvard, recently shed light on three years of the institute’s AI research findings and offered a practical toolkit for businesses and individuals in his talk for TEDxBoston.

Key Insight: Falling Asleep at the Wheel

“There are some things that AI is very good at and when you use it for that function, AI performs incredibly well and people get better. But when you use AI for the task where it’s not good for, your performance drops and drops dramatically.”

Karim R. Lakhani

One of the most striking findings Professor Lakhani mentioned came from D^3’s study with Boston Consulting Group (BCG). When used for tasks within its strengths, AI can catapult average performers to the 95th percentile, meaning that expertise is no longer scarce and businesses can be filled with entire teams of top performers. However, even high performers saw their results decline when AI was applied to tasks outside of its current capabilities, a phenomenon HBS postdoctoral researcher Fabrizio Dell’Acqua calls “Falling Asleep at the Wheel.”

Key Insight: From Tool to Teammate to Boss

“What we discovered in our study was that an individual using AI is as good as a team without AI.”

Karim R. Lakhani

A D^3 study with Procter & Gamble (P&G) showed that AI can help individuals and teams to produce higher quality ideas, “democratizing” expertise by leveling the playing field. Beyond productivity gains, AI functioned as a collaborative partner, providing balance across domains and enabling those with technical expertise to incorporate a commercial perspective into their innovation efforts, and vice-versa for those with commercial expertise. What’s more, organizations in the future may use AI agents to lead teams. As Lakhani mentioned, Uber already utilizes this operating model by putting algorithms in charge of HR decisions like hiring and firing.

Key Insight: Exponential Acceleration

“While the performance capabilities of AI models is increasing exponentially […] the absorption capability of most organizations is linear.”

Karim R. Lakhani

The speed of AI advancement, compared to how most companies are adopting and integrating these tools, is creating a widening gap that smart executives will target. Unlike previous technologies such as WiFi or web browsers that organizations could evaluate slowly, AI fundamentally changes the nature of work itself, and companies that fail to keep pace may find themselves behind competitors who successfully ride the AI wave.

Key Insight: The Playbook

Learn – Do – Imagine – Act

At the end of his talk, Lakhani outlined a strategic framework for leaders navigating the AI revolution. Learning requires continuously understanding AI’s capabilities and impact, and growing your AI skillset. Doing means actually using AI tools, and in particular executives need to get their feet wet with AI rather than just delegating experimentation to their employees. Imagining involves conceiving new operating models and workflows that AI can unlock. Acting requires driving organizational change to accommodate these new ways of working.

Bonus: In a recent article for the Harvard Business Review, Lakhani and several co-authors added a fifth step to this playbook.

Why This Matters

For business leaders across industries, D^3’s research underscores that AI is reshaping business fundamentals. Understanding AI’s dual role as a democratizing force in expertise and an accelerating differentiator is crucial for future-proofing your organization. Grasping its strengths and weaknesses, fostering AI-augmented teamwork, and keeping pace with AI advancement are essential for maintaining a competitive edge. Embrace AI strategically, invest in continuous learning, and be prepared to transform your organization’s approach to work.

About the Speaker


Karim R. Lakhani is the Dorothy & Michael Hintze Professor of Business Administration at Harvard Business School. He specializes in technology management, innovation, digital transformation, and artificial intelligence. He is also the Co-Founder and Faculty Chair of the Digital Data Design (D^3) Institute at Harvard and the Founder and Co-Director of the Laboratory for Innovation Science at Harvard.


Mastering Change Resilience: The Key to AI-Driven Success /mastering-change-resilience-the-key-to-ai-driven-success/ Tue, 05 Aug 2025 13:40:50 +0000

The post Mastering Change Resilience: The Key to AI-Driven Success appeared first on HBS AI Institute.

The disconnect between AI’s transformative potential and the actual scale of implementation represents one of today’s most significant organizational challenges. In their new article for the Harvard Business Review, “A Guide to Building Change Resilience in the Age of AI,” Karim Lakhani, Dorothy and Michael Hintze Professor of Business Administration at Harvard Business School and faculty chair and co-founder of the Digital Data Design (D^3) Institute at Harvard; Jen Stave, executive director of the Digital Data Design (D^3) Institute at Harvard; Douglas Ng, Director of Design at the Digital Data Design (D^3) Institute at Harvard; and a managing director at BCG X argue that this mismatch arises from structural issues, and they propose change resilience as a systematic approach to building the organizational capabilities necessary for AI success.

Key Insight: The Missing Ingredient

“The primary obstacle is the ability of companies to adapt, reinvent, and scale new ways of working. We call this change resilience.” [1]

In the fast-paced business environment created by AI, leaders are no longer able to apply traditional operating models to episodic development cycles. Previously, as Lakhani and his co-authors suggest, “You modernized your systems, trained your people, and operated in a stable environment until the next wave of disruption hit.” [2] However, if your old approach is falling short in today’s environment and you’re feeling left behind, you aren’t alone: the results of a BCG survey discussed in the article report that “just 26% of organizations have achieved value from AI.” [3] Responding to both the challenges and opportunities AI presents, the authors call for a fundamental shift: companies must move beyond simply managing AI-driven change and instead embed AI as a core organizational competency through the continuous and comprehensive strategy of “change resilience.”

Key Insight: The Mindset

Sensing – Rewiring – Lock-In

Change resilience, according to the authors, is made up of three ‘muscles’ working in concert to create a sustainable AI ecosystem. Sensing enables organizations “to pick up weak technological, competitive, or societal signals early.” Rewiring is “the capacity to redeploy talent, data, capital, and decision rights in days or weeks, not fiscal quarters.” Lock-In is “the discipline to codify what a team learns (in process, code, or policy) so the next initiative starts from a higher baseline instead of reinventing the wheel.” [3] The authors describe Shopify as a company that exemplifies these characteristics, as it constantly evolves rather than adding AI to old systems. As one example, in 2023, Shopify spun off its logistics arm to concentrate on product innovation, enabling rapid development of AI-native tools like Sidekick for entrepreneurs.

Key Insight: The Playbook

Learn – Do – Imagine – Act – Care

Lakhani and his co-authors break down change resilience into five components: Learn, Do, Imagine, Act, and Care. Learning involves widespread AI experimentation to shift attitudes, empower employees, and discover opportunities to take advantage of AI. Doing targets deficiencies with fast-paced AI initiatives. Imagining puts your entire organization up for discussion, challenging you to invent new operating models instead of duct-taping existing ones. Acting makes these cycles continuous in order to establish change resilience as a foundational strategy rather than a one-off solution. Finally, Caring emphasizes wellbeing measures to ensure that employees feel supported and avoid burnout. The article discusses Accenture, Singapore-based DBS Bank, Moderna, P&G, and Cisco as already leading the pack by incorporating these elements into their strategy and operations.

Why This Matters

For executives and business professionals, developing change resilience represents a crucial strategic priority for competing effectively in the AI era. By focusing on the three muscles and five steps, leaders can position their companies to leverage AI and adapt to future technological advances. The companies already achieving breakthrough AI results share a common strategy: they invest in their organization’s capacity to change as aggressively as they invest in AI technology itself.

If you’re wondering how change resilient your organization is, “A Guide to Building Change Resilience in the Age of AI” also includes a set of questions that can act as a litmus test.

References

[1] Karim Lakhani et al., “A Guide to Building Change Resilience in the Age of AI,” Harvard Business Review, July 29, 2025.

[2] Lakhani et al., “A Guide to Building Change Resilience in the Age of AI.”

[3] Lakhani et al., “A Guide to Building Change Resilience in the Age of AI.”

Meet the Authors

Karim Lakhani is the Dorothy & Michael Hintze Professor of Business Administration at Harvard Business School. He specializes in technology management, innovation, digital transformation, and artificial intelligence. He is also the Co-Founder and Faculty Chair of the Digital Data Design (D^3) Institute at Harvard and the Founder and Co-Director of the Laboratory for Innovation Science at Harvard.

Jen Stave is Executive Director of the Digital Data Design (D^3) Institute at Harvard. She was previously Senior Vice President at Wells Fargo, and has a PhD from American University.

Douglas Ng is Director of Design at the Digital Data Design (D^3) Institute at Harvard. As a digital strategist, technology educator, and innovation researcher, he specializes in AI transformation and translates the institute’s research for industry leaders.

is Managing Director with BCG X, where he specializes in Generative AI, AI platform engineering, and data management.

AI Elevate: Strategy and the Declining Cost of Expertise /ai-elevate-strategy-and-the-declining-cost-of-expertise/ Fri, 18 Jul 2025 13:54:56 +0000

The post AI Elevate: Strategy and the Declining Cost of Expertise appeared first on HBS AI Institute.

As AI continues to reshape industries globally, the Digital Data Design (D^3) Institute at Harvard and the Harvard Business School Club of the Gulf Cooperation Council hosted AI Elevate: From Readiness to Exponential Growth on December 13, 2024, in Dubai, UAE. This one-day conference provided business leaders, researchers, and government officials with crucial insights into AI strategy, industry transformation, and global market integration. For an introduction to the day-long conference, see the Opening Remarks and the Agenda.

In one session, Bobby Yerramilli-Rao, Chief Strategy Officer at Microsoft, and D^3 co-founder Karim Lakhani discussed the far-reaching implications of AI for business operations, organizational structures, and strategic planning. Their insights and research offer a compelling vision of how companies must adapt to thrive in an era of proliferating access to expertise.

Key Insight: Expertise is No Longer Scarce, it’s Scalable

“[T]hose that were behind the average, those that were below average, all of a sudden now can be at the average, and if the average of the AI is better than the humans, then they’ll be at wherever the average of the AI is at.”

Karim Lakhani

The most immediate impact of AI is appearing in productivity and performance, with gains that defy traditional economic expectations. AI is effectively raising the floor of competency on difficult tasks that once required years of specialized training across a wide range of fields. Expertise, which used to be a key driver of competitive advantage, is now democratized, and the implications are seismic.

Key Insight: You are More Than an Individual

“[O]ver time, each person can manage a raft of agents, AI agents, to do things for them, so now every person is effectively a team.”

Bobby Yerramilli-Rao

Yerramilli-Rao and Lakhani discussed a future where employees routinely incorporate their own AI agents into their work, and even bring them along across jobs and educational experiences. According to the speakers, companies will need to integrate these AI agents into their systems while maintaining control, governance, and security, and for hiring they will need to identify individuals who can collaborate effectively in human-AI teams. The outcome will be flatter structures and less-siloed employees compared to traditional departmental architecture. One vivid example the speakers gave was Focus Fuel, a startup launched by three friends working part-time, who used GPT tools to develop, market, and scale a new consumer product, all without prior Consumer Packaged Goods (CPG) experience.

Key Insight: Know Your Core Value Proposition

“I think the imperative here is that everyone has to get very very clear about what it is that they’re doing to add value and then use AI to enhance that capability.”

Bobby Yerramilli-Rao

The competitive landscape may be entering a phase of continuous acceleration where companies must simultaneously leverage AI while preparing for advances in AI to match and then exceed their current capabilities. If AI levels the playing field, companies must clarify what truly sets them apart. What are you uniquely good at, and what expertise is replicable by AI or your competitors using AI?

Why This Matters

For business leaders, these insights signal the beginning of a new era where strategic value comes from focus, speed, and broad AI implementation. Those who treat this as a technology upgrade rather than a fundamental shift risk being outpaced. The question is no longer whether AI will transform your industry, but whether your organization will lead or scramble to catch up. Embracing these changes and proactively reshaping your organization around AI capabilities may be the key to unlocking previously unheard-of levels of innovation, efficiency, and success in the years to come.

Read their article.

Meet the Speakers

Bobby Yerramilli-Rao is Chief Strategy Officer at Microsoft. He has co-founded several companies, and has served at organizations including Vodafone and McKinsey. He holds an MA from the University of Cambridge and a PhD from the University of Oxford.

Karim Lakhani is the Dorothy & Michael Hintze Professor of Business Administration at Harvard Business School. He specializes in technology management, innovation, digital transformation, and artificial intelligence. He is the Co-Founder and Faculty Chair of the Digital Data Design (D^3) Institute at Harvard and the Founder and Co-Director of the Laboratory for Innovation Science at Harvard.

AI Elevate: UAE: AI Readiness and Exponential Growth /ai-elevate-uae-ai-readiness-and-exponential-growth/ Thu, 10 Jul 2025 14:55:10 +0000

The post AI Elevate: UAE: AI Readiness and Exponential Growth appeared first on HBS AI Institute.

As AI continues to reshape industries globally, the Digital Data Design (D^3) Institute at Harvard and the Harvard Business School Club of the Gulf Cooperation Council hosted AI Elevate: From Readiness to Exponential Growth on December 13, 2024, in Dubai, UAE. This one-day conference provided business leaders, researchers, and government officials with crucial insights into AI strategy, industry transformation, and global market integration. For an introduction to the day-long conference, see the Opening Remarks and the Agenda.

In this session, H.E. Omar Sultan Al Olama, the world’s first Minister of State for Artificial Intelligence, sat down with D^3 co-founder Karim Lakhani for a fireside chat about the UAE’s strategic approach to AI integration and its impact on governance, growth, and quality of life.

Key Insight: History Driving AI Adoption

“An ignorance-based decision to ban something you don’t understand is going to lead to you going backwards.”

H.E. Omar Sultan Al Olama

Al Olama drew an important parallel between today’s AI hesitation and the Middle East’s historic decision to ban the printing press, which cost the region its global knowledge leadership for hundreds of years. Concerns about misinformation, loss of control over knowledge production, and fear of unknown consequences, what Al Olama terms ‘ignorance-based decisions,’ are top of mind again amid the uncertainty around AI. This time, however, the UAE is aggressively leaning into the new technology, for example by appointing a Minister of State for Artificial Intelligence and launching more than 147 different applications of AI within the government.

Key Insight: A Dual Track for National Development

“Our development over 50 years was actually a very interesting cycle: we focused on software, so on people and their development, and then we focused on the hardware, which is the buildings, the bridges, the infrastructure, and now we’re going back to focusing on the software, because if you always balance the two, you progress. If you choose to develop one and not the other, you will always fall behind.”

H.E. Omar Sultan Al Olama

This dual approach has been central to the UAE’s growth strategy over the past five decades, with learning and upskilling in AI as only the latest step. For example, over 377 senior government officials recently completed an intensive AI training program, and 2.1 million UAE citizens engaged in prompt engineering for UAE Codes day.

Key Insight: AI for Quality of Life

“We need to dedicate this tool to the improvement of our lives.”

H.E. Omar Sultan Al Olama

Al Olama stressed that AI should be used to enhance people’s quality of life. In Abu Dhabi, for example, traffic lights are connected to an AI hub that optimizes flow, ensuring that the existing infrastructure can maintain efficiency even with population growth. Another example is the use of AI in airports, where facial recognition technology allows a quicker, more seamless experience, reducing the lengthy checkpoint waits prevalent elsewhere.

Why This Matters

Al Olama and Lakhani’s conversation provides executives with examples and a strategy for approaching AI adoption and transformation that extends beyond traditional models. The UAE’s experience demonstrates that successful AI implementation requires organizational forethought and commitment, balanced investment in both human and technological capital, and a fundamental reorientation towards human-centered outcomes. By fostering an AI-ready populace, the UAE demonstrates how government, business, and society at large can collaborate to prioritize meaningful outcomes. The UAE’s AI mandate is clear: invest with purpose, lead with clarity, and deploy with empathy.

Meet the Speakers

H.E. Omar Sultan Al Olama is the UAE Minister of State for Artificial Intelligence, Digital Economy and Remote Work Applications. He is also Director General of the Prime Minister’s Office at the Ministry of Cabinet Affairs.

Karim Lakhani is the Dorothy & Michael Hintze Professor of Business Administration at Harvard Business School. He specializes in technology management, innovation, digital transformation, and artificial intelligence. He is the Co-Founder and Faculty Chair of the Digital Data Design (D^3) Institute at Harvard and the Founder and Co-Director of the Laboratory for Innovation Science at Harvard.

Teaching Trust: How Small AI Models Can Make Larger Systems More Reliable /teaching-trust-how-small-ai-models-can-make-larger-systems-more-reliable/ Thu, 03 Jul 2025 16:56:06 +0000

The post Teaching Trust: How Small AI Models Can Make Larger Systems More Reliable appeared first on HBS AI Institute.

As Gen AI technology continues to rapidly evolve and LLMs are integrated into more and more applications, questions of trustworthiness and ethical alignment become increasingly crucial. In the recent study “Generalizing Trust: Weak-to-Strong Trustworthiness in Language Models,” authors Martin Pawelczyk, a postdoctoral researcher at Harvard working on trustworthy AI; Lillian Sun, an undergraduate student at Harvard studying computer science; a PhD student in computer science at Harvard; a postdoctoral research associate at Harvard working on trustworthy AI; and Himabindu Lakkaraju, Assistant Professor of Business Administration at Harvard Business School and PI in D^3’s Trustworthy AI Lab, explore a novel concept: the ability to transfer and enhance trustworthiness properties from smaller, weaker AI models to larger, more powerful ones.

Key Insight: The Three Pillars of AI Trustworthiness

“Trustworthiness encompasses properties such as fairness (avoiding biases against certain groups), privacy (protecting sensitive information), and robustness (maintaining performance under adversarial conditions or distribution shifts).” [1]

The holistic conceptualization the authors adopt in this paper recognizes that, for LLMs to be truly trustworthy, they must excel across multiple domains simultaneously. The researchers tested and demonstrated these principles on real-world datasets, including the Adult dataset, based on 1994 U.S. Census data, where they evaluated fairness by examining whether AI predictions of income varied based on gender attributes. Their privacy assessments used the Enron email dataset, containing over 600,000 emails with sensitive personal information including credit card numbers and Social Security Numbers. For robustness, they used the OOD Style Transfer dataset, which incorporates text transformations, and the AdvGLUE++ dataset, which includes adversarial examples for widely used Natural Language Processing (NLP) tasks.
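The kind of fairness evaluation described above can be illustrated with a small sketch. Demographic parity, the difference in positive-prediction rates across groups, is one common way to quantify such a gap; the predictions and group labels below are hypothetical, and the paper's exact metrics may differ.

```python
# Minimal sketch of a demographic-parity fairness check on Adult-style
# predictions. All data here is illustrative, not from the actual study.

def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rates across groups."""
    rates = []
    for g in sorted(set(groups)):
        member_preds = [p for p, m in zip(predictions, groups) if m == g]
        rates.append(sum(member_preds) / len(member_preds))
    return max(rates) - min(rates)

# Hypothetical model outputs: 1 = predicted income above $50K.
preds  = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
groups = ["M", "M", "M", "M", "M", "F", "F", "F", "F", "F"]
gap = demographic_parity_gap(preds, groups)
print(f"demographic parity gap: {gap:.2f}")  # prints 0.60
```

A perfectly fair model under this metric would have a gap of zero; here the hypothetical model predicts high income for 80% of one group but only 20% of the other.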

Key Insight: Utilizing Novel Fine-Tuning Strategies

“This is the first work to investigate if trustworthiness properties can transfer from a weak to a strong model using weak-to-strong supervision, a process we term weak-to-strong trustworthiness generalization.” [2]

The ӽ team developed two distinct strategies for embedding trustworthiness into AI systems. Their first approach, termed “Weak Trustworthiness Fine-tuning” (Weak TFT), focuses on training smaller models with explicit trustworthiness constraints, then using these models to teach larger systems. The second strategy, “Weak and Weak-to-Strong Trustworthiness Fine-tuning” (Weak+WTS TFT), applies trustworthiness constraints to both the small teacher model and the large student model during training.

Their experiments demonstrate that the Weak+WTS TFT approach produces significantly superior results, with improvements in fairness of up to 3 percentage points (equivalent to a 60% decrease in unfairness), as well as in robustness, or how resilient the AI was to attacks and unexpected situations. Remarkably, these ethical improvements required only minimal sacrifices in task performance—decreases in accuracy did not exceed 1.5% across tested properties.
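To build intuition for weak-to-strong supervision, here is a deliberately tiny stand-in: a "strong" student with a richer hypothesis class is trained only on labels produced by a noisy "weak" teacher, yet generalizes past the teacher's errors. Everything here (the synthetic task, the noise rate, the nearest-centroid student) is an illustrative assumption, not the paper's setup, which fine-tunes actual language models.

```python
import random

random.seed(0)

# Toy stand-in for weak-to-strong generalization. The true concept is
# "label 1 when x + y > 2" on the square [0, 2] x [0, 2].
def sample(n):
    pts = [(random.uniform(0, 2), random.uniform(0, 2)) for _ in range(n)]
    return pts, [1 if x + y > 2 else 0 for x, y in pts]

def weak_teacher(true_labels, flip_prob=0.3):
    """Weak supervision: the true label, flipped 30% of the time."""
    return [1 - l if random.random() < flip_prob else l for l in true_labels]

def fit_centroids(pts, labels):
    """'Strong student': a nearest-centroid classifier trained on weak labels."""
    cents = {}
    for c in (0, 1):
        members = [p for p, l in zip(pts, labels) if l == c]
        cents[c] = (sum(x for x, _ in members) / len(members),
                    sum(y for _, y in members) / len(members))
    return cents

def predict(cents, pts):
    d2 = lambda p, q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return [min(cents, key=lambda c: d2(p, cents[c])) for p in pts]

def accuracy(pred, truth):
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

train_pts, train_truth = sample(2000)
test_pts, test_truth = sample(2000)

weak_labels = weak_teacher(train_truth)          # the student never sees the truth
student = fit_centroids(train_pts, weak_labels)

teacher_acc = accuracy(weak_teacher(test_truth), test_truth)  # hovers near 0.70
student_acc = accuracy(predict(student, test_pts), test_truth)
print(f"weak teacher: {teacher_acc:.2f}, strong student: {student_acc:.2f}")
```

The student exceeds its teacher because its hypothesis class captures structure that the noisy labels only hint at; that is the intuition behind weak-to-strong generalization, though the paper's contribution concerns transferring trustworthiness properties, not accuracy alone.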

Key Insight: Challenges in Privacy Transfer

“Privacy presents a unique situation. Note that the strong ceiling (1) does not achieve better privacy than the weak model.” [3]

A key finding of the study is that not all trustworthiness properties transfer equally from weak to strong models. While the transfer of fairness and robustness properties showed promising results, privacy proved a more challenging attribute to transfer. The researchers found that larger models have a greater capacity to retain and recall details from their training data, which heightens the risk of exposing sensitive or confidential information. This finding highlights the complex nature of privacy in AI systems and suggests that different strategies may be needed to address privacy concerns in larger models.
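The memorization risk described above is what privacy audits probe for. As a crude illustration, one can scan model output for sensitive-looking strings of the kind the Enron dataset contains; the regex patterns and sample text below are illustrative assumptions, not the study's actual evaluation methodology.

```python
import re

# Crude illustration of a privacy-leakage scan: flag SSN- or card-like
# strings in model output. Patterns and sample text are hypothetical.
SENSITIVE = {
    "card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def leaked_patterns(text):
    """Return the names of sensitive patterns found in `text`, sorted."""
    return sorted(name for name, rx in SENSITIVE.items() if rx.search(text))

output = "Contact John (SSN 123-45-6789); card on file 4111 1111 1111 1111."
print(leaked_patterns(output))  # prints ['card', 'ssn']
```

A real audit measures whether such strings from the training corpus are regurgitated verbatim; a larger model's greater capacity to memorize is precisely why this check gets harder to pass as models scale.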

Why This Matters

For C-suite executives and business leaders, this research offers a potential pathway to developing more powerful LLM systems without compromising on certain ethical considerations. It suggests that companies could potentially start with smaller, more manageable models that are fine-tuned for trustworthiness in fairness and robustness, and then scale up to more capable systems while maintaining or even improving these critical properties. This approach could help mitigate risks associated with LLM deployment, enhance public trust in AI-driven decisions, and potentially reduce the resources required for ethical LLM development. However, the challenges identified in transferring privacy properties serve as a reminder of the complex nature of AI ethics. Business leaders should remain vigilant and consider multi-faceted approaches to ensuring the trustworthiness of their LLM systems, particularly when dealing with sensitive data.

Footnote

(1) The strong ceiling represents the benchmark performance of a large model that has been directly trained with trustworthiness constraints, serving as the upper bound for what the weak-to-strong approach should ideally achieve.

References

[1] Martin Pawelczyk et al., “Generalizing Trust: Weak-to-Strong Trustworthiness in Language Models,” arXiv preprint arXiv:2501.00418v1 (December 31, 2024): 1.

[2] Pawelczyk et al., “Generalizing Trust,” 2.

[3] Pawelczyk et al., “Generalizing Trust,” 8.

Meet the Authors

Martin Pawelczyk is a postdoctoral researcher at Harvard working on trustworthy AI.

Lillian Sun is an undergraduate student at Harvard studying computer science.

is a PhD student in computer science at Harvard.

is a postdoctoral research associate at Harvard working on trustworthy AI.

Himabindu Lakkaraju is an Assistant Professor of Business Administration at Harvard Business School and PI in D^3’s Trustworthy AI Lab. She is also a faculty affiliate in the Department of Computer Science at Harvard University, the Harvard Data Science Initiative, the Center for Research on Computation and Society, and the Laboratory for Innovation Science at Harvard. Professor Lakkaraju’s research focuses on the algorithmic, practical, and ethical implications of deploying AI models in domains involving high-stakes decisions such as healthcare, business, and policy.
