Industry 4.0 | Harvard Business School AI Institute
The Harvard Business School AI Institute catalyzes new knowledge to invent a better future by solving ambitious challenges.

The Hidden Economics of Workplace AI (April 14, 2026)

The post The Hidden Economics of Workplace AI appeared first on Harvard Business School AI Institute.

As AI learns directly from how people work, a new tension is emerging about expertise, power, and governance.

In many workplaces, the newest addition to virtual meetings isn't a colleague but an AI assistant like Granola or Otter. Suddenly no one has to scramble for action items or wonder who said what. The tool fades into the background while work gets a little smoother. And somewhere downstream, the precise record of how capable people think through a problem, handle a difficult client, or navigate a complex negotiation becomes raw material for an AI model. The convenience is real, and the implications are enormous. The new working paper "Labor as Capital," co-written by Harvard Business School AI Institute Associate Zoë Cullen, confronts this dynamic head-on. What happens when workers realize that their work habits, insights, and creativity are training the systems that could replace them? Combining survey evidence, a randomized experiment, and formal economic theory, the authors show that when workers understand that the information they give to AI may strengthen the organization's hand later, they may change how much they share.

Key Insight: The Surveillance Economy

"[W]ork increasingly generates data about work: records of how exactly people do their jobs." [1]

The working paper argues that workplace surveillance has created a new pipeline from labor to AI: the data produced while employees do their jobs can now be used to train systems that replicate or automate their expertise. This matters because workers report holding large amounts of valuable "uncodified" knowledge, like tacit know-how, judgment, communication skills, and context-specific understanding of customers, projects, and processes, that is not fully captured in manuals or company-wide wikis (think Atlassian Confluence). The authors find that workers believe they have substantial control over how much of their knowledge becomes visible to employers, whether by documenting more carefully, communicating on or off monitored channels (think Slack), or altering their participation in surveilled workflows. As workers become aware that this information may be used to build AI that performs similar work, they may become less willing to share it.

Key Insight: When Workers Find Out, They Pull Back

"[W]orkers who are more aware of how their data may be used choose to forgo meaningful payments for both their past and future data." [2]

In a study with 971 participants drawn from the online survey platform Prolific, the authors randomly showed half the group a video explaining that AI systems can be trained on records of worker behavior, including their own survey responses. The other half watched an equivalent video that said nothing about data being used for AI training. The effect was stark. Among workers who saw the AI-training explanation, 41% refused to share their past survey data in exchange for a $10 bonus, compared to just 25% in the control group. The same workers were also significantly less willing to participate in future surveys at their existing wage. These results drive the paper's theoretical argument. In the model, workers recognize that the knowledge they reveal today can improve the firm by helping create AI that substitutes for their expertise. Anticipating weaker future bargaining power, workers may withhold knowledge in the present. That withholding is individually rational but collectively costly: it reduces productivity and limits the quality of the AI systems firms can build. Under the current default, worker awareness does not slow adoption simply because people dislike AI; it slows adoption because workers have reason to protect themselves.
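For readers who want to gauge how stark the 41% vs. 25% gap really is, a back-of-the-envelope two-proportion z-test is enough. This is a minimal sketch, not the paper's own analysis: the even split of the 971 participants (486 treated, 485 control) and the rounded refusal counts are assumptions for illustration.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test with pooled variance.

    x1, x2: counts of refusals; n1, n2: group sizes.
    Returns the z statistic and its two-sided p-value under
    the normal approximation.
    """
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return z, p_value

# Assumed reconstruction: 41% of 486 treated vs. 25% of 485 control refused.
z, p = two_proportion_z(round(0.41 * 486), 486, round(0.25 * 485), 485)
print(f"z = {z:.2f}, p = {p:.1e}")
```

Under these assumed counts the z statistic lands above 5, far past any conventional significance threshold, which is consistent with the paper describing the effect as stark.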

Key Insight: A Fight Over Ownership and Governance

"[C]ollective bargaining over work data eliminates this externality and can achieve both efficient knowledge sharing and a more equitable division of the gains from AI." [3]

The paper highlights a gap between what workers prefer and what may best protect them. Workers in the survey favored individual ownership of work data, meaning the right to control and sell their own data for AI development. But because workers' knowledge supplies ("the recorded aspects of labor" [4] that could train an AI) can substitute for one another, each individual sale strengthens the firm's bargaining position against every other worker. Collective ownership resolves this. When workers bargain jointly and their knowledge supplies are bundled together, one worker's contribution no longer undermines another's position. The competition externality disappears. The broader implication is that workplace AI governance should be understood not just as a privacy issue, but as a labor-market and institutional design issue shaped by bargaining power, ownership rights, and collective labor arrangements.
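The externality logic can be made concrete with a deliberately toy calculation. Everything below is hypothetical: the substitutability parameter, the dollar figures, and the assumption that a collective bundle is priced at the sum of standalone values are illustrative choices, not quantities from the paper.

```python
# Toy illustration of the competition externality in work-data sales.
# All numbers are hypothetical; "substitutability" is an assumed parameter.

def marginal_value(standalone, n_already_sold, substitutability=0.6):
    """Firm's marginal value of one more worker's dataset.

    The marginal value falls geometrically as the firm already holds
    substitutable data from earlier individual sellers.
    """
    return standalone * (1 - substitutability) ** n_already_sold

STANDALONE = 100.0  # value of one worker's dataset when the firm holds none

# Individual sales: whoever sells second faces a firm that already holds
# a close substitute, so the second dataset fetches a much lower price.
first_sale = marginal_value(STANDALONE, 0)
second_sale = marginal_value(STANDALONE, 1)

# Collective sale: the bundle is priced as a whole, so no worker's sale
# undercuts another worker's bargaining position (assumed bundle pricing).
collective = 2 * STANDALONE

print(f"individual sales total: {first_sale + second_sale:.0f}")
print(f"collective sale total:  {collective:.0f}")
```

The gap between the two totals is the externality: each individual seller ignores the price erosion they impose on co-workers, which is exactly what joint bargaining internalizes.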

Why This Matters

For business leaders, this research surfaces a friction that most AI adoption strategies don't account for yet. The employees whose expertise you most need to encode could be precisely the ones most aware of what's at stake when they share it. As AI tools become more capable and more visible in the workplace, worker awareness will only rise, and so could strategic withholding. This creates a clear managerial implication: organizations can improve AI adoption not just by deploying better tools, but by discussing employee career concerns directly and giving people more meaningful control over how their work data is used. Firms that treat data governance as part of talent strategy and innovation design, rather than a legal checkbox, may be better positioned to unlock mutual benefit: stronger AI performance, higher productivity, and gains that are shared more broadly by the people helping to build the organization's future.

Bonus

This paper shows that resistance to workplace AI is not just a matter of fear or inertia; it can emerge whenever new systems redistribute knowledge, bargaining power, or control over how work gets done. For another example, where the friction appears closer to management, check out The Manager's AI Dilemma for a perspective on how AI can threaten the authority, discretion, and legitimacy of the very roles expected to approve and implement AI in the workplace.

References

[1] Cullen, Zoë, Danielle Li, and Shengwu Li, "Labor as Capital," Working Paper (March 30, 2026): 1.

[2] Cullen et al., "Labor as Capital," 16.

[3] Cullen et al., "Labor as Capital," 2.

[4] Cullen et al., "Labor as Capital," 1.

Meet the Authors

Zoë Cullen is Associate Professor of Business Administration at Harvard Business School and an Associate at the HBS AI Institute.

Danielle Li is the David Sarnoff Professor of Management of Technology at the MIT Sloan School of Management.

Shengwu Li is Professor of Economics at Harvard University.

Watch a video version of the Insight Article here.

80 Apps in One Afternoon: What the Frontier Firm Initiative Is Already Building (March 30, 2026)

The post 80 Apps in One Afternoon: What the Frontier Firm Initiative Is Already Building appeared first on Harvard Business School AI Institute.

The Frontier Firm AI Initiative was designed around a simple conviction: the most important questions about AI in business can’t be answered in theory.

On Wednesday, March 11, 2026, senior leaders from across the Frontier Firm AI Initiative came together at Harvard Business School for Journey to the Frontier, an event that brought roughly 100 executives into direct conversation with research shaping the future of their organizations. The Frontier Firm AI Initiative, a collaboration between the Harvard Business School AI Institute and Microsoft, brings together companies like Barclays, DuPont, EY, Mastercard, and Nestlé around a shared commitment: not just to adopt AI, but to study the transformation rigorously and share what they learn.

The final session of the day, led by Dr. Rembrand Koning, the Mary V. and Mark A. Stevens Associate Professor of Business Administration at HBS and co-director of the Tech for All Lab at the HBS AI Institute, invited participants to do something that might seem surprising at an executive-level event: actually build something. By the end of the session, more than 80 working, no-code software applications had taken shape, each addressing a real challenge participants face in their daily work.

Key Insight: The Barrier to Building Has Disappeared

"The gap between the firms leading AI transformation and everyone else is not closing. It is accelerating." – Rem Koning

For most of business history, turning an idea into working software meant assembling a team, securing a budget, and waiting. That friction meant most ideas never got off the ground. This session challenged that entirely. Using Lovable, a natural-language app-building platform, participants took the problems they knew best, the ones sitting on their desk every morning, and built tools to address them. One executive created an application that pulls together emails, calendar, and documents each morning and surfaces the handful of decisions that need attention that day. Another built a platform to automate the compliance tracking of fraud claims, the kind of operational infrastructure that would normally require months of engineering work and a dedicated team. In both cases, the gap between having an idea and having a working tool collapsed to an afternoon.

Key Insight: Expertise Is Now the Differentiator

"AI does not replace judgment. It multiplies it. The executives who get the most out of AI are the ones who bring the deepest knowledge of their business to the table." – Rem Koning

What became clear across the room was that technical confidence wasn’t what separated the most powerful applications from the rest. It was institutional knowledge. The executives who built the most compelling tools were the ones who understood their problem best, because they had been living with it for years.

One participant built a tool that reframes how their sales teams approach client conversations, shifting the focus from narrow questions about AI deployment toward the more valuable question of how work itself should be redesigned. A leader started building an operations hub to bring their contractors, finances, and the entire pipeline into one place, a single tool to replace all the scattered spreadsheets eating up hours of their week. Another leader from financial services began prototyping an internal tool to help their team evaluate AI projects against a governance framework, something that had previously existed only in manual, time-consuming processes. In each case, what made the tool work was not the technology. It was what the person building it already knew.

Why This Matters

For the organizations in the Frontier Firm cohort, this was just one day in a longer journey. For the broader business world, it was a reminder that the window to lead on AI will not stay open forever. AI is no longer something companies adopt from the outside. It is becoming the foundation on which strategy, operations, and decision-making are built. The leaders who understand this will not just have better tools; they will have built something that is genuinely hard to replicate. The competitive advantage of the next decade will not belong to organizations with the best AI strategy on paper. It will belong to the ones with leaders who know how to build.

Meet the Speaker

Rembrand M. Koning is the Mary V. and Mark A. Stevens Associate Professor of Business Administration in the Entrepreneurial Management Unit at Harvard Business School. He researches and teaches entrepreneurship, exploring how AI is transforming organizations across the globe, from microenterprises in emerging markets to global enterprises. He is co-director and co-founder of the Tech for All Lab at the HBS AI Institute, and a pioneer in the use of field experiments to study entrepreneurial strategy and innovation.

Watch a video version of the Insight Article here.

HBS AI Institute Associates Spotlight Series: Elie Ofek and Julian De Freitas (February 23, 2026)

The post HBS AI Institute Associates Spotlight Series: Elie Ofek and Julian De Freitas appeared first on Harvard Business School AI Institute.

This series introduces Harvard Business School AI Institute Associates Program projects, which aim to answer important questions at the intersection of artificial intelligence and digital technologies in business and society.

This article shares insights from Elie Ofek, Malcolm P. McNair Professor of Marketing at Harvard Business School, and Julian De Freitas, Assistant Professor of Business Administration at Harvard Business School, who are pursuing research on the topics of artificial intelligence and organizations.

1. What drew you to this area of research and how did you first become involved in this work?

Our involvement began through conversations with marketing managers at the Harvard Art Museums, who were exploring innovative ways to bring their collections to life and were open to using new technologies. This collaboration sparked the idea of using AI avatars to animate historical portraits and measure their impact in real-world settings.

We were naturally drawn to this idea by our interest in how emerging AI technologies, especially generative and interactive AI, can transform the way organizations engage with people. While personalization and automation have been studied, there is little empirical work on whether and how embodied, humanlike AI avatars, a new technology on the cutting edge of AI development, can foster more engaging interactions between consumers and brands.

2. What are some common misconceptions or barriers around the problem you're working to solve?

A limited way to think about AI is that it is primarily a tool for efficiency or information delivery, essentially a faster, cheaper way to push content. Yet this perspective does not consider how people naturally perceive these systems. And while the use of generative AI for companionship is beginning to be appreciated, many managers underappreciate the technology's potential for two-way, emotionally rich interaction that can be relevant for their business or organization. Our research directly explores this potential source of business value in a field setting.

A barrier is skepticism: organizations worry about handing over control to a system that responds autonomously on their behalf, in case the AI's behavior feels too artificial, inauthentic to the brand, or misaligned with its values. Thus, in implementing our research project, we have also needed to take care to develop safeguards, such as content moderation, tone calibration, and strict alignment with institutional values, to ensure that we are protecting the brands of our field partners. Another source of skepticism is whether other modalities, such as a static image or a video that provide only one-way information, are sufficient or even superior to letting consumers interact with a brand, particularly when the agent is an AI avatar. Our research aims to examine this very issue and tease out the effect of different modalities.

3. What research is being done on this topic and how is your approach or perspective unique?

There is emerging research on personalization in marketing, human-computer interaction, and parasocial relationships, but most studies are conceptual or run in highly controlled lab settings, and do not focus on the potential of embodied, interactive AI avatars per se. Our approach is unique in three ways:

• Real-world field experiments: we are testing AI avatars in live campaigns with the Harvard Art Museums, measuring actual engagement, conversion, and visit behavior.

• Systematic variation in interactivity: we directly compare static images, one-way scripted avatars, and two-way conversational avatars to isolate the effects of interactivity.

• Psychological mechanisms: we examine the role of psychological processes in our effects, such as perceived intimacy, relational intent, and status dynamics, bringing a social-psychological lens to AI design in marketing and cultural engagement.

4. What excites you most about this work and its potential impact?

We are most excited about the possibility of moving digital engagement from transactional clicks to meaningful, human-centered interactions. If successful, this work could provide a blueprint for cultural institutions, brands, and nonprofits to create experiences that deepen connection, inspire action, and expand access. For instance, the idea that a historical portrait in a museum could "talk" with a visitor, and that this could spark curiosity, learning, and even a museum visit, is both academically fascinating and socially impactful, suggesting a fundamentally new way to engage new generations while sustaining important cultural institutions.

5. How do you hope working with D^3 will amplify the impact of your work?

The essential infusion of funds from D^3 is allowing us to turbocharge this challenging research program, reaching insights that otherwise would not have been possible or may have come too late to meaningfully inform managerial practice. Through its network, resources, and convening power, D^3 then offers a unique platform to translate our insights into practice, as well as spark new collaborations that could scale these insights well beyond our initial museum partnership. [1]

6. What changes do you hope to see in your field as a result of the work being done in this area?

We hope to see a shift from viewing AI as a cost-saving novelty toward seeing it as a relational tool, one that can extend human-brand connections beyond what has previously been possible. We hope the findings will suggest a new vision of brand engagement, in which AI-powered brands listen, adapt, and co-create meaning with customers rather than relying solely on one-way modalities. Furthermore, we believe these findings can inform managerial models of how AI integration efforts should merge with customer relationship and brand management efforts, an area that is sorely in need of empirically informed conceptual frameworks.

Finally, we hope to see more industry-academic partnerships conducting field-based, ecologically valid studies that address these questions, while still being mindful of how to do so in a sustainable manner that protects long-term customer and brand assets, since many companies' AI efforts have been failing and there is some apprehension among brands.

7. What's an essential area in which AI and digital technologies will reshape the way businesses or society operate in the long run that we may not be considering?

People treat today's highly capable chatbots far more like other human beings than like other (non-living) technologies. We believe this has profound and still underappreciated implications for how brands engage customers.

To provide just one example, one of us has found that users of so-called "AI companion" apps often say farewell before logging off rather than simply quitting the app. These apps leverage this moment by employing "emotionally manipulative tactics," like making the user feel guilty for leaving, in order to prevent users from logging off at this point. These tactics work, increasing the number of messages users send and how long they stay on the app beyond the point they said farewell (relative to when the app simply says goodbye in turn, without using emotional manipulation). Notice that such tactics increase engagement not through how they recommend content to users, but by capitalizing on our social and emotional instincts.

The Harvard Business School AI Institute Associates Program supports and accelerates faculty research into the ways AI and digital technologies are reshaping companies, organizations, society, and practice.

Notes

[1] The Digital Data Design Institute at Harvard (D^3) was renamed the Harvard Business School AI Institute in April 2026.

HBS AI Institute Associates Spotlight Series: Dr. Livia Alfonsi (February 23, 2026)

The post HBS AI Institute Associates Spotlight Series: Dr. Livia Alfonsi appeared first on Harvard Business School AI Institute.

This series introduces Harvard Business School AI Institute Associates Program projects, which aim to answer important questions at the intersection of artificial intelligence and digital technologies in business and society.

This article shares insights from Dr. Livia Alfonsi, Assistant Professor of Business Administration at Harvard Business School, whose research studies labor markets and the transition from school to work, with a focus on how to help young workers find strong job matches and build early-career trajectories.

1. What drew you to this area of research and how did you first become involved in this work?

I spend most of my time studying labor markets in the Global South, particularly in Sub-Saharan Africa and South Asia, where historically large cohorts of young adults are entering the labor market with higher education and higher expectations than previous generations. Over the next decade, more than a billion young people will reach working age in developing countries. Yet in many settings, job creation has not kept pace, and competition for stable formal work is intense.

In these environments, the early years of a career can be fragile. When early job search efforts lead to low pay, unstable work, or repeated rejection, young workers can become discouraged and pull back from active search. Some drift into casual work, subsistence activities, or inactivity. That discouragement is not just a personal experience. It can translate into underutilized human capital at scale, even when education and training investments have risen dramatically.  

In prior research, I studied mentorship interventions that can help young jobseekers persist through uncertainty, recalibrate expectations, and navigate early setbacks. Those programs can be powerful complements to education and training. At the same time, job search is increasingly mediated through digital platforms, especially for younger cohorts. This led me to a new question: can conversational AI be designed to provide some of the continuity, encouragement, and practical guidance that supports persistence, complementing human sources of support when they are available, and extending access to that kind of guidance when they are not? Partnering with Rozee.pk, Pakistan's largest job platform, and its AI career buddy, Rozeena, we are testing how conversational tone, message content, and memory callback features shape engagement and job search behavior at scale, in a setting where an AI mentor can support users across the process, from identifying opportunities and understanding the labor market, to practical steps like CV building and interview preparation, and to encouragement and follow-through during setbacks.

2. What are some common misconceptions or barriers around the problem you're working to solve?

One common misconception is that job search is mainly an information problem. If young people just had better data about wages, labor demand, or application strategies, outcomes would improve. Another is that networks are the whole story. Networks do matter a lot, but their value goes beyond access to job leads. What's often missing is sustained support: someone to normalize setbacks, provide encouragement, and model what persistence looks like when early efforts don't pay off immediately. This matters because the challenge is rarely about a single decision. It's about maintaining momentum over weeks or months when feedback is scarce, rejections are common, and progress is hard to see. In reality, young workers face a twofold gap. First, they lack practical guidance about which steps are most effective at different stages of job search or career growth. Second, they struggle to sustain motivation and adapt strategy over time, especially when early efforts lead to silence or rejection. Both problems need to be addressed together. Information alone doesn't help if you're too discouraged to act on it. And encouragement rings hollow if it's not paired with actionable guidance. That's why studying how support is delivered, including tone, timing, and continuity, is just as important as studying what information is shared.

3. What research is being done on this topic and how is your approach or perspective unique?

There’s growing research on AI in hiring and labor markets, often focused on screening, matching efficiency, or bias. There’s also a rich body of behavioral and psychological research on discouragement, belief formation, and how people respond to feedback during job search. But we still know very little about how to leverage AI systems that people actually interact with, including conversational tone, memory, and continuity, to foster persistence and improve decisions over time. 

Our approach is distinctive in several ways. First, we're studying these questions at scale, in a real labor market. Working with Pakistan's largest job platform, we run randomized tests, embedded directly in the product experience, that vary how guidance is delivered and what the system remembers from past interactions. Second, we can link conversational interactions to extraordinarily rich administrative data: two decades of platform history, including job postings, applications, hires, and wage trajectories, alongside AI-era conversational logs. This lets us study not just immediate responses, like whether someone clicks on a job ad, but also longer-term shifts in search behavior, match quality, and labor market outcomes. Third, we're testing communication strategies in addition to information provision, studying whether empathic framing or motivational language helps workers engage with advice, feel understood, and stay active through setbacks. Finally, we see this project as generating evidence that travels beyond AI. By treating conversational AI as a disciplined testbed, we can identify which communication strategies help young workers persist, whether the guidance comes from an AI agent, a mentor, or a career counselor. Those lessons can inform how support providers design interventions that are more credible, more motivating, and more effective.

4. What excites you most about this work and its potential impact?

What excites me most is the possibility of designing digital systems that reinforce agency rather than undermine it. Job search already feels opaque and discouraging to many young workers. There’s a real risk that technology makes this worse: more automated rejections, less human feedback, even less sense of progress. But conversational AI, designed thoughtfully, offers something different: personalization at scale. In many low- and middle-income settings, access to career guidance is uneven and formal support systems are limited. A tool that reaches people through WhatsApp, in local languages, with low friction, can meet workers where they already are. WhatsApp is part of daily life for billions of people, which means this model can, in principle, deliver guidance at a scale that traditional programs never could. The question is whether the design choices we make (empathic language versus neutral facts, recalling past conversations versus treating each interaction as new, proactive follow-up versus waiting for users to return) actually matter for outcomes. If they do, it means platform designers have real leverage to shape not just match efficiency, but worker persistence, confidence, and long-term trajectories. 

5. How do you hope working with D^3 will amplify the impact of your work?

I’m grateful for D^3’s support, and I’m especially excited about joining a community that’s thinking rigorously about how AI and data are reshaping organizations and markets. What I value most about this collaboration is the chance to have a structured space to share early findings, stress-test interpretations, and learn from others tackling similar challenges across different domains. It will help ensure this project generates insights that are rigorous, actionable, and useful beyond this single context. That kind of feedback is especially helpful for a project like mine, which sits at the intersection of labor economics, behavioral science, and AI product design, and it depends on close, iterative collaboration with an industry partner, Rozee.pk. D^3’s ecosystem is invaluable because I can learn from parallel efforts across domains. The structured feedback loops, workshops, and cross-disciplinary conversations D^3 enables are especially helpful for a project that’s fundamentally about translating research insights into better platform design. [1]

6. What changes do you hope to see in your field as a result of the work being done in this area?

I hope we move beyond thinking of digital labor platforms as static information boards and instead treat them as systems that shape how people persist, learn, and decide over time. In practice, that means taking seriously that the delivery of guidance, including tone, timing, continuity, and what the system recalls from prior interactions, can influence whether workers stay engaged in the labor market or withdraw after setbacks.

More concretely, I hope the field develops evidence-based principles for how to communicate difficult but important messages in a way that keeps workers moving forward. Many labor markets are changing quickly. Some career paths are becoming flatter, some skills are depreciating faster, and many workers will need to reskill or adjust expectations. A central challenge is not only identifying these shifts, but communicating them in a way that preserves motivation and agency. For example, how do we provide realistic feedback about prospects while still helping people take the next constructive step, whether that step is adjusting search strategy, pursuing training, or pivoting to a nearby occupation?

Finally, I hope research in this area broadens the set of outcomes and design questions we study. In addition to questions about matching, efficiency, and fairness, we should ask how AI systems shape motivation, expectations, and follow-through, especially for young adults navigating uncertainty and groups that face barriers to opportunity. If AI is going to play a role in career guidance, we cannot forget the humanity of the user. And the most effective support may not look the same for everyone. It may need to adapt to different personalities, circumstances, and moments, offering encouragement when confidence is low, structure when someone feels stuck, and practical feedback when someone is ready to take action.

7. What's an essential area in which AI and digital technologies will reshape the way businesses or society operate in the long run that we may not be considering?

One underappreciated shift is that AI is becoming part of the advice and feedback ecosystem that shapes how people make high-stakes decisions, especially career decisions. Over the years, we've moved from seeking guidance primarily from other people to relying on search engines or online communities. Conversational AI is becoming the next default: the place people turn after a setback, when they feel stuck, or when they're deciding whether to persist, pivot, or invest in new skills. As AI becomes embedded in job platforms, workplace tools, and general-purpose assistants, it will fundamentally influence how people interpret feedback and decide what to do next. Career guidance (once delivered sporadically by mentors or counselors) will increasingly be mediated by systems that respond instantly, repeatedly, and at very low cost.

That's fundamentally different from human advice, and we're only beginning to understand the implications. This raises important questions: What kinds of advice build agency rather than dependency? How do we design systems that can communicate difficult truths without discouraging users? Designing AI that is not only capable, but responsible, trustworthy, and genuinely supportive in these high-stakes contexts is an essential frontier. It's about whether we can design systems that help people navigate uncertainty with more confidence, make better-informed decisions, and build stronger long-term trajectories, particularly for workers who lack access to traditional support networks. That's a design challenge with enormous implications for equity and inclusion in labor markets worldwide.

The Harvard Business School AI Institute Associates Program supports and accelerates faculty research into the ways AI and digital technologies are reshaping companies, organizations, society, and practice.

Notes

[1] The Digital Data Design Institute at Harvard (D^3) was renamed the Harvard Business School AI Institute in April 2026.

The post HBS AI Institute Associates Spotlight Series: Dr. Livia Alfonsi appeared first on Harvard Business School AI Institute.

]]>
HBS AI Institute Associates Spotlight Series: Alex Chan /d3-associates-spotlight-series-alex-chan/ Mon, 23 Feb 2026 16:04:00 +0000 /?p=29504 This series introduces Harvard Business School AI Institute Associates Program projects which aim to answer important questions at the intersection of artificial intelligence and digital technologies in business and society. This article shares insights from Alex Chan, Assistant Professor of Business Administration in the Negotiation, Organizations & Markets Unit at Harvard Business School who is […]

The post HBS AI Institute Associates Spotlight Series: Alex Chan appeared first on Harvard Business School AI Institute.

]]>
This series introduces Harvard Business School AI Institute Associates Program projects which aim to answer important questions at the intersection of artificial intelligence and digital technologies in business and society.

This article shares insights from Alex Chan, Assistant Professor of Business Administration in the Negotiation, Organizations & Markets Unit at Harvard Business School, who is pursuing research on the topics of artificial intelligence and organizations.

1. What drew you to this area of research and how did you first become involved in this work?

My background spans both technology and healthcare, which naturally pulled me toward the "engineering" side of economics: market design. I became fascinated by how small changes in market rules, incentives, or even information presentation can meaningfully shape human behavior, sometimes with life-or-death consequences in settings like healthcare and organ allocation.

My interest in AI grew from two directions. On the research side, I worked early on questions around deep learning's ability to extract patient preferences and clinically relevant signals from unstructured data like clinical notes. On the applied side, my time in industry deploying AI-enabled healthcare products made the promise, and the risk, very concrete: technology can match expert performance and save enormous amounts of time, but it also changes how people make decisions and how accountability is assigned. That combination convinced me that one of the next major market design challenges is not just building better AI systems, but integrating AI into human decision-making environments in ways that are robust, incentive-compatible, and ultimately welfare-improving, especially as we think ahead to more advanced systems.

2. What are some common misconceptions or barriers around the problem you're working to solve?

A major misconception is that "more information" automatically leads to better decisions. In the context of Explainable AI (XAI), for instance, many people assume that if you provide an explanation, decision-makers will naturally use it to make fairer, better choices. But in practice, transparency can create strategic discomfort: explanations can reveal biases, conflicts of interest, or decision rules that stakeholders would rather not surface, especially when there are financial incentives, reputational concerns, or legal exposure at stake.

One barrier, then, is that people may strategically prefer "black-box" systems, not because they love opacity, but because opacity can protect them from scrutiny or responsibility. Another barrier is that we often forecast AI's societal impact by linearly extrapolating from recent waves of automation. That framing can miss how AI will reshape how preferences are expressed, how trust is formed, and how institutions evolve when cognition, forecasting, and persuasion become more scalable and more delegated to machines.

3. What research is being done on this topic and how is your approach or perspective unique?

A lot of the current research rightly focuses on the technical "how-to" of AI: building more accurate models, improving interpretability methods, and optimizing performance. My perspective is complementary: I treat AI as a participant in a market or organization rather than simply a tool. That means I focus on how AI systems interact with incentives, power, accountability, and human behavior, often in ways that aren't visible if we only measure technical accuracy.

For example, in my working paper "Preference for Explanations: The Case of XAI," I don't just ask whether an AI can explain itself; I ask whether people actually want explanations when real incentives and tradeoffs are present. Using incentivized experiments with real financial stakes helps reveal when transparency is demanded, when it's avoided, and why.

More broadly, by combining market design and behavioral economics, I can study how AI decision-support, monitoring, or recommendation systems interact with factors like gender, race, hierarchy, and institutional constraints: dimensions that pure computer science approaches often treat as "downstream" but that frequently determine real-world outcomes. Market design also pushes us to analyze markets that don't fully exist yet, which is increasingly important as AI changes what it even means to "participate" in a market.

4. What excites you most about this work and its potential impact?

What excites me most is the possibility of moving beyond the idea that AI progress is mainly about better prediction, and toward the idea that progress is about better systems. If we design incentives and institutions well, AI can reduce cognitive overload, improve access to expertise, and make high-stakes decisions more consistent and less arbitrary. In healthcare, that can translate into better triage, more equitable access, reduced clinician burnout, and ultimately better patient outcomes.

At the same time, I'm excited by the intellectual challenge: AI changes the "rules of the game" in markets and organizations. We now have decision-makers who can delegate judgment to models, organizations that can scale monitoring and evaluation, and environments where explanations can be demanded, ignored, weaponized, or strategically suppressed. Understanding those dynamics, and designing mechanisms that make good outcomes more likely, feels both urgent and deeply consequential.

5. How do you hope working with D^3 will amplify the impact of your work?

D^3 is an ideal home for this kind of research because it brings together technologists, economists, organizational scholars, and practitioners who are grappling with the same reality from different angles. I see D^3 as a "translation layer" between theory and deployment: a place where questions about incentives, governance, and real-world adoption can be stress-tested against how organizations actually operate.

I also hope D^3 will amplify impact through its convening power and practitioner ecosystem, helping connect research insights to real institutional design decisions, from product development and auditing to policy, procurement, and organizational governance. When the goal is not just to understand AI, but to shape how it's used responsibly and effectively, that cross-disciplinary and real-world engagement is invaluable. [1]

6. What changes do you hope to see in your field as a result of the work being done in this area?

I hope to see market design become a central lens for thinking about AI, including advanced systems that may begin to act more like autonomous agents in the economy. Rather than relying primarily on after-the-fact regulation or patchwork compliance, I want to see organizations design digital ecosystems from the ground up with incentives that support transparency, productivity, and fairness simultaneously.

In practical terms, that means shifting from "Can we build this model?" to "What behavior does this system produce once it's embedded in an institution with real incentives?" It also means building stronger evidence around what kinds of transparency and accountability mechanisms actually work, not just in principle, but in practice.

7. What鈥檚 an essential area in which AI and digital technologies will reshape the way businesses or society operate in the long run that we may not be considering?

One underappreciated shift is that AI won't just replace tasks; it will reshape the institutional infrastructure through which preferences, negotiations, and decisions happen. As personal AI agents become more common (agents that summarize options, negotiate on our behalf, filter information, and even execute transactions), markets may increasingly become "agent-to-agent." That changes what it means to have a preference, how trust is built, and how persuasion and manipulation operate at scale.

This raises foundational design questions:

  • How do we represent and protect human preferences when they're expressed through intermediating AI systems?
  • What new markets and norms emerge when AI can cheaply generate convincing arguments, tailored messaging, or strategic explanations?
  • What does accountability look like when decisions are the output of human-AI teams, or of automated negotiations between agents?

In the long run, the big opportunity (and challenge) is designing the mechanisms (identity, provenance, incentives, auditing, governance) that make delegation to AI socially beneficial rather than destabilizing. That's where market design and institutional thinking become essential.

The Harvard Business School AI Institute Associates Program supports and accelerates faculty research into the ways AI and digital technologies are reshaping companies, organizations, society, and practice.

Notes

[1] The Digital Data Design Institute at Harvard (D^3) was renamed the Harvard Business School AI Institute in April 2026.

The post HBS AI Institute Associates Spotlight Series: Alex Chan appeared first on Harvard Business School AI Institute.

]]>
How AI Can Spot Your Next Billion-Dollar Idea /how-ai-can-spot-your-next-billion-dollar-idea/ Wed, 04 Feb 2026 13:08:23 +0000 /?p=29399 A new study shows how AI can influence the kind of innovation you end up funding. Many of us have started using AI as an "answer machine" to brainstorm ideas, analyze data, and pressure-test assumptions. You might even have a set of preferred prompts saved for just these purposes. But how often do you switch […]

The post How AI Can Spot Your Next Billion-Dollar Idea appeared first on Harvard Business School AI Institute.

]]>
A new study shows how AI can influence the kind of innovation you end up funding.

Many of us have started using AI as an "answer machine" to brainstorm ideas, analyze data, and pressure-test assumptions. You might even have a set of preferred prompts saved for just these purposes. But how often do you switch the order of the steps you give the AI, and what kind of influence does it have on the output? In "The Mean-Variance Innovation Tradeoff in AI-Augmented Evaluations," a team including the Harvard Business School AI Institute's Jacqueline N. Lane shows that when organizations integrate AI into multi-stage innovation evaluation processes, the sequence creates a powerful but largely invisible tradeoff. By understanding how to structure this dynamic, we can gain a crucial advantage when identifying ideas that could bring value to our portfolios and organizations.

Key Insight: Navigating Between Novelty and Feasibility

"Our focus on sequencing is grounded in the observation that evaluators naturally rely on criteria-sequencing, a heuristic involving the prioritization of alternative criteria at different evaluation stages." [1]

Evaluating innovation is a high-stakes balancing act between two competing forces: novelty and feasibility. You want solutions that depart from established approaches (novelty) and that can realistically be built and implemented (feasibility). But as the authors note, evaluators can't weigh everything simultaneously, so they prioritize one over the other at different stages of the process (criteria-sequencing): either novelty-then-feasibility or feasibility-then-novelty. These sequences lead to different results because the order acts as the initial filter. If a solution is eliminated in the first stage based on one criterion (e.g., feasibility), it is never evaluated on the second (e.g., novelty). Since evaluators apply these criteria in personalized ways, the order they use can lead to inconsistent decisions.
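The order-as-filter logic can be made concrete with a small sketch. This is an illustrative toy example, not code from the paper: the solution names, scores, and the 0.6 cutoff are all invented for demonstration, and real evaluations are far messier than a threshold rule.

```python
# Toy illustration of criteria-sequencing: the first-stage criterion acts
# as a filter, so the same pool can yield different finalists depending
# on which criterion is applied first. Scores and cutoff are hypothetical.
solutions = {
    "A": {"novelty": 0.9, "feasibility": 0.4},  # novel but hard to build
    "B": {"novelty": 0.5, "feasibility": 0.9},  # safe and easy to build
    "C": {"novelty": 0.8, "feasibility": 0.7},  # strong on both
}

def two_stage(pool, first, second, cutoff=0.6):
    """Stage 1: keep solutions passing `first`; Stage 2: rank survivors by `second`."""
    survivors = {name: s for name, s in pool.items() if s[first] >= cutoff}
    return sorted(survivors, key=lambda name: survivors[name][second], reverse=True)

# Feasibility-then-novelty screens out the atypical solution "A" up front...
print(two_stage(solutions, "feasibility", "novelty"))   # ['C', 'B']
# ...while novelty-then-feasibility lets "A" survive to the second stage.
print(two_stage(solutions, "novelty", "feasibility"))   # ['C', 'A']
```

Solution "A" is never judged on novelty under the first sequence because it was eliminated on feasibility, which is exactly why the choice of ordering shapes which kinds of ideas an evaluator ever sees.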

Key Insight: An AI Innovation Experiment

"AI recommendations operate much like 'spotlights on a stage': they illuminate certain aspects of a solution while leaving others in the dark, subtly structuring the order and weighting of the cues evaluators consider." [2]

To see how AI could structure these heuristics, the researchers partnered with the crowdsourcing platform Hackster.io for a field experiment involving 353 evaluators and 132 open-source solutions. They utilized two distinct types of AI: Predictive AI and Generative AI. Predictive AI, which excels at identifying patterns from past data, was used to provide feasibility recommendations based on technical benchmarks. Generative AI, capable of recombining knowledge in unconventional ways, provided novelty-focused recommendations. Both systems provided "Pass" or "Fail" recommendations with explanatory content, with half of the evaluators receiving feasibility-then-novelty sequencing, and the other half receiving novelty-then-feasibility. The researchers predicted that the criteria-sequencing would create what they call a mean-variance innovation tradeoff: feasibility-then-novelty would allow evaluators to take greater risks with fewer options, resulting in higher mean innovation, while novelty-then-feasibility would cast a wider initial net, surfacing atypical solutions and producing higher variance.

Key Insight: Tradeoffs in the Pursuit of Breakthroughs

"Overall, our experimental results provide compelling evidence of a mean-variance innovation tradeoff." [3]

The results supported the researchers' predictions, meaning the order of evaluation dictates the type of innovation an organization is likely to champion. The researchers also found in post hoc analysis that the AI's format played a role: compared to a static summary, an interactive chatbot increased innovation variance but led to a lower mean innovation rating. It appears that without a fixed, standardized summary to guide them, evaluators spent more time exploring diverse questions and ultimately relied more on their own judgment. As a result, evaluations became more complex, average quality declined, and the set of selected options became more diverse. This suggests that dynamic "thought partners" encourage more exploration and reliance on human judgment, while static AI recommendations act more as rigid guides.

Why This Matters

For business leaders and executives encouraging their employees to use AI-augmented workflows, this research fundamentally reframes the integration question. It's not just about whether to use AI, or even which tasks to automate versus augment; it's about recognizing that AI recommendations create structure that shapes human judgment in path-dependent ways. The sequence you choose could determine whether your organization builds a portfolio optimized for steady performance or one that swings for breakthrough innovation. The question isn't whether AI will influence your decisions; it's whether you'll deliberately design that influence or let it emerge accidentally from your initial prompt.

Bonus

For another look at how AI can shape outcomes by steering what kind of ideas people generate and select, check out "The Creative Edge: How Human-AI Collaboration is Reshaping Problem-Solving."

References

[1] Grumbach, Cyrille, Jacqueline N. Lane, and Georg von Krogh, "The Mean-Variance Innovation Tradeoff in AI-Augmented Evaluations," Harvard Business School Technology & Operations Mgt. Unit Working Paper No. 26-038 (2025): 1.

[2] Grumbach et al., "The Mean-Variance Innovation Tradeoff in AI-Augmented Evaluations," 2.

[3] Grumbach et al., "The Mean-Variance Innovation Tradeoff in AI-Augmented Evaluations," 33.

Meet the Authors

Cyrille Grumbach is a PhD Candidate and Research Associate at the Chair of Strategic Management and Innovation at ETH Zurich.


Jacqueline Ng Lane is Assistant Professor of Business Administration at HBS and co-Principal Investigator of the Laboratory for Innovation Science at Harvard (LISH) at the HBS AI Institute.

Georg von Krogh is a Professor at ETH Zurich and holds the Chair of Strategic Management and Innovation.

The post How AI Can Spot Your Next Billion-Dollar Idea appeared first on Harvard Business School AI Institute.

]]>
The HBS AI Institute and Microsoft Launch Accelerated AI Research Initiative /d3-and-microsoft-launch-accelerated-ai-research-initiative/ Mon, 17 Nov 2025 20:58:15 +0000 /?p=29052 Harvard Business School faculty, in collaboration with Microsoft and its clients, will study human-AI work, publish evidence-based blueprints, and deliver custom workshops for executives to rapidly reinvent global businesses as Frontier Firms; Eli Lilly and Company, EY, Lumen Technologies, and Nestlé among 14 organizations in the inaugural cohort. BOSTON, November 18, 2025 – Faculty at the Harvard Business School AI Institute [1] today […]

The post The HBS AI Institute and Microsoft Launch Accelerated AI Research Initiative appeared first on Harvard Business School AI Institute.

]]>
Harvard Business School faculty, in collaboration with Microsoft and its clients, will study human-AI work, publish evidence-based blueprints, and deliver custom workshops for executives to rapidly reinvent global businesses as Frontier Firms; Eli Lilly and Company, EY, Lumen Technologies, and Nestlé among 14 organizations in the inaugural cohort.

BOSTON, November 18, 2025 – Faculty at the Harvard Business School AI Institute [1] today announced the launch of the Frontier Firm AI Initiative, a collaboration with Microsoft and its clients that aims to deepen understanding and accelerate the practice of building Frontier Firms. As defined by the HBS AI Institute, Frontier Firms are human-led, agent-operated organizations that put AI at the core of their strategy to transform operations, accelerate innovation, and amplify human capacity. The research into the journey of Frontier Firms will be a catalyst for redefining long-held paradigms of work. Hosted by the HBS AI Institute, this Initiative will develop applied research on human-AI collaboration, upskill global C-suite leadership, and deliver new insights and tools to disrupt conventional business thinking.

Karim Lakhani, Chair of the HBS AI Institute and Dorothy and Michael Hintze Professor of Business Administration at Harvard Business School (HBS), will be joined by fellow HBS faculty members Iavor Bojinov, Raffaella Sadun, Rem Koning, Shunyuan Zhang, and Kadeem Noray to drive forward the portfolio of experiments. Their efforts will focus on five key areas: future-state operating models for effective human-AI collaboration in core business functions, "agent boss" (an initial management theory for AI agents), agentic workflows, building a Frontier Firm radar based on AI-native startups, and the effect of new technologies on firm demands for skills and labor.

"Executives that go all in on AI without a clear path forward risk falling into a frustrating cycle of pilots that don't deliver value and have no impact. With this Initiative, we are collaborating with trailblazing organizations who are pushing the limits of agentic AI to deliver value to their customers, reimagine work patterns, reinvent operations, and generate new business models. Together in collaboration with Microsoft and its customers, we aim to create rigorous, evidence-based blueprints for high-performing human-AI workplaces, bridging the gap between ambition and true competitive advantage."

Karim Lakhani, Chair of the HBS AI Institute and Dorothy and Michael Hintze Professor of Business Administration at Harvard Business School (HBS)

Jared Spataro, Chief Marketing Officer, AI at Work at Microsoft, said, "It's no longer a question of 'if' AI is right for business; leaders today are grappling with 'how' to become a Frontier Firm. This Frontier Firm AI Initiative is addressing a critical gap in the marketplace, giving leaders the education and practical tools they need to help their people and organizations navigate this transformation."

The inaugural cohort of organizations embarking on the path to become Frontier Firms includes Barclays, BNY, Clifford Chance, DuPont, Eaton, Eli Lilly and Company, EY, GHD, Kantar, Levi Strauss & Co., Lumen Technologies, Mastercard, Nestlé, and others. Organizations will participate in large-scale field-based experiments in AI that explore AI-first work patterns, as well as custom workshops that translate the results of the research into practical guidelines for organizations innovating their operating models with AI.

"AI has given business new ways to create value and a thousand new ways to get lost doing it. Academia's role is to chart the tide so leaders can navigate with more reliable information about the surrounding environment. We're grateful for our organizational relationships, which make it possible to curate this knowledge at a moment when best practices are urgently needed yet still unwritten."

Jen Stave, the founding Director of the HBS AI Institute.

About the HBS AI Institute

The HBS AI Institute provides research-driven insights, accessible to anyone in the world, on using AI and digital technologies to advance business and society. Emerging from Harvard Business School under the leadership of Dean Srikant Datar and founded on the premise that AI technology is only half of the answer and that businesses must also revamp their processes to harness AI's potential, the HBS AI Institute is made up of a global network of multidisciplinary faculty, researchers and scientists, business leaders, and entrepreneurs.

Notes

[1] The Harvard Business School AI Institute was known as the Digital Data Design Institute at Harvard (D^3) until April 2026.

The post The HBS AI Institute and Microsoft Launch Accelerated AI Research Initiative appeared first on Harvard Business School AI Institute.

]]>
State of the Market: An Industry Analysis of Tech-Enabled DEI Products /state-of-the-market-an-industry-analysis-of-tech-enabled-dei-products/ Fri, 17 Oct 2025 20:24:41 +0000 /?p=28900 A recent white paper produced by the blackbox Lab at D^3 presents a state-of-the-market analysis of 182 companies offering tech-enabled DEI products, highlighting key patterns in company formation, leadership composition, market positioning, and companies' rhetorical strategy. It offers an in-depth but broad view of how the tech industry is approaching search efforts for much-needed talent […]

The post State of the Market: An Industry Analysis of Tech-Enabled DEI Products appeared first on Harvard Business School AI Institute.

]]>
A recent white paper produced by the blackbox Lab at D^3 presents a state-of-the-market analysis of 182 companies offering tech-enabled DEI products, highlighting key patterns in company formation, leadership composition, market positioning, and companies' rhetorical strategy. It offers an in-depth but broad view of how the tech industry is approaching search efforts for much-needed talent and contains implications for the future of this sector.
 
This report also reveals a growing demand for inclusion-focused, tech-enabled solutions and the tensions shaping their development. The current pushback against DEI, as well as emerging trends emphasizing AI integration and a potential shift towards skills-based hiring, signals that the field is at an inflection point. As companies balance cultural backlash with market demands, the future necessitates adaptability to an ever-changing social, cultural, and technological landscape.
 
TL;DR: This report analyzes 182 companies offering tech-enabled diversity, equity, and inclusion (DEI) products to help leaders understand how the market is evolving and where opportunities and risks may lie. The study reveals that companies with a DEI focus tend to have more diverse leadership teams and are more likely to prioritize identity-based solutions. Most products are concentrated in hiring and recruitment, with fewer focused on retention, promotion, or startup support, indicating missed opportunities across the employee lifecycle. The majority of companies use a transactional approach that ties DEI efforts to business value and efficiency, though developmental (culture- and values-driven) and skills-based (competency-focused) approaches are also present. Despite growing interest, most firms remain small, under $50M in valuation, and face headwinds from increasing political backlash. At the same time, trends like AI integration and skills-based hiring are opening new paths forward. For leaders seeking to invest in, partner with, or design DEI technologies, this report offers a clear view of where the field stands today and where it's headed.

Read a write-up of the report here:

The post State of the Market: An Industry Analysis of Tech-Enabled DEI Products appeared first on Harvard Business School AI Institute.

]]>
HBS AI Institute Associates Spotlight Series: Boris Groysberg /d3-associates-spotlight-series-boris-groysberg/ Mon, 22 Sep 2025 17:24:24 +0000 /?p=28717 This series introduces Harvard Business School AI Institute Associates Program projects which aim to answer important questions at the intersection of artificial intelligence and digital technologies in business and society. This article shares insights from Boris Groysberg, Richard P. Chapman Professor of Business Administration who is pursuing research on the topics of artificial intelligence and […]

The post HBS AI Institute Associates Spotlight Series: Boris Groysberg appeared first on Harvard Business School AI Institute.

]]>
This series introduces Harvard Business School AI Institute Associates Program projects which aim to answer important questions at the intersection of artificial intelligence and digital technologies in business and society.

This article shares insights from Boris Groysberg, Richard P. Chapman Professor of Business Administration, who is pursuing research on the topics of artificial intelligence and organizations.

1. What drew you to this area of research and how did you first become involved in this work?

It came through my field work.   

2. What are some common misconceptions or barriers around the problem you're working to solve?

There is a misconception that large AI initiatives can be implemented without thinking about the organizational and talent-management impact, and without considering how the organizational structure might be adjusted to optimize these initiatives. While everyone is talking about AI, virtually no one is talking about the organizational side of implementing AI. Because of this, there is not a lot of information available, and this dearth of information is a barrier.

3.  What research is being done on this topic and how is your approach or perspective unique?

While much research has been done on the potential uses of artificial intelligence, the amount of investment in this area, and the types of skills required to effectively implement AI strategies, there has been little research on the organizational implications of AI. Our approach considers the following questions (among others): Where does AI leadership sit in the org chart? Are AI resources centralized or dispersed? How do these functions report upward? How does this vary based on the uses of AI, the scope of a company's AI initiatives, and the company's size and industry? Does the current org chart make sense in an AI environment?

4.  What excites you most about this work and its potential impact?

The potential to provide practical advice to executives who are looking to implement AI strategies.

5.  How do you hope working with D^3 will amplify the impact of your work?

The opportunity to connect with others working in the fast-moving AI field will likely provide invaluable information on how organizations are approaching AI, what their primary organizational challenges are, which approaches are working, and which are not. [1]

6.  What changes do you hope to see in your field as a result of the work being done in this area?

We hope to see a more thoughtful approach to organizational structure and talent management, and to how they should be adapted to an AI environment.

7. What's an essential area in which AI and digital technologies will reshape the way businesses or society operate in the long run that we may not be considering?

AI and digital technologies may well upend the traditional corporate organizational structure that has been in place across large organizations for the past 50+ years.

The 性视界 Business School AI Institute Associates Program supports and accelerates faculty research into the ways AI and digital technologies are reshaping companies, organizations, society, and practice.

Notes

[1] The Digital Data Design Institute at 性视界 (D^3) was renamed the 性视界 Business School AI Institute in April 2026.

The post HBS AI Institute Associates Spotlight Series: Boris Groysberg appeared first on 性视界 Business School AI Institute.

Why AI Helps Until It Doesn’t: Inside the GenAI Wall Effect /why-ai-helps-until-it-doesnt-inside-the-genai-wall-effect/ Thu, 18 Sep 2025 12:33:58 +0000

The post Why AI Helps Until It Doesn’t: Inside the GenAI Wall Effect appeared first on 性视界 Business School AI Institute.

The promise of Generative AI (GenAI) often sounds like this: give any employee access to AI tools, and they’ll suddenly be able to perform tasks outside their domain of expertise with remarkable proficiency and speed. As discussed in the new working paper “The GenAI Wall Effect: Examining the Limits to Horizontal Expertise Transfer Between Occupational Insiders and Outsiders,” the reality of AI’s ability to balance the scales across occupational skill sets is far more nuanced. Written by a team of six authors, including two Principal Investigators and a Research Associate in the Data Science and AI Operations Lab at the 性视界 Business School AI Institute, the article reveals surprising answers about the transformative power of AI in the workplace through a comprehensive study of 78 employees at a UK-based global trading company.

Key Insight: The GenAI Wall

“[W]e predict a ‘GenAI wall effect’ […] the emergence of a point at which GenAI can no longer meaningfully reduce the expertise gaps between insiders and outsiders because of the wider knowledge distance between their jobs.” [1]

While most research has focused on how AI helps lower-performing individuals catch up to their higher-performing colleagues within the same job, this study instead asked whether GenAI could help people from different occupations take on tasks that aren’t typically part of their role. To do so, the authors defined three types of participants: insiders (those who already perform certain tasks as part of their jobs), adjacent outsiders (whose roles are related but don’t directly involve those tasks), and distant outsiders (whose roles have little overlap in tasks). The study then introduces the ideas of “knowledge distance” and “expertise gaps,” measures of how far apart two roles are in the skills they use, and the authors argue that GenAI can close the distance for adjacent outsiders but hits a “wall” with distant outsiders, where its benefits stop.

Key Insight: An AI Field Experiment

“[W]hen assisted by GenAI, marketing specialists and technology specialists produced article conceptualizations on par with web analysts.” [2]

To find out where GenAI helps and where it hits limits, the researchers ran a large experiment with employees at the UK-based firm IG, using web analysts who regularly write marketing articles (insiders), marketing specialists from the same department who don’t write articles (adjacent outsiders), and software developers and data scientists (distant outsiders). Each participant completed two parts of the web analyst role: (1) conceptualization, building a structured article brief with keywords, headings, and FAQs, and (2) execution, writing the full article. Some participants had access to custom GenAI tools, and others did not. The results of the conceptualization task showed that GenAI can be a powerful equalizer: it improved not only quality but also speed, and the gains were especially large for lower-performing employees.
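The logic of the design can be sketched numerically. The scores below are entirely hypothetical (the paper reports its own measures and analysis); the sketch only illustrates how a "wall" would show up in data like this: as a baseline insider–outsider gap that GenAI mostly closes for adjacent outsiders but largely fails to close for distant ones.

```python
# Hypothetical illustration of the "GenAI wall effect". All scores are
# invented for illustration; none come from the study itself.
mean_quality = {
    # (role, condition): mean task-quality score (hypothetical)
    ("insider", "no_ai"): 80, ("insider", "ai"): 85,
    ("adjacent", "no_ai"): 65, ("adjacent", "ai"): 84,
    ("distant", "no_ai"): 50, ("distant", "ai"): 62,
}

def gap_closed(role):
    """Fraction of the baseline insider-outsider quality gap that GenAI closes."""
    baseline_gap = mean_quality[("insider", "no_ai")] - mean_quality[(role, "no_ai")]
    remaining_gap = mean_quality[("insider", "ai")] - mean_quality[(role, "ai")]
    return 1 - remaining_gap / baseline_gap

for role in ("adjacent", "distant"):
    # With these invented numbers: adjacent ~93% closed, distant ~23% closed
    print(f"{role}: {gap_closed(role):.0%} of the gap closed")
```

With these made-up numbers, GenAI nearly erases the gap for the adjacent role but leaves most of it intact for the distant one, which is the shape of the wall effect the authors describe.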

Key Insight: When the Wall Appears

“In short, GenAI levels the playing field in article execution only for marketing specialists.” [3]

The picture changed when participants moved to the execution task. With GenAI support, the web analysts (insiders) and marketing specialists (adjacent outsiders) both produced strong articles, but the technologists (distant outsiders) lagged behind. In other words, AI narrowed the gap for marketers, but a wall appeared for developers and data scientists. Why did this happen? The study’s interviews offer a clue: web analysts and marketers approached the task with a shared foundation of sensitivity to audience needs, conversion strategies, and the rhythms of effective marketing copy. That background let them use GenAI’s suggestions wisely, keeping what worked, editing what didn’t, and shaping the writing into something publishable.

Why This Matters

For business leaders deciding how to deploy AI, this study offers a new operational map based on adjacency. Employees can likely expand into related domains but may struggle with distant ones. AI-assisted cross-training might work best for conceptual and strategic work, while specialized roles with complex execution tasks will still likely call for dedicated experts. Most importantly, leaders should capitalize on the areas where AI most amplifies human knowledge, redesigning roles and career paths around the skills and strengths that remain uniquely human and critical to the organization.

Bonus

This study was also recently discussed in Charter, the business reporting section of Time. Read their analysis.

References

[1] Luca Vendraminelli et al., “The GenAI Wall Effect: Examining the Limits to Horizontal Expertise Transfer Between Occupational Insiders and Outsiders,” 性视界 Business School Technology & Operations Mgt. Unit Working Paper No. 26-011 (September 8, 2025): 3.

[2] Vendraminelli et al., “The GenAI Wall Effect,” 26.

[3] Vendraminelli et al., “The GenAI Wall Effect,” 30.

Meet the Authors

is a Postdoctoral Researcher at the Digital Economy Lab and the Stanford Institute for Human-Centered Artificial Intelligence (HAI) at Stanford University.

is a PhD student in the Technology and Operations Management Unit at 性视界 Business School.

is an Assistant Professor in the Technology and Operations Management Unit at 性视界 Business School and Principal Investigator at the HBS AI Institute Data Science and AI Operations Lab hosted within the Laboratory for Innovation Science.

is an Assistant Professor at Stanford University in the Department of Management Science and Engineering.

is an Associate Professor of Business Administration at 性视界 Business School and Principal Investigator at the HBS AI Institute Data Science and AI Operations Lab hosted within the Laboratory for Innovation Science.

