This series introduces D^3 Associates Program projects that aim to answer important questions at the intersection of artificial intelligence and digital technologies in business and society.
This article shares insights from Alex Chan, Assistant Professor of Business Administration in the Negotiation, Organizations & Markets Unit at Harvard Business School, who is pursuing research on the topics of artificial intelligence and organizations.
1. What drew you to this area of research and how did you first become involved in this work?
My background spans both technology and healthcare, which naturally pulled me toward the "engineering" side of economics: market design. I became fascinated by how small changes in market rules, incentives, or even information presentation can meaningfully shape human behavior, sometimes with life-or-death consequences in settings like healthcare and organ allocation.
My interest in AI grew from two directions. On the research side, I worked early on questions around deep learning's ability to extract patient preferences and clinically relevant signals from unstructured data like clinical notes. On the applied side, my time in industry deploying AI-enabled healthcare products made the promise, and the risk, very concrete: technology can match expert performance and save enormous amounts of time, but it also changes how people make decisions and how accountability is assigned. That combination convinced me that one of the next major market design challenges is not just building better AI systems, but integrating AI into human decision-making environments in ways that are robust, incentive-compatible, and ultimately welfare-improving, especially as we think ahead to more advanced systems.
2. What are some common misconceptions or barriers around the problem you're working to solve?
A major misconception is that "more information" automatically leads to better decisions. In the context of Explainable AI (XAI), for instance, many people assume that if you provide an explanation, decision-makers will naturally use it to make fairer, better choices. But in practice, transparency can create strategic discomfort: explanations can reveal biases, conflicts of interest, or decision rules that stakeholders would rather not surface, especially when there are financial incentives, reputational concerns, or legal exposure at stake.
One barrier, then, is that people may strategically prefer "black-box" systems, not because they love opacity, but because opacity can protect them from scrutiny or responsibility. Another barrier is that we often forecast AI's societal impact by linearly extrapolating from recent waves of automation. That framing can miss how AI will reshape how preferences are expressed, how trust is formed, and how institutions evolve when cognition, forecasting, and persuasion become more scalable and more delegated to machines.
3. What research is being done on this topic and how is your approach or perspective unique?
A lot of the current research rightly focuses on the technical "how-to" of AI: building more accurate models, improving interpretability methods, and optimizing performance. My perspective is complementary: I treat AI as a participant in a market or organization rather than simply a tool. That means I focus on how AI systems interact with incentives, power, accountability, and human behavior, often in ways that aren't visible if we only measure technical accuracy.
For example, in my working paper "Preference for Explanations: The Case of XAI," I don't just ask whether an AI can explain itself; I ask whether people actually want explanations when real incentives and tradeoffs are present. Using incentivized experiments with real financial stakes helps reveal when transparency is demanded, when it's avoided, and why.
More broadly, by combining market design and behavioral economics, I can study how AI decision-support, monitoring, or recommendation systems interact with factors like gender, race, hierarchy, and institutional constraints, dimensions that pure computer science approaches often treat as "downstream" but that frequently determine real-world outcomes. Market design also pushes us to analyze markets that don't fully exist yet, which is increasingly important as AI changes what it even means to "participate" in a market.
4. What excites you most about this work and its potential impact?
What excites me most is the possibility of moving beyond the idea that AI progress is mainly about better prediction, and toward the idea that progress is about better systems. If we design incentives and institutions well, AI can reduce cognitive overload, improve access to expertise, and make high-stakes decisions more consistent and less arbitrary. In healthcare, that can translate into better triage, more equitable access, reduced clinician burnout, and ultimately better patient outcomes.
At the same time, I'm excited by the intellectual challenge: AI changes the "rules of the game" in markets and organizations. We now have decision-makers who can delegate judgment to models, organizations that can scale monitoring and evaluation, and environments where explanations can be demanded, ignored, weaponized, or strategically suppressed. Understanding those dynamics, and designing mechanisms that make good outcomes more likely, feels both urgent and deeply consequential.
5. How do you hope working with D^3 will amplify the impact of your work?
D^3 is an ideal home for this kind of research because it brings together technologists, economists, organizational scholars, and practitioners who are grappling with the same reality from different angles. I see D^3 as a "translation layer" between theory and deployment: a place where questions about incentives, governance, and real-world adoption can be stress-tested against how organizations actually operate.
I also hope D^3 will amplify impact through its convening power and practitioner ecosystem, helping connect research insights to real institutional design decisions, from product development and auditing to policy, procurement, and organizational governance. When the goal is not just to understand AI, but to shape how it's used responsibly and effectively, that cross-disciplinary and real-world engagement is invaluable.
6. What changes do you hope to see in your field as a result of the work being done in this area?
I hope to see market design become a central lens for thinking about AI, including advanced systems that may begin to act more like autonomous agents in the economy. Rather than relying primarily on after-the-fact regulation or patchwork compliance, I want to see organizations design digital ecosystems from the ground up with incentives that support transparency, productivity, and fairness simultaneously.
In practical terms, that means shifting from "Can we build this model?" to "What behavior does this system produce once it's embedded in an institution with real incentives?" It also means building stronger evidence around what kinds of transparency and accountability mechanisms actually work, not just in principle, but in practice.
7. What's an essential area in which AI and digital technologies will reshape the way businesses or society operate in the long run that we may not be considering?
One underappreciated shift is that AI won't just replace tasks; it will reshape the institutional infrastructure through which preferences, negotiations, and decisions happen. As personal AI agents become more common, agents that summarize options, negotiate on our behalf, filter information, and even execute transactions, markets may increasingly become "agent-to-agent." That changes what it means to have a preference, how trust is built, and how persuasion and manipulation operate at scale.
This raises foundational design questions:
- How do we represent and protect human preferences when they're expressed through intermediating AI systems?
- What new markets and norms emerge when AI can cheaply generate convincing arguments, tailored messaging, or strategic explanations?
- What does accountability look like when decisions are the output of human-AI teams, or of automated negotiations between agents?
In the long run, the big opportunity (and challenge) is designing the mechanisms, such as identity, provenance, incentives, auditing, and governance, that make delegation to AI socially beneficial rather than destabilizing. That's where market design and institutional thinking become essential.
The D^3 Associates Program supports and accelerates faculty research into the ways AI and digital technologies are reshaping companies, organizations, society, and practice.