Human-Centered Technology Archives | Harvard Business School AI Institute
/communities-of-practice/human-centered-technology/
The Harvard Business School AI Institute catalyzes new knowledge to invent a better future by solving ambitious challenges.

Teaching Trust: How Small AI Models Can Make Larger Systems More Reliable
/teaching-trust-how-small-ai-models-can-make-larger-systems-more-reliable/
Thu, 03 Jul 2025 16:56:06 +0000

As Gen AI technology continues to evolve rapidly and LLMs are integrated into more and more applications, questions of trustworthiness and ethical alignment become increasingly crucial. In the recent study "Generalizing Trust: Weak-to-Strong Trustworthiness in Language Models," authors Martin Pawelczyk, postdoctoral researcher at Harvard working on trustworthy AI; Lillian Sun, undergraduate student at Harvard studying computer science; a PhD student in computer science at Harvard; a postdoctoral research associate at Harvard working on trustworthy AI; and Himabindu Lakkaraju, Assistant Professor of Business Administration at Harvard Business School and PI in D^3's Trustworthy AI Lab, explore a novel concept: the ability to transfer and enhance trustworthiness properties from smaller, weaker AI models to larger, more powerful ones.

Key Insight: The Three Pillars of AI Trustworthiness

“Trustworthiness encompasses properties such as fairness (avoiding biases against certain groups), privacy (protecting sensitive information), and robustness (maintaining performance under adversarial conditions or distribution shifts).” [1]

The holistic conceptualization taken by the authors recognizes that, for LLMs to be truly trustworthy, they must excel across multiple domains simultaneously. The researchers tested and demonstrated these principles using real-world datasets, including the Adult dataset, based on 1994 U.S. Census data, where they evaluated fairness by examining whether AI predictions of income varied based on gender attributes. Their privacy assessments used the Enron email dataset, containing over 600,000 emails with sensitive personal information including credit card numbers and Social Security numbers. For robustness, they used the OOD Style Transfer dataset, which incorporates text transformations, and the AdvGLUE++ dataset, which includes adversarial examples for widely used natural language processing (NLP) tasks.
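The fairness check described above amounts to a demographic-parity comparison: do positive income predictions occur at different rates across gender groups? The snippet below is a minimal sketch of that idea; the predictions and gender labels are made up for illustration and are not the paper's evaluation code.

```python
# Demographic-parity check: does the rate of positive income predictions
# (1 = predicted income above the threshold) differ between two groups?

def demographic_parity_gap(preds, groups):
    """Absolute difference in positive-prediction rates between two groups."""
    rate = {}
    for g in set(groups):
        members = [p for p, grp in zip(preds, groups) if grp == g]
        rate[g] = sum(members) / len(members)
    a, b = sorted(rate)  # assumes exactly two groups
    return abs(rate[a] - rate[b])

# Hypothetical model outputs and gender labels (illustrative only).
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["F", "M", "M", "F", "F", "M", "F", "M", "M", "F"]

gap = demographic_parity_gap(preds, groups)
print(f"demographic parity gap: {gap:.2f}")
```

A gap near zero indicates predictions are distributed similarly across groups; larger gaps flag the kind of disparity the authors measure.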

Key Insight: Utilizing Novel Fine-Tuning Strategies

“This is the first work to investigate if trustworthiness properties can transfer from a weak to a strong model using weak-to-strong supervision, a process we term weak-to-strong trustworthiness generalization.” [2]

The Harvard team developed two distinct strategies for embedding trustworthiness into AI systems. Their first approach, termed "Weak Trustworthiness Fine-tuning" (Weak TFT), focuses on training smaller models with explicit trustworthiness constraints, then using these models to teach larger systems. The second strategy, "Weak and Weak-to-Strong Trustworthiness Fine-tuning" (Weak+WTS TFT), applies trustworthiness constraints to both the small teacher model and the large student model during training.
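Schematically, the two recipes differ only in where the trustworthiness constraint is applied. The sketch below is an illustrative stand-in (toy labeling functions, no real training), not the authors' implementation; `fine_tune` and the threshold "models" are invented for this example.

```python
# Schematic of the two fine-tuning recipes. A "model" here is just a
# labeling function; real training is replaced by a memorizing stub.

def fine_tune(model, inputs, labels, trust_constraint=False):
    """Stand-in for fine-tuning: the tuned model memorizes the supervision
    labels and falls back to the base model elsewhere. A real implementation
    would add a fairness/robustness penalty to the loss when
    trust_constraint=True; here we only record that it was requested."""
    memory = dict(zip(inputs, labels))
    def tuned(x):
        return memory.get(x, model(x))
    tuned.trust_constrained = trust_constraint
    return tuned

def weak_model(x):  # small, weak teacher
    return int(x > 0.5)

inputs = [0.2, 0.7, 0.9, 0.4]
ground_truth = [0, 1, 1, 0]

# Recipe 1 (Weak TFT): only the weak teacher is trained with the
# trustworthiness constraint; the strong student learns from its labels.
teacher = fine_tune(weak_model, inputs, ground_truth, trust_constraint=True)
weak_labels = [teacher(x) for x in inputs]
student_weak_tft = fine_tune(lambda x: int(x > 0.5), inputs, weak_labels,
                             trust_constraint=False)

# Recipe 2 (Weak+WTS TFT): the constraint is applied to teacher AND student.
student_weak_wts = fine_tune(lambda x: int(x > 0.5), inputs, weak_labels,
                             trust_constraint=True)

print(student_weak_tft.trust_constrained, student_weak_wts.trust_constrained)
```

The structural point is that in both recipes the strong student never sees ground truth, only the weak teacher's labels; the recipes differ in whether the student's own training also carries the trustworthiness constraint.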

Their experiments demonstrate that the Weak+WTS TFT approach produces significantly superior results, with improvements in fairness of up to 3 percentage points (equivalent to a 60% decrease in unfairness), as well as in robustness, or how resilient the AI was to attacks and unexpected situations. Remarkably, these ethical improvements required only minimal sacrifices in task performance: decreases in accuracy did not exceed 1.5% across tested properties.
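To see how a 3-percentage-point absolute gain can amount to a 60% relative decrease, note that the relative figure depends on the size of the baseline unfairness gap. The 5-point baseline below is the value implied by those two numbers, used here purely for illustration:

```python
# Relative vs. absolute fairness improvement.
baseline_gap = 5.0                 # unfairness gap before, in percentage points (implied)
improved_gap = baseline_gap - 3.0  # after a 3-point absolute improvement
relative_drop = (baseline_gap - improved_gap) / baseline_gap
print(f"{relative_drop:.0%} decrease in unfairness")
```

The same 3-point absolute gain would be a much smaller relative improvement if the baseline gap were larger, which is why the paper reports both figures.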

Key Insight: Challenges in Privacy Transfer

“Privacy presents a unique situation. Note that the strong ceiling (1) does not achieve better privacy than the weak model.” [3]

A key finding of the study is that not all trustworthiness properties transfer equally from weak to strong models. While the transfer of fairness and robustness properties showed promising results, privacy proved to be a more challenging attribute to transfer. The researchers found that larger models have a greater capacity to retain and recall details from their training data, which creates heightened vulnerabilities for exposing sensitive or confidential information. This finding highlights the complex nature of privacy in AI systems and suggests that different strategies may be needed to address privacy concerns in larger models.

Why This Matters

For C-suite executives and business leaders, this research offers a potential pathway to developing more powerful LLM systems without compromising on certain ethical considerations. It suggests that companies could potentially start with smaller, more manageable models that are fine-tuned for trustworthiness in fairness and robustness, and then scale up to more capable systems while maintaining or even improving these critical properties. This approach could help mitigate risks associated with LLM deployment, enhance public trust in AI-driven decisions, and potentially reduce the resources required for ethical LLM development. However, the challenges identified in transferring privacy properties serve as a reminder of the complex nature of AI ethics. Business leaders should remain vigilant and consider multi-faceted approaches to ensuring the trustworthiness of their LLM systems, particularly when dealing with sensitive data.

Footnote

(1) The strong ceiling represents the benchmark performance of a large model that has been directly trained with trustworthiness constraints, serving as the upper bound for what the weak-to-strong approach should ideally achieve.

References

[1] Martin Pawelczyk et al., "Generalizing Trust: Weak-to-Strong Trustworthiness in Language Models," arXiv preprint arXiv:2501.00418v1 (December 31, 2024): 1.

[2] Pawelczyk et al., "Generalizing Trust," 2.

[3] Pawelczyk et al., "Generalizing Trust," 8.

Meet the Authors

Martin Pawelczyk is a postdoctoral researcher at Harvard working on trustworthy AI.

Lillian Sun is an undergraduate student at Harvard studying computer science.

is a PhD student in computer science at Harvard.

is a postdoctoral research associate at Harvard working on trustworthy AI.

Himabindu Lakkaraju is an Assistant Professor of Business Administration at Harvard Business School and PI in D^3's Trustworthy AI Lab. She is also a faculty affiliate in the Department of Computer Science at Harvard University, the Harvard Data Science Initiative, the Center for Research on Computation and Society, and the Laboratory of Innovation Science at Harvard. Professor Lakkaraju's research focuses on the algorithmic, practical, and ethical implications of deploying AI models in domains involving high-stakes decisions such as healthcare, business, and policy.

Unifying AI Attribution: A New Frontier in Understanding Complex Systems
/unifying-ai-attribution-a-new-frontier-in-understanding-complex-systems/
Tue, 10 Jun 2025 14:18:16 +0000

As artificial intelligence systems become increasingly complex, understanding their behavior has become a critical challenge for businesses and researchers alike. In a recent preprint paper, "Towards Unified Attribution in Explainable AI, Data-Centric AI, and Mechanistic Interpretability," authors Shichang Zhang, a postdoctoral fellow in the Trustworthy AI Lab at the Digital Data Design (D^3) Institute at Harvard; a PhD student in Bioinformatics and Integrative Genomics at Harvard Medical School; a PhD student in computer science at Harvard; and Himabindu Lakkaraju, Assistant Professor of Business Administration at HBS and lead researcher at D^3's Trustworthy AI Lab, propose a unified view of three traditionally separate model behavior attribution methods. This approach aims to bridge the fragmented landscape of AI interpretability, offering new insights into enhancing holistic model understanding.

Key Insight: The Unified Attribution Framework

"We take the position that […] feature, data, and component attribution share core techniques despite their different perspectives." [1]

In this paper, Zhang and colleagues propose a unified framework that brings together three traditionally separate attribution methods: feature attribution (FA), which identifies the input features most important to an AI model's output; data attribution (DA), which traces how specific training-data points influence an AI model's behavior; and component attribution (CA), which examines how internal parts of an AI model contribute to its output. This approach recognizes that while these methods have evolved independently, they share fundamental techniques such as perturbations, gradients, and linear approximations. By unifying these methods, the researchers aim to provide a more comprehensive understanding of AI systems' behavior.

Key Insight: Supporting Further Research

“Attribution methods also hold immense potential to benefit broader AI research for other applications.” [2]

The unified framework offers multiple advantages for advancing AI interpretability research. By promoting conceptual coherence through less fragmented terminology, it facilitates more effective communication and collaboration. The framework enables cross-attribution innovation, allowing researchers to adapt solutions developed for one attribution type to others, such as applying efficient sampling techniques from perturbation-based FA (which alters parts of the input to measure the effect on the model's output) to improve DA methods. It also simplifies theoretical analysis by identifying common mathematical underpinnings, streamlining research efforts and paving the way for more robust and generalizable techniques.
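As a concrete example of the shared machinery, perturbation-based feature attribution can be sketched in a few lines: occlude one part of the input at a time and record how much the model's score moves. The toy "model" below is a positive-word counter standing in for a real LLM; everything here is illustrative, not code from the paper.

```python
# Minimal perturbation-based feature attribution: remove one token at a
# time and measure the resulting drop in the model's score.

def model_score(tokens):
    # Toy sentiment "model": counts positive words (stand-in for a real LLM).
    positive = {"great", "good", "excellent"}
    return sum(t in positive for t in tokens)

def perturbation_attribution(tokens):
    """Importance of each token = score drop when that token is removed."""
    base = model_score(tokens)
    return {
        t: base - model_score(tokens[:i] + tokens[i + 1:])
        for i, t in enumerate(tokens)
    }

scores = perturbation_attribution(["the", "movie", "was", "great"])
print(scores)
```

The same perturb-and-measure loop, pointed at training samples instead of input tokens, is the backbone of many DA methods, which is exactly the kind of cross-attribution reuse the framework highlights.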

Key Insight: Implications for AI Regulation and Ethics

“FA reveals input processing patterns, DA exposes training data influences, and CA illuminates architectural roles. This multi-faceted understanding enables more targeted and effective regulation.” [3]

By providing a comprehensive view of AI system behavior, the unified attribution framework enables more informed and targeted regulatory approaches. The authors illustrate this with a real-world example: when tackling issues of bias in AI, the framework enables regulators to pinpoint potentially discriminating features in the input data, identify and track problematic or copyrighted training materials, and highlight specific components within the AI's architecture that may contribute to biased outcomes.

The authors note that regulation and policy frequently stress the need for transparency in AI systems and users’ right to an explanation. The unified attribution framework provides a powerful tool for practitioners to meet these legal and ethical requirements by offering detailed insights into both overall AI system behavior and specific input-output relationships.

Why This Matters

For business leaders, this unification method means gaining more comprehensive and reliable insights into how your AI systems function. Instead of fragmented views, leaders get a holistic understanding of what drives AI decisions. This is essential for building trust, ensuring regulatory compliance, and effectively identifying and addressing issues like bias or errors, whether they stem from data, inputs, or the model’s structure. Ultimately, the unified attribution framework proposed in this research supports more informed model management and governance, directly impacting an organization’s bottom line through cost savings and enhanced value.

References

[1] Shichang Zhang et al., "Towards Unified Attribution in Explainable AI, Data-Centric AI, and Mechanistic Interpretability," arXiv preprint arXiv:2501.18887v3 (May 29, 2025): 1.

[2] Zhang et al., "Towards Unified Attribution," 8.

[3] Zhang et al., "Towards Unified Attribution," 8.

Meet the Authors

Shichang Zhang is a postdoctoral fellow at the D^3 Institute at Harvard University working with Professor Hima Lakkaraju. He received his PhD in Computer Science from the University of California, Los Angeles (UCLA).

is a PhD student in the Bioinformatics and Integrative Genomics Program at Harvard Medical School.

is a PhD student in the Harvard Computer Science program, working on machine learning interpretability and advised by Hima Lakkaraju. She is a strong advocate for increasing diversity in CS through direct mentorship of early-career minority students.

Himabindu Lakkaraju is an Assistant Professor of Business Administration at Harvard Business School and PI in D^3's Trustworthy AI Lab. She is also a faculty affiliate in the Department of Computer Science at Harvard University, the Harvard Data Science Initiative, the Center for Research on Computation and Society, and the Laboratory of Innovation Science at Harvard. Professor Lakkaraju's research focuses on the algorithmic, practical, and ethical implications of deploying AI models in domains involving high-stakes decisions such as healthcare, business, and policy.

Decoding Digital Dynamics: Insights from the Digital Competition and Tech Regulation Conference
/decoding-digital-dynamics/
Tue, 06 May 2025 15:17:59 +0000

As digital platforms continue to reshape markets, the intersection of competition, innovation, and regulation demands nuanced understanding. On April 17–18, 2025, the Digital Data Design Institute's Platform Lab and Harvard Business School hosted the third annual Digital Competition and Tech Regulation Conference, convening over 70 leading academics, industry practitioners, and policymakers from around the world. With participants from universities including MIT, Yale, London Business School, and the University of Toronto, alongside leaders from Google, Amazon, Uber, Airbnb, and Analysis Group, the event offered a rare, high-level forum for cross-sector dialogue. Marking its most successful iteration yet, the event reflected the community's growing commitment to advancing this vital conversation.

Over two dynamic days, attendees explored the evolving digital landscape through a series of paper sessions, industry panels, and interactive discussions. Topics spanned from strategic issues around data and network effects to the complex roles that platforms play in markets like e-commerce, media, and advertising. Privacy concerns, misinformation, vertical integration, and the unintended consequences of digitally mediated markets were also front and center.

A key theme emerging from the sessions was the critical interaction between research, policy, and industry. As one participant noted, “One needs to hear from policy and industry not only to be relevant but also to be innovative.” This sentiment was echoed in vibrant discussions following presentations on subjects such as online preferences for privacy, algorithmic pricing, and consumer engagement with politics online.

“One needs to hear from policy and industry not only to be relevant but also to be innovative.”

The conference’s structure鈥攂lending senior scholars, junior researchers, and seasoned industry experts鈥攚as particularly effective in catalyzing thoughtful debate and fostering mentorship. Many junior scholars had the rare opportunity to receive feedback from prominent senior faculty, enhancing the research community’s collective knowledge and strengthening future scholarship.

Panels on antitrust policy and AI regulation offered particularly timely insights. Discussions emphasized the need for regulatory frameworks that are both adaptive and thoughtful, balancing the imperatives of innovation with the necessity of safeguarding competitive markets. The presence of both private sector leaders and policymakers enriched these conversations, grounding theoretical models in real-world complexities.

By convening a truly interdisciplinary group, the Digital Competition and Tech Regulation Conference reinforced that the future of digital markets will be shaped at the nexus of academic rigor, industry practice, and public policy. As the digital economy continues to evolve, events like this serve as a critical force for surfacing new ideas, forging partnerships, and guiding the next generation of research and regulation.

The Gender Divide in Generative AI: A Global Challenge
/the-gender-divide-in-generative-ai-a-global-challenge/
Thu, 17 Apr 2025 15:31:48 +0000

As generative AI transforms the business landscape, a concerning trend demands immediate attention from executives and policymakers alike. In the recent Harvard Business School (HBS) working paper, "Global Evidence on Gender Gaps and Generative AI," authors Nicholas G. Otis, PhD candidate at the Berkeley Haas School of Business; Solène Delecourt, Assistant Professor at the Berkeley Haas School of Business and Affiliated Researcher at the Laboratory for Innovation Science (LISH) at Harvard; Katelyn Cranney, PhD student at Stanford University; and Rembrand Koning, Associate Professor of Business Administration at HBS and Principal Investigator in the Tech for All Lab at the Digital Data Design (D^3) Institute at Harvard, describe a significant gender gap in the adoption and use of generative AI tools worldwide. This disparity threatens to exacerbate existing inequalities and risks limiting the potential benefits of this revolutionary technology across various sectors and industries.

Key Insight: A Universal Gender Gap in AI Adoption

“To estimate the extent of the gender gap in generative AI use, we first identified every publicly available study that has surveyed people about generative AI use along with their gender […] [Surveys show] a remarkably consistent pattern in generative AI use: men are more likely to adopt generative AI tools than women in all but one survey.” [1]

Otis and his colleagues uncovered a pervasive gender gap in generative AI adoption. Their comprehensive analysis, drawing on 18 diverse studies covering more than 140,000 individuals worldwide, showed that women are approximately 20% less likely than men to directly engage with generative AI technology. This gap was not confined to specific industries, geographic locations, or occupations, but appeared to be a universal phenomenon.

Key Insight: Persistence of the Gap Despite Equal Access

“[F]indings show, that even when efforts to increase participation by equalizing access are in place, women are still less likely to use generative AI than men.” [2]

The researchers demonstrated that simply providing equal access to generative AI tools is not sufficient to bridge the gender gap. Their findings suggest that deeper, more complex factors are at play, potentially rooted in cultural, social, or institutional barriers. For example, in a study conducted in Kenya where access to ChatGPT was equalized, women were still about 13.1% less likely to adopt the technology compared to men.

Key Insight: Implications for AI Development and Effectiveness

"As generative AI systems are still in their formative stages, the under-representation of women may result in early biases in the user data these tools learn from, resulting in self-reinforcing gender disparities." [3]

Otis and his team warned of a potential feedback loop where the current gender gap in AI usage could lead to biased AI systems that further discourage women’s participation. This cycle threatens to perpetuate and even amplify existing gender inequalities. The researchers discovered that women accounted for just 42% of the approximately 200 million average monthly users who visited the ChatGPT website worldwide between November 2022 and May 2024. In smartphone app usage, the gap widens further, with women estimated to make up only around 27.2% of total ChatGPT application downloads.

Key Insight: Multifaceted Roots of the Gender Gap

“[B]ecause women tend to work in different types of firms, jobs, and occupations than men, they may be less exposed to this new technology. Such differences are often further reinforced by the gendered differences in women’s personal and professional networks, further limiting diffusion and learning.” [4]

The working paper identified several potential factors contributing to the gender gap in AI adoption, including differences in workplace exposure, variations in personal and professional networks, and potential disparities in confidence and persistence when using new technologies. Research shows that women consistently say they are less familiar with and knowledgeable about generative AI tools than men. The team found that in the tech industry, junior women significantly lag behind men in generative AI use in both technical and non-technical functions, indicating that even in technology-focused environments, the gap persists.

Why This Matters

For business leaders and policymakers, understanding and addressing the gender gap in generative AI adoption is crucial. It represents a significant untapped potential in workforce productivity and innovation. As generative AI becomes increasingly integral to various business processes, ensuring equal participation across genders will be vital for maintaining competitiveness and fostering diverse perspectives in problem-solving and decision-making.

Moreover, the self-reinforcing nature of this gap poses a serious threat to gender equality in the workplace and beyond. If left unaddressed, it could lead to a widening skills gap, further entrenching gender disparities in high-growth, high-paying sectors of the economy. For executives, this translates to a pressing need to implement targeted strategies that provide equal access to AI tools and address the underlying factors that discourage women from engaging with these technologies.

References

[1] Nicholas G. Otis, Solène Delecourt, Katelyn Cranney, and Rembrand Koning, "Global Evidence on Gender Gaps and Generative AI," Harvard Business School Working Paper No. 25-023 (2024): 30, 3.

[2] Otis et al., "Global Evidence on Gender Gaps and Generative AI," 5.

[3] Otis et al., "Global Evidence on Gender Gaps and Generative AI," 5.

[4] Otis et al., "Global Evidence on Gender Gaps and Generative AI," 2.

Meet the Authors

Nicholas G. Otis is a PhD candidate at the Berkeley Haas School of Business, researching the societal and economic effects of generative AI and how it can help underserved people, places, and organizations. He earned his BA in Sociology and MA in Social Statistics from McGill University in Montreal.

Solène Delecourt is an Assistant Professor at the Berkeley Haas School of Business and Affiliated Researcher at the Laboratory for Innovation Science (LISH) at Harvard. Her studies focus on inequality in business performance and factors that create variation in company profits. She holds a master's degree in Economics and Public Policy from Sciences Po Paris and École Polytechnique. She earned her PhD at the Stanford Graduate School of Business.

Katelyn Cranney is a PhD student in economics at Stanford University. Her interests include labor, behavioral, and experimental economics, as well as technology adoption, innovation, gender, entrepreneurship, and productivity. Formerly a research assistant at Harvard Business School working with Rembrand Koning and Solène Delecourt, she earned her BS in Economics from Brigham Young University.

Rembrand Koning is an Associate Professor of Business Administration at Harvard Business School. He is the co-director, co-founder, and a Principal Investigator in the Tech for All Lab at D^3 at Harvard, studying how entrepreneurs can accelerate and shift the rate and direction of science, technology, and AI to benefit humanity. He earned his PhD in Business from the Stanford Graduate School of Business and his BS in Mathematics and BA in Statistics from the University of Chicago.

AI Alignment: The Hidden Costs of Trustworthiness
/ai-alignment-the-hidden-costs-of-trustworthiness/
Mon, 03 Mar 2025 18:05:14 +0000

As AI continues to evolve at a breakneck pace, the quest to align these systems with human values has become paramount. However, a recent study, "More RLHF, More Trust? On The Impact of Preference Alignment on Trustworthiness," by Aaron J. Li, a master's student at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS); Himabindu Lakkaraju, Assistant Professor of Business Administration at Harvard Business School and Principal Investigator in the Trustworthy AI Lab at the Digital Data Design (D^3) Institute at Harvard; and Satyapriya Krishna, PhD graduate from Harvard SEAS and the Trustworthy AI Lab, revealed that the current methods used to achieve this alignment may have unexpected consequences for AI trustworthiness. The study explored the complex relationship between AI alignment techniques and various aspects of trustworthiness, and offered crucial insights for business leaders navigating this new technology landscape.

Key Insight: The Misalignment Paradox

"We identify a significant misalignment between generic human preferences and specific trustworthiness criteria, uncovering conflicts between alignment goals and exposing limitations in conventional RLHF datasets and workflows." [1]

The team’s research uncovered a surprising paradox in AI development: the techniques designed to align AI with human preferences may inadvertently compromise its trustworthiness. In the study, Reinforcement Learning from Human Feedback (RLHF)鈥攁 common method for fine-tuning machine learning models to improve self-learning鈥攕howed mixed results across different trustworthiness metrics. While it improved performance in machine ethics (observing ethical principles) by an average of 31%, it led to concerning increases in stereotypical bias (150% increase) and privacy leakage (12% increase), and a 25% decrease in truthfulness.

Key Insight: The Ethics Exception

"Empirically, RLHF does not improve performance on key trustworthiness benchmarks such as toxicity, bias, truthfulness, and privacy, with machine ethics being the only exception." [2]

The study showed that machine ethics stood out as the only aspect of large language model (LLM) trustworthiness that consistently improved through RLHF. The researchers found that the false negative rate (FNR) for ethical decision-making decreased significantly across all tested models. This suggests that current AI alignment techniques are particularly effective at instilling ethical behavior, but struggle with other trustworthiness metrics. These metrics include truthfulness (accurate information), toxicity (harmful or inappropriate content), fairness (assessing and addressing biases), robustness (performance under different conditions), and privacy (protecting user data and preventing data leaks).
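The false negative rate used for the machine-ethics benchmark is simple to state: of the scenarios that are actually unethical, what fraction does the model fail to flag? The sketch below uses made-up labels purely to illustrate the computation:

```python
# False negative rate (FNR) in miniature: the share of genuinely
# unethical scenarios the model fails to flag.

def false_negative_rate(y_true, y_pred):
    """FNR = FN / (FN + TP), computed over actual-positive cases."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    misses = sum(1 for t, p in positives if p == 0)
    return misses / len(positives)

y_true = [1, 1, 1, 1, 0, 0]   # 1 = scenario is actually unethical (toy data)
y_pred = [1, 0, 1, 1, 0, 1]   # model's judgment

fnr = false_negative_rate(y_true, y_pred)
print(f"FNR: {fnr:.2f}")
```

A falling FNR, as the study reports for machine ethics after RLHF, means the model misses fewer genuinely unethical cases.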

Key Insight: The Data Attribution Dilemma

“To address this, we propose a novel data attribution analysis to identify fine-tuning samples detrimental to trustworthiness, which could potentially mitigate the misalignment issue.” [3]

Li, Krishna, and Lakkaraju introduced an innovative approach to understanding the root causes of trustworthiness issues in AI alignment. By analyzing the contribution of individual data samples to changes in trustworthiness, they developed a tool to identify and quantify the effects of problematic training data.
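The flavor of such a data attribution analysis can be conveyed with a leave-one-out sketch: score each fine-tuning sample by how much a trustworthiness metric worsens when that sample is included. The toy "training" procedure and toxicity metric below are stand-ins invented for this illustration, not the authors' (more efficient, gradient-based) method.

```python
# Leave-one-out data attribution sketch: which training samples are
# detrimental to a trustworthiness metric (here, a toy toxicity score)?

def train(samples):
    # Toy "model": its toxicity is the fraction of toxic training samples.
    return {"toxicity": sum(s["toxic"] for s in samples) / len(samples)}

def leave_one_out_attribution(samples):
    """Per-sample effect on toxicity: metric with the sample minus without."""
    full = train(samples)["toxicity"]
    return [
        full - train(samples[:i] + samples[i + 1:])["toxicity"]
        for i in range(len(samples))
    ]

data = [{"toxic": 0}, {"toxic": 1}, {"toxic": 0}, {"toxic": 1}]
effects = leave_one_out_attribution(data)
harmful = [i for i, e in enumerate(effects) if e > 0]
print(harmful)  # indices of samples whose inclusion raises toxicity
```

Samples with positive scores are candidates for removal or reweighting, which is the mitigation route the paper's attribution analysis points toward.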

Key Insight: The Scale of the Challenge

“Although our experiments focus on models up to 7 [billion] parameters, we expect similar trends in larger models because prior research […] suggests that larger models are not inherently more trustworthy in the aspects where we have observed negative RLHF effects.” [4] 

The research indicated that the trustworthiness issues identified are not limited to smaller AI models. Even as AI systems grow in size and complexity, they remain susceptible to these alignment-induced trustworthiness problems. In fact, the study cited findings that larger models trained with RLHF exhibited stronger political views and racial biases.

Why This Matters

For business leaders and executives, the insights from the team's research are crucial for understanding the complexities of deploying AI systems, and they highlight that simply focusing on aligning AI with human preferences is not enough to ensure trustworthy and reliable AI systems.

Companies investing in AI technologies must be aware of the potential trade-offs between different aspects of trustworthiness. While improvements in ethical decision-making are encouraging, the increased risks of bias, privacy breaches, and misinformation cannot be ignored. This research calls for a more nuanced approach to AI alignment that balances multiple dimensions of trustworthiness. Using the data attribution analysis method the team proposed to identify problematic training data, companies can potentially improve the trustworthiness of their AI systems without compromising on performance or alignment with human preferences.

References

[1] Aaron J. Li, Satyapriya Krishna, and Himabindu Lakkaraju, "More RLHF, More Trust? On The Impact of Preference Alignment on Trustworthiness," arXiv preprint arXiv:2404.18870v2 (December 21, 2024): 2.

[2] Li, Krishna, and Lakkaraju, "More RLHF, More Trust?," 11.

[3] Li, Krishna, and Lakkaraju, "More RLHF, More Trust?," 11.

[4] Li, Krishna, and Lakkaraju, "More RLHF, More Trust?," 2.

Meet the Authors

Aaron J. Li is a master's student in Computational Science & Engineering at the 性视界 University John A. Paulson School of Engineering and Applied Sciences (SEAS). He obtained his BA in Mathematics from 性视界. His interests include mathematics, theoretical CS, and physics.

Satyapriya Krishna recently completed his PhD at the John A. Paulson School of Engineering and Applied Sciences (SEAS) and worked with the D^3 Trustworthy AI Lab, where his research focused on the trustworthy aspects of generative models. He earned his MS in Computer Science from Carnegie Mellon University and his BS in Computer Science and Engineering from the LNM Institute of Information Technology in Jaipur, India.

Himabindu Lakkaraju is an Assistant Professor of Business Administration at 性视界 Business School and PI in D^3's Trustworthy AI Lab. She is also a faculty affiliate in the Department of Computer Science at 性视界 University, the 性视界 Data Science Initiative, the Center for Research on Computation and Society, and the Laboratory of Innovation Science at 性视界. She teaches the first-year course on Technology and Operations Management, and has previously offered multiple courses and guest lectures on a diverse set of topics pertaining to artificial intelligence (AI) and machine learning (ML) and their real-world implications.

The post AI Alignment: The Hidden Costs of Trustworthiness appeared first on 性视界 Business School AI Institute.

Climate Solution Firms: Investment Strategy and Risk Management /climate-solution-firms-investment-strategy-and-risk-management/ Thu, 20 Feb 2025 14:08:33 +0000 /?p=25411

The post Climate Solution Firms: Investment Strategy and Risk Management appeared first on 性视界 Business School AI Institute.

As the global economy grapples with the pressing challenges of climate change, a new paradigm is emerging in the world of finance and investment. In their working paper, "Climate Solutions, Transition Risk, and Stock Returns," researchers Shirley Lu, Assistant Professor of Business Administration at 性视界 Business School (HBS) and an affiliate of the HBS Digital Data Design (D^3) Institute Climate and Sustainability Impact Lab; Edward J. Riedl, Professor of Management and Accounting at the Questrom School of Business at Boston University; Simon Xu, Post-Doctoral Fellow in the Climate and Sustainability Impact Lab; and George Serafeim, Professor of Business Administration at HBS and Co-Leader of the Climate and Sustainability Impact Lab, explore the intricate relationship between climate solutions, transition risk, and stock returns. Their findings offer valuable insights for investors, executives, and policymakers navigating the complex landscape of climate-related financial opportunities and risks.

Key Insight: The Rise of Climate Solution Firms

“We measure firms’ climate solutions with data that utilizes large language models (LLMs) to analyze the “Business Description” section of Item 1 in U.S. public firm 10-K filings.” [1]

The researchers developed an innovative approach to identifying companies focused on climate solutions. Using advanced AI techniques, they analyzed SEC regulatory filings from 2006 to 2023 to quantify firms’ involvement in climate-related products and services. This method provides a more nuanced and accurate picture of a company’s climate strategy than traditional metrics alone. 

The team uses the phrase "high-climate solution firms" to describe companies with large portions of their products and services dedicated to climate solutions. During the study, they developed the variable "climate solution measure" (CS measure) to represent firms' levels of involvement in climate solutions. For example, the paper notes that Tesla, a leader in electric vehicles, has an average CS measure of 57%, compared to 11% for General Motors.
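The CS measure itself is simple to express: the share of a firm's 10-K business description devoted to climate-solution products and services. Below is a minimal sketch of such a measure, with a keyword heuristic standing in for the authors' LLM classifier; the function names and keyword list are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: approximate a firm's "climate solution" (CS) measure as the
# share of sentences in its 10-K Item 1 business description that are labeled
# as describing climate-solution products or services. `label_sentence` is a
# stand-in keyword heuristic, NOT the authors' LLM-based model.

def label_sentence(sentence: str) -> bool:
    """Stand-in for an LLM classifier: flags climate-solution language."""
    keywords = ("electric vehicle", "solar", "wind", "battery", "renewable")
    s = sentence.lower()
    return any(k in s for k in keywords)

def cs_measure(business_description: str) -> float:
    """Fraction of sentences flagged as climate-solution related."""
    sentences = [s.strip() for s in business_description.split(".") if s.strip()]
    if not sentences:
        return 0.0
    flagged = sum(label_sentence(s) for s in sentences)
    return flagged / len(sentences)

example = (
    "We design and sell electric vehicles. "
    "We also operate a network of retail stores. "
    "Our energy segment sells solar and battery storage products."
)
print(round(cs_measure(example), 2))  # prints 0.67
```

A real pipeline would replace `label_sentence` with an LLM call and aggregate the measure across filing years, as the paper does for 2006-2023.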

Key Insight: The Hedging Potential of Climate Solutions

“[H]igh-climate solution firms are better positioned to hedge against transition risks, as their products and services are in greater demand during periods of heightened transition risk, allowing them to capitalize on new market opportunities.” [2]

The paper reveals that companies with a higher focus on climate solutions may offer a unique hedging opportunity for investors. As the world transitions to a low-carbon economy, these firms are likely to see increased demand for their products and services, potentially offsetting risks associated with climate change. The researchers found that high-climate solution firms experience improved future profitability as unexpected climate change concerns increase.

Key Insight: The Mispricing Paradox

“[M]arket participants may underreact to negative news about climate solutions, such as not immediately recognizing the technological or production risks associated with investing in them.” [3]

Despite the potential benefits, the paper suggests that the market may not always accurately price the risks associated with climate solution firms. This mispricing could lead to overvaluation in the short term but may also present opportunities for informed investors. The study found that high-climate solution firms tend to have lower stock returns, possibly due to overvaluation resulting from investor preferences or underestimation of risks.

Key Insight: The Impact of Environmental Regulatory Uncertainty

“We measure environmental regulatory uncertainty using the environmental and climate policy uncertainty (EnvPU) index developed by Noailly et al. (2022).” [4]

The researchers highlight the significant role that policy uncertainty plays in the performance of climate solution firms. They used the EnvPU index, available from 2005 to 2019, to measure the share of environmental policy uncertainty articles among all environmental and climate policy articles in leading U.S. newspapers. By using the EnvPU index, the team demonstrated how regulatory changes can affect these companies’ profitability and market perception. For example, the paper notes that periods of high regulatory uncertainty can boost cash flow for climate solution firms, resulting in higher future profitability.
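As described, the EnvPU index is a simple share. A toy computation follows, with illustrative article counts rather than the index's actual newspaper data:

```python
# Hedged sketch: the EnvPU index (Noailly et al. 2022) is described above as
# the share of environmental-policy-uncertainty articles among all
# environmental and climate policy articles in leading U.S. newspapers.
# The counts below are made up for illustration.

def envpu_share(uncertainty_articles: int, all_env_policy_articles: int) -> float:
    """Share of uncertainty-themed articles in a given period."""
    if all_env_policy_articles == 0:
        raise ValueError("no environmental policy articles in period")
    return uncertainty_articles / all_env_policy_articles

# e.g., 120 uncertainty-themed articles out of 800 environmental policy articles
print(round(envpu_share(120, 800), 3))  # prints 0.15
```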

Why This Matters

For business leaders, investors, and policymakers, understanding the dynamics of climate solutions in the financial markets is crucial for navigating the transition to a low-carbon economy. This research provides valuable insights into how companies focused on addressing climate change may perform under various market conditions and regulatory environments. It highlights the potential for these firms to act as a hedge against transition risks, while cautioning about possible mispricing due to market inefficiencies or investor preferences for environmentally friendly products and services. 

The study offers a new tool for assessing a firm’s climate strategy and corporate sustainability efforts. By understanding the complex interplay between climate solutions, market dynamics, and regulatory uncertainty, executives, investors, and policymakers can anticipate the future while managing associated risks and capitalizing on emerging opportunities. 

References

[1] Shirley Lu, Edward J. Riedl, Simon Xu, and George Serafeim, “Climate Solutions, Transition Risk, and Stock Returns”, 性视界 Business School Working Paper, No. 25-024 (November 11, 2024): 1.

[2] Lu, Riedl, Xu, and Serafeim, “Climate Solutions, Transition Risk, and Stock Returns”, 1.

[3] Lu, Riedl, Xu, and Serafeim, “Climate Solutions, Transition Risk, and Stock Returns”, 2.

[4] Lu, Riedl, Xu, and Serafeim, “Climate Solutions, Transition Risk, and Stock Returns”, 20.

Meet the Authors

Shirley Lu is an Assistant Professor of Business Administration in the Accounting and Management Unit and a member of D^3's Climate and Sustainability Impact Lab. She teaches the Financial Reporting and Control course in the MBA required curriculum.

Edward J. Riedl is a Professor of Accounting and Professor of Management at the Questrom School of Business at Boston University. His research interests include financial reporting mega-trends: fair value accounting, international reporting, and issues relating to environmental, social, and governance (ESG) reporting. Prior to entering academia, he worked at a Big 6 auditor, in internal audit at a Fortune 250 oil company, and in corporate reporting at a real estate brokerage house.

Simon Xu is a Post-Doctoral Fellow in the HBS D^3 Climate and Sustainability Impact Lab. He received his PhD in Finance at the University of California, Berkeley, and is interested in financial intermediation, corporate finance, and banking, with links to climate finance, using LLMs to develop new metrics for assessing firms' climate solution products and services, and their implications for business strategy and market valuation.

George Serafeim is the Charles M. Williams Professor of Business Administration at 性视界 Business School, where he co-leads the Climate and Sustainability Impact Lab within D^3. He teaches the MBA course "Risks, Opportunities, and Investments in an Era of Climate Change" (ROICC), which he developed to guide students in mastering the skills needed for entrepreneurial, managerial, or investment roles in a rapidly evolving climate landscape.


Enabling Healthcare Access through RE-Assist /enabling-healthcare-access-through-re-assist/ Tue, 11 Feb 2025 20:17:41 +0000 /?p=25305

The post Enabling Healthcare Access through RE-Assist appeared first on 性视界 Business School AI Institute.

A recent post from the blackbox Lab at 性视界 Business School's Digital Data and Design (D^3) Institute, "Bridging the Care Gap: How RE-Assist Enhances Healthcare Access," featured a conversation between James W. Riley, Principal Investigator of the lab and Assistant Professor of Business Administration at HBS, and Ashley Barrow, Principal Product Owner of RE-Assist. Their conversation covered the impetus for RE-Assist, its current mission, and its promising future.

A licensed nurse with many years of experience in healthcare and insurance, Barrow has also spent many years applying process knowledge (Fast Healthcare Interoperability Resources, Agile, and Scrum) to explore how artificial intelligence (AI) can be used to improve healthcare. But the original idea for RE-Assist was more personal. When a family member experienced serious health issues and encountered obstacles to care, both small (transportation and nutrition) and large (equity and access), Barrow saw firsthand how even educated individuals with strong support networks are frustrated by the healthcare system.

Barrow conceived RE-Assist as a way to guide patients through the healthcare system by connecting them with quality healthcare, regardless of their resources. The RE-Assist tool runs on an algorithm that helps healthcare providers identify at-risk patients facing a variety of challenges. By filtering on patients鈥 health and access issues, RE-Assist suggests appropriate providers from a network of services, customized to address patients鈥 needs and constraints. The process happens in a fraction of the time it would take providers to make these connections manually.
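The matching logic described above, filtering a provider network against a patient's clinical needs and access constraints, can be sketched in a few lines. The field names, constraints, and providers below are illustrative stand-ins, not RE-Assist's actual schema or algorithm:

```python
# Hedged sketch of rule-based patient-provider matching of the kind the post
# describes. All fields and data are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    services: set
    offers_transport: bool = False
    sliding_scale: bool = False  # income-based pricing

@dataclass
class Patient:
    needs: set                    # e.g., {"cardiology"}
    no_transportation: bool = False
    low_income: bool = False

def match_providers(patient, network):
    """Return names of providers covering the patient's needs and constraints."""
    matches = []
    for p in network:
        if not patient.needs <= p.services:          # must cover all needs
            continue
        if patient.no_transportation and not p.offers_transport:
            continue
        if patient.low_income and not p.sliding_scale:
            continue
        matches.append(p.name)
    return matches

network = [
    Provider("Downtown Cardiology", {"cardiology"},
             offers_transport=True, sliding_scale=True),
    Provider("Suburban Heart Center", {"cardiology"}),
]
print(match_providers(Patient({"cardiology"}, no_transportation=True,
                              low_income=True), network))
```

In practice such filters would run over a large provider database, which is where the speedup over manual referral the post mentions would come from.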

RE-Assist is in the early stages of testing and enhancing its algorithms and is looking at ways to innovate by incorporating AI into its functions. But the overall mission remains the same: to make healthcare available and understandable to everyone, regardless of their backgrounds and resources.


The Creative Edge: How Human-AI Collaboration is Reshaping Problem-Solving /the-creative-edge-how-human-ai-collaboration-is-reshaping-problem-solving/ Thu, 23 Jan 2025 21:12:17 +0000 /?p=25140

The post The Creative Edge: How Human-AI Collaboration is Reshaping Problem-Solving appeared first on 性视界 Business School AI Institute.

As artificial intelligence (AI) capabilities rapidly advance, organizations are exploring new ways to leverage these technologies for creative problem-solving and innovation. A recent HBS working paper, "The Crowdless Future? Generative AI and Creative Problem Solving," by Léonard Boussioux, Assistant Professor at the University of Washington; Jacqueline N. Lane, Assistant Professor at 性视界 Business School and co-Principal Investigator of the Laboratory for Innovation Science at 性视界 (LISH) at the Digital Data Design Institute (D^3); Miaomiao Zhang, doctoral candidate and researcher at LISH; Vladimir Jacimovic, co-founder of D^3 and CEO of ContinuumLab.ai; and Karim R. Lakhani, HBS Professor and co-founder of D^3, investigates how human-AI collaboration compares to traditional crowdsourcing approaches in generating novel and valuable solutions to complex challenges.

In the study, the researchers launched a crowdsourcing challenge to develop sustainable business ideas centered on the circular economy. They engaged 125 global participants from diverse industries and used prompt engineering to facilitate human-AI collaborative solutions. Solutions were generated through two main approaches: human crowd (HC) and human-AI (HAI), in which human solvers partnered with LLMs to co-create solutions. Three hundred external evaluators assessed a random subset of 13 solutions from a total of 234, resulting in 3,900 evaluator-solution pairs. Each solution was rated across five criteria: Novelty, Strategic Viability, Environmental Value, Financial Value, and Overall Quality.
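As a quick sanity check, the reported number of evaluator-solution pairs follows directly from the design figures above:

```python
# The study reports 300 external evaluators, each rating a random subset of
# 13 of the 234 solutions, for 3,900 evaluator-solution pairs in total.
evaluators = 300
solutions_per_evaluator = 13
pairs = evaluators * solutions_per_evaluator
print(pairs)  # prints 3900
```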

Key Insight: Human-AI Solutions Offer Impressive Overall Results

“When considering all factors collectively, the HAI solutions are deemed superior in quality compared to the HC solutions.” [1]

The researchers found that while HC solutions were rated as more novel, HAI-generated solutions scored higher on measures of strategic viability, environmental value, and financial value. Importantly, when all factors were considered together, the HAI solutions were judged to be of higher overall quality. This suggests that AI-augmented approaches may be particularly effective at producing implementable ideas with tangible business value.

Key Insight: Human Guidance Enhances AI Creativity

“Our results demonstrate that for current LLM capabilities, the single instance configuration with iterative human prompts can effectively increase the novelty of outputs while preserving their perceived value.” [2]

The study compared two approaches to human-AI collaboration: an "independent search" and a "differentiated search." The independent search used a multiple-instance configuration, in which a human solver supplied an initial prompt, and the LLM, using distinct instances, independently generated potential solutions by leveraging its extensive search capabilities. The differentiated search, on the other hand, employed a single-instance configuration in which a human interacted with a single instance of the LLM iteratively, providing a series of prompts aimed at diversifying the model's outputs, encouraging it to explore various parts of the solution space.

The researchers found that the human-guided, differentiated search approach produced more novel solutions without sacrificing value, highlighting the importance of human involvement in steering AI creativity.
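The two configurations can be sketched as follows. Here `ask_llm` is a stub standing in for a real LLM call, and the prompts and return values are illustrative rather than the study's materials, but the control flow mirrors the distinction the paper draws: independent instances with no shared context versus one iterative conversation.

```python
# Hedged sketch contrasting the two human-AI search configurations.
# `ask_llm` is a stubbed placeholder so the example runs offline.

def ask_llm(history, prompt):
    """Stub LLM: echoes a canned idea tagged with its conversation depth."""
    return f"idea@turn{len(history) + 1}: {prompt[:30]}..."

def independent_search(seed_prompt, n_instances=3):
    """Multiple-instance config: each instance answers the same seed
    prompt with no shared context."""
    return [ask_llm([], seed_prompt) for _ in range(n_instances)]

def differentiated_search(seed_prompt, follow_ups):
    """Single-instance config: one conversation, with iterative human
    prompts steering the model toward different parts of the solution space."""
    history, outputs = [], []
    for prompt in [seed_prompt] + list(follow_ups):
        reply = ask_llm(history, prompt)
        history.append((prompt, reply))
        outputs.append(reply)
    return outputs

seed = "Propose a circular-economy business idea."
print(independent_search(seed, 2))
print(differentiated_search(seed, ["Now target construction waste.",
                                   "Make it viable for small cities."]))
```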

Key Insight: AI Augmentation Offers Massive Efficiency Gains

“In our specific study, whereas the HC solutions cost $2,555 and 2,520 hours to develop, the final HAI solutions were generated in only 5.5 hours and $27.01.” [3]

One of the most striking findings was the dramatic difference in time and cost between human crowdsourcing and AI-augmented approaches. The human-AI method was able to produce comparable or superior results in a fraction of the time and expense of traditional crowdsourcing. As the data cited above shows, AI-driven R&D approaches reduced costs by 99% and time by 99.8% compared to traditional crowdsourcing methods. 
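The percentage reductions quoted above can be reproduced from the reported dollar and hour figures; this is simple arithmetic on the paper's numbers, not its code:

```python
# Cost fell from $2,555 to $27.01; time fell from 2,520 hours to 5.5 hours.
cost_reduction = 1 - 27.01 / 2555
time_reduction = 1 - 5.5 / 2520
print(f"{cost_reduction:.1%}", f"{time_reduction:.1%}")  # prints 98.9% 99.8%
```

The cost figure rounds to the roughly 99% reduction cited in the text.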

Why This Matters

While the paper notes certain limitations, such as the lack of domain expertise among its crowdsourced evaluators and its reliance on a single LLM, its findings hold significant promise for business leaders looking to innovate in the age of AI and underscore the value of a hybrid approach to creativity. While human ingenuity is vital for novel ideas, AI excels at generating high-quality, scalable solutions. Organizations might combine human brainstorming for initial concepts with AI for rapid iteration, refinement, and evaluation. Effective use of AI requires skilled human interaction, including expertise in prompt engineering and collaboration. To remain competitive, businesses must balance AI's efficiency with human originality, avoiding over-reliance on automation. Embracing AI as a complement to human creativity will drive breakthrough innovations and accelerate the delivery of complex solutions.

References

[1] Léonard Boussioux, Jacqueline N. Lane, Miaomiao Zhang, Vladimir Jacimovic, and Karim R. Lakhani, “The Crowdless Future? Generative AI and Creative Problem Solving,” Working Paper 24-005 (性视界 Business School, 2024): 22.

[2] Léonard Boussioux et al., “The Crowdless Future?”, Working Paper 24-005 (性视界 Business School, 2024), 23.

[3] Léonard Boussioux et al., “The Crowdless Future?”, Working Paper 24-005 (性视界 Business School, 2024), 23.

Meet the Authors

Léonard Boussioux is an Assistant Professor in the Department of Information Systems and Operations Management at the University of Washington, Foster School of Business, with an adjunct position at the Allen School of Computer Science and Engineering. His research combines operations research, machine learning, and artificial intelligence with an emphasis on multimodal frameworks and data-driven decision tools, especially in healthcare and sustainability.

Jacqueline N. Lane is an Assistant Professor at 性视界 Business School and a co-Principal Investigator of the Laboratory for Innovation Science at 性视界 (LISH) at the Digital Data Design Institute (D^3). She earned her PhD from Northwestern University.

Miaomiao Zhang is a doctoral candidate in the Technology & Operations Management Unit at 性视界 Business School and a researcher with LISH at D^3. Miaomiao is interested in the role of generative AI in shaping organizational knowledge production, learning, and innovation processes. Her current research focuses on human-AI collaboration in the early stage of the innovation cycle, specifically idea generation, refinement, iteration, and evaluation.

Vladimir Jacimovic is the co-founder of D^3 and an Advisory Council member, as well as the CEO at ContinuumLab.ai.

Karim R. Lakhani is the Dorothy & Michael Hintze Professor of Business Administration at 性视界 Business School. His innovation-related research is centered around his role as the founder and co-director of the Laboratory for Innovation Science at 性视界 (LISH) and as the Principal Investigator of the NASA Tournament Laboratory. He is also the co-founder and chair of D^3 and the co-founder and co-chair of a university-wide online program transforming mid-career executives into data-savvy leaders.


Music in the Digital Age: Empowering Creators /music-in-the-digital-age-empowering-creators/ Thu, 23 Jan 2025 19:11:23 +0000 /?p=25135

The post Music in the Digital Age: Empowering Creators appeared first on 性视界 Business School AI Institute.

A recent post, "Shaping the Future of Music in the Creator Economy," from the D^3 blackbox Lab described how the music industry is undergoing a transformation driven by digital tools and shifting business models. The blog outlined a panel discussion hosted by James Riley, Principal Investigator of the lab and Assistant Professor at HBS, that featured the CEO of Connect Music and an Assistant Professor from the University of South Carolina. The panel discussed the evolving landscape of artist representation, revenue distribution, and challenges in the creator economy.

Connect Music pioneers an artist-centered model that allows creators to retain 100% ownership of their intellectual property, in contrast to traditional industry norms that often exploit minority artists. By leveraging platforms like TikTok and YouTube, artists gain greater autonomy and financial independence while bypassing corporate gatekeepers. The model also emphasizes localized storytelling and personalized support, empowering artists from underrepresented communities to control their careers and navigate the "monetization maze" of today's industry.

By combining data-driven insights, trust-building, and equitable revenue-sharing practices, Connect Music and similar models aim to foster a more inclusive, creator-driven future, redefining success in the modern music industry.


The Future of Decision-Making: How Generative AI Transforms Innovation Evaluation /the-future-of-decision-making-how-generative-ai-transforms-innovation-evaluation/ Wed, 15 Jan 2025 14:37:51 +0000 /?p=24909

The post The Future of Decision-Making: How Generative AI Transforms Innovation Evaluation appeared first on 性视界 Business School AI Institute.

As businesses grapple with an ever-growing volume of ideas, products, and solutions to evaluate, decision-making processes are being reshaped by artificial intelligence (AI). Generative AI, in particular, has emerged as a game-changer in creative problem-solving and evaluation, as demonstrated by a recent field experiment described in the working paper "The Narrative AI Advantage? A Field Experiment on Generative AI-Augmented Evaluations of Early-Stage Innovations."

The paper, by Jacqueline N. Lane, Assistant Professor at 性视界 Business School and a co-Principal Investigator of the Laboratory for Innovation Science at 性视界 (LISH) at 性视界's Digital Data Design Institute (D^3), and a team of researchers (see the Meet the Authors section below for details), describes how AI can augment decision-making for early-stage innovation screening.

The experiment, conducted with MIT Solve, included 72 experts and 156 non-expert community screeners who evaluated 48 solutions submitted to the 2024 Global Health Equity Challenge. The team used the GPT-4 large language model (LLM) to recommend whether to pass or fail each idea and provide criteria for failure. The evaluation phase was designed with three conditions:

  • A human-only control condition, with no AI assistance
  • Treatment 1: black box AI (BBAI), AI recommendations without rationale
  • Treatment 2: Narrative AI (NAI), AI recommendations with rationale
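The three conditions differ only in what the screener is shown alongside each solution. A minimal sketch of that difference follows; the field names are illustrative assumptions, not the study's actual interface:

```python
# Hedged sketch of the three-arm design: screeners see either no AI output
# (control), a bare pass/fail recommendation (BBAI), or the recommendation
# plus its rationale (NAI). Record fields are hypothetical.

def present_to_screener(condition, recommendation, rationale):
    """Return what a screener in each arm would see for one solution."""
    if condition == "control":
        return {}
    if condition == "bbai":
        return {"ai_recommendation": recommendation}
    if condition == "nai":
        return {"ai_recommendation": recommendation, "ai_rationale": rationale}
    raise ValueError(f"unknown condition: {condition}")

rec, why = "fail", "Solution lacks a credible deployment partner."
for arm in ("control", "bbai", "nai"):
    print(arm, present_to_screener(arm, rec, why))
```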

Key Insight: AI-Augmented Decisions Are More Stringent

"Screeners were 9 percentage points more likely to fail a solution under the treatment conditions than the control condition." [1]

Generative AI can be a source of rigor in evaluation. According to the authors, evaluators using AI recommendations were more discerning in their decision-making compared to human-only groups. The study highlights that AI-assisted screeners tended to fail solutions more often than their human-only counterparts, particularly when using treatment 2, which provided detailed narratives justifying its recommendations.

The NAI approach stood out as particularly effective, especially for subjective criteria like quality or alignment with goals. The researchers observed that human screeners were significantly more likely to follow narrative AI’s recommendations because the rationale added credibility and context to its suggestions.

Key Insight: Balancing Objectivity and Subjectivity in AI Collaboration

"[E]ffective decision-making for subjective criteria requires human oversight and close collaboration with AI." [2]

While AI excels at tasks requiring objective analysis, its role in subjective evaluations remains nuanced. The study revealed a marked difference in human alignment with AI recommendations based on whether the criteria were objective or subjective. For objective tasks, such as assessing technical feasibility, AI provided valuable consistency. However, for subjective tasks, such as evaluating novelty or aesthetics, human oversight was indispensable. The researchers noted that over-reliance on AI narratives for subjective decisions could sometimes lead to uncritical acceptance of its conclusions.

Key Insight: The Rise of AI Interaction Expertise

"[Our findings suggest] the emergence of a new form of expertise—AI interaction expertise—which involves effectively interpreting, questioning, and integrating AI-generated insights into decision-making processes." [3]

The authors suggested that integrating AI into decision-making demands more than technical know-how; it requires "AI interaction expertise." The paper emphasized that screeners who engaged deeply with AI recommendations (examining and, when necessary, challenging them) were better able to integrate AI insights into their decisions. This highlights a new skill set for the modern workforce: the ability to collaborate effectively with AI systems.

Why This Matters

The authors' experiment and conclusions can help C-suite and business executives assess the value of using LLMs in decision-making, specifically by:

  • Recognizing AI's strengths and weaknesses related to objective and subjective decision-making criteria. LLMs can potentially pre-screen decisions based on objective criteria and pass those results to human screeners. Decisions involving subjective criteria require close human-AI collaboration, where AI tools act as "sounding boards" that complement the decision-making process.
  • Understanding the importance of AI interaction expertise in the workforce, and implementing AI training that highlights the value of human perspectives alongside the uses and risks of AI tools.

As is often the case in studies of the current state of generative AI tools, the authors concluded that "The key lies in leveraging LLMs as tools to augment human decision-making rather than replace it entirely." [4]

References

[1] Jacqueline N. Lane, Léonard Boussioux, Charles Ayoubi, Ying Hao Chen, Camila Lin, Rebecca Spens, Pooja Wagh, and Pei-Hsin Wang, “The Narrative AI Advantage? A Field Experiment on Generative AI-Augmented Evaluations of Early-Stage Innovations”, 性视界 Business School Working Paper 25-001 (2024): 5.

[2] Lane, et al., “The Narrative AI Advantage? A Field Experiment on Generative AI-Augmented Evaluations of Early-Stage Innovations”, 33.

[3] Lane, et al., “The Narrative AI Advantage? A Field Experiment on Generative AI-Augmented Evaluations of Early-Stage Innovations”, 31.

[4] Lane, et al., “The Narrative AI Advantage? A Field Experiment on Generative AI-Augmented Evaluations of Early-Stage Innovations”, 36.

Meet the Authors

Jacqueline N. Lane is an Assistant Professor at 性视界 Business School and a co-Principal Investigator of the Laboratory for Innovation Science at 性视界 (LISH) at 性视界's Digital Data Design Institute (D^3). She earned her Ph.D. from Northwestern University.

Léonard Boussioux is an Assistant Professor in the Department of Information Systems and Operations Management at the University of Washington, Foster School of Business, with an adjunct position at the Allen School of Computer Science and Engineering.

Charles Ayoubi is a Postdoctoral Research Fellow at the Laboratory for Innovation Science at 性视界 (LISH), supported by a research grant from the Swiss National Science Foundation (SNSF). His research examines the processes of knowledge creation and diffusion in the context of science and innovation. He studies how scientists use their resources and informational advantages to achieve scientific breakthroughs, greater dissemination of knowledge, and accessibility of innovation.

Ying Hao Chen is a Lecturer at the University of Washington Global Innovation Exchange.

Camila Lin is an AIOps Product Manager at Microsoft. Prior to her work at Microsoft, Lin earned her Master's in Information Systems from the University of Washington, where she worked as a Research Assistant.

Rebecca Spens is Results Measurement Manager at MIT Solve and focuses on using research methods to understand Solve's effectiveness and impact. Before joining Solve, Rebecca worked on evaluation and research in UK government, most recently at the Ministry of Justice. Rebecca holds a Master's in Development Practice from Emory University and a BA in Modern History and French from the University of St. Andrews.

Pooja Wagh is Director, Operations & Impact at MIT Solve. Pooja came to Solve in 2017 with over a decade of experience in international development, program evaluation, and data analysis in the private and nonprofit sectors. Pooja holds a Master's in Public Policy from the 性视界 Kennedy School and a Bachelor's in electrical engineering from MIT.

Pei-Hsin Wang is a Cloud First Product Manager at Accenture. At the time of the research article's publication, Wang was a Research Assistant and Data Scientist at the University of Washington.


