HBS AI Faculty, Author at Harvard Business School AI Institute. The Harvard Business School AI Institute catalyzes new knowledge to invent a better future by solving ambitious challenges. Mon, 25 Nov 2024 22:30:20 +0000

Don't Expect Juniors to Teach Senior Professionals to Use Generative AI /dont-expect-juniors-to-teach-senior-professionals-to-use-genai/ Tue, 25 Jun 2024 19:28:30 +0000 /?p=21790 Emerging technologies like generative AI are revolutionizing industries, but integrating these tools effectively poses unique challenges.

The post Don't Expect Juniors to Teach Senior Professionals to Use Generative AI appeared first on Harvard Business School AI Institute.

Emerging technologies like generative AI are revolutionizing industries, but integrating these tools effectively poses unique challenges. The study "Don't Expect Juniors to Teach Senior Professionals to Use Generative AI," authored by Institute experts Fabrizio Dell'Acqua, Edward McFowland III, and Karim R. Lakhani, in collaboration with Katherine Kellogg (MIT), Ethan Mollick (University of Pennsylvania), and François Candelon (BCG), uncovers crucial insights into the dynamics between junior and senior professionals and offers strategies for successful adoption.

Key Insights

  1. Real Barrier: Technology Risks, Not Status: Contrary to common belief, the primary obstacle isn’t the threat to senior professionals’ status but the novel risks associated with generative AI. These risks include inaccuracies, lack of explainability, and issues with contextual relevance, which are unfamiliar and challenging for senior professionals.
  2. Ineffective Mitigation by Juniors: Juniors often lack a deep understanding of generative AI’s capabilities, leading them to suggest risk mitigation tactics that focus on altering human routines instead of addressing system-level design changes. This approach is insufficient for managing the complexities of AI.
  3. System-Level Solutions Required: Effective integration of generative AI demands system-level interventions. Organizations need to develop robust frameworks for risk mitigation, establish clear guidelines for AI use, and ensure continuous learning and adaptation through real-time data updates.

Gain a comprehensive understanding of the challenges and risks posed by generative AI and discover actionable strategies to equip your organization for innovation and growth.

Skill Over Effort: How GPT Outshines Humans in Reframing Negativity /skill-over-effort-how-gpt-outshines/ Mon, 29 Apr 2024 19:03:25 +0000 /?p=21125 How well can AI understand and reinterpret human emotions? In a revealing study by the Digital Emotions Lab, GPT-4 not only grasped the essence of human disappointments but also excelled in transforming these sentiments positively, outstripping humans in crafting cognitive reappraisals.

Can AI provide emotional support to humans? Researchers at the Digital Emotions Lab have undertaken an intriguing project focusing on a process called cognitive reframing, a strategy for reducing negative emotions by changing the interpretation of emotional situations. To compare the ability of humans and AI to perform cognitive reframing of negative situations, they developed 18 vignettes, such as "My friend forgot my birthday after saying we'd go to dinner together. I feel unwanted." They then trained both humans and GPT-4 to perform cognitive reframing, using a well-established training process that has been validated in tens of thousands of people. Human raters then evaluated the quality of these efforts based on their effectiveness, empathy, novelty, and specificity.

Key Insights:

Performance Insights: Measured against 4,195 human attempts, GPT-4 consistently performed better across most dimensions, ranking in the 85th percentile for the quality of its responses, with the exception of specificity ("This rethinking is specific to the following scenario").
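The percentile figure can be made concrete: a percentile rank is the share of comparison scores that fall below a given score. A minimal sketch with synthetic rater scores on a 1-7 scale (illustrative numbers, not the study's data):

```python
def percentile_rank(model_score, human_scores):
    """Percentage of human scores strictly below the model's score."""
    below = sum(1 for score in human_scores if score < model_score)
    return 100.0 * below / len(human_scores)

# Synthetic average rater scores for ten human reappraisals (illustrative only)
human_scores = [3.1, 4.0, 4.4, 4.8, 5.2, 5.5, 5.9, 6.1, 6.3, 6.6]
model_score = 6.2
print(percentile_rank(model_score, human_scores))  # 80.0
```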

Effort vs. Skill: To determine if effort influenced outcomes, researchers offered human participants up to 150% of their base pay for exceptional reappraisals. Despite spending more time on the task, participants showed no noticeable improvement in their responses. This suggests that GPT-4's superior performance is likely due to skill rather than greater effort.

Analyzing the Content: Using advanced analysis techniques, the study measured how closely the rethought scenarios aligned with the original vignettes. It was found that human responses tended to stick closely to the original scenarios, while GPT-4鈥檚 were generally broader. Notably, GPT-4 produced better-quality responses when they were more aligned with the vignettes, whereas humans excelled when they ventured beyond the original context, showing a distinct difference in approach between humans and AI.
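The alignment measurement can be approximated with a simple bag-of-words cosine similarity; the study used more advanced analysis techniques, so this is only a rough stand-in, and the reappraisal text below is invented for illustration:

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Bag-of-words cosine similarity: 1.0 for identical word counts, 0.0 for no overlap."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[word] * b[word] for word in a)
    norm_a = math.sqrt(sum(count * count for count in a.values()))
    norm_b = math.sqrt(sum(count * count for count in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

vignette = "My friend forgot my birthday after saying we'd go to dinner together. I feel unwanted."
reappraisal = "My friend has been overwhelmed lately; one forgotten birthday does not mean I am unwanted."
print(round(cosine_similarity(vignette, reappraisal), 2))
```

A higher score means the reappraisal stays closer to the original scenario's wording, the pattern the study associated with GPT-4's best responses.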

This research sheds light on the significant differences in how AI and humans approach the task of cognitive reframing. It reveals that AI can excel in closely following the specific emotional context, while humans show strength in applying a broader perspective. These findings not only help us understand AI’s potential in tasks requiring emotional insight but also suggest ways AI might support human efforts in fields like therapy and customer service, where understanding and reshaping emotions are crucial.

Lessons in Applying Responsible AI /lessons-in-applying-responsible-ai/ Mon, 08 Apr 2024 14:19:26 +0000 /?p=20959 Insights from the March 27th, 2024 capstone session on Gen AI applied use cases, risks, and responsibilities Recordings and articles from the Generative AI in Healthcare series can be found here. In the capstone session of the Generative AI in Healthcare series, Satish Tadikonda (HBS) spoke with Responsible AI Institute experts Manoj Saxena, Var Shankar, […]

Insights from the March 27th, 2024 capstone session on Gen AI applied use cases, risks, and responsibilities

Recordings and articles from the Generative AI in Healthcare series can be found here.

In the capstone session of the Generative AI in Healthcare series, Satish Tadikonda (HBS) spoke with Responsible AI Institute experts Manoj Saxena, Var Shankar, and Sabrinah Shih as they outlined current applications of generative AI in healthcare through the lens of two case studies. The conversation included exclusive interviews with Veronica Rotemberg (Memorial Sloan Kettering Cancer Center) and Rakesh Joshi (Skinopathy), who shared their perspectives on responsible AI implementation, and closed with a broader discussion on how these lessons can be applied in a variety of AI contexts.

Responsible AI as a Foundation for Positive Impact

Satish Tadikonda and Manoj Saxena kicked off the session by discussing the key questions at the core of this series – can generative AI have a positive impact in healthcare, and if so, how can such tools be responsibly deployed? Tadikonda summarized the prior sessions with a resounding yes, that AI can indeed produce positive impacts in healthcare, but that it is crucial to do so responsibly due to societal concerns and the varying adoption levels across healthcare domains. Saxena affirmed that enterprises are applying AI with cautious excitement, and outlined a framework anchored in NIST guidelines to operationalize responsible AI. He illustrated examples of potential AI harms, such as incorrect dosages and data privacy breaches in an NHS project involving a generative AI chatbot, advocating for a comprehensive approach to ensure AI’s positive impact. He also underscored the need for responsible AI governance from the top levels of organizations, integrating principles and frameworks early in AI design to mitigate risks effectively.
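The NIST guidelines referenced here organize AI risk work into the four core functions of the NIST AI Risk Management Framework: Govern, Map, Measure, and Manage. A minimal sketch of a risk register built on those functions; the example activities and helper function are illustrative, not drawn from the session:

```python
# The four core functions come from NIST AI RMF 1.0; the activities are invented examples.
nist_ai_rmf = {
    "Govern": ["assign executive accountability for AI risk",
               "set deployment policies for generative AI chatbots"],
    "Map": ["inventory AI use cases",
            "document intended use and affected patient populations"],
    "Measure": ["benchmark output accuracy (e.g., dosage suggestions)",
                "audit for data privacy leakage"],
    "Manage": ["prioritize and remediate identified risks",
               "monitor deployed models for drift"],
}

def open_items(register, completed):
    """Return, per function, the activities not yet marked complete."""
    remaining = {}
    for function, activities in register.items():
        todo = [activity for activity in activities if activity not in completed]
        if todo:
            remaining[function] = todo
    return remaining

completed = {"inventory AI use cases"}
for function, todo in open_items(nist_ai_rmf, completed).items():
    print(function, "->", todo)
```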

Applied Use Case – Memorial Sloan Kettering Cancer Center

Sabrina Shih then took the helm to explain the use of skin lesion analyzers in detecting cancerous lesions, outlining their integration into patient diagnosis journeys through self-examination support and aiding clinicians. Shih highlighted the benefits of such analyzers, including convenience and triaging support, while also discussing the advancements in generative AI for skin cancer detection. In an interview, Dr. Veronica Rotemberg, Director of the Dermatology Imaging Informatics Group at Memorial Sloan Kettering Cancer Center, discussed the institution’s approach to evaluating an open-source, non-commercial dermatoscopy-based algorithm for melanoma detection. She delved into the characteristics of trustworthy AI and emphasized the need for large collaborative studies and efficient validation processes to improve the accuracy and efficiency of AI algorithms. She also addressed concerns regarding biases and the challenges in evaluating AI performance in clinical scenarios, stressing the importance of multidisciplinary collaboration and thoughtful consideration of potential harms to ensure patient safety and well-being.

Applied Use Case – Skinopathy

In the second case presented, Dr. Rakesh Joshi, Lead Data Scientist at Skinopathy, discussed the company's patient engagement platform and app called "GetSkinHelp." The app leverages AI to provide remote skin cancer screening for patients, facilitate scheduling options, and enable triaging of cases for clinicians. Dr. Joshi emphasized three key characteristics of trustworthy AI platforms: consistency of results, accuracy against benchmark data sets, and error transparency. He explained Skinopathy's approach to addressing these characteristics, including ensuring reproducible results across different skin regions and types, benchmarking against open-source data, and being transparent about system limitations and biases. Dr. Joshi also noted the importance of randomized clinical trials to validate the platform's reliability across diverse demographics, and the need for ongoing recalibration and data collection to maintain accuracy, particularly in underrepresented geographic areas or demographic groups. Additionally, he highlighted the company's careful consideration of patient privacy and the collaborative decision-making process involving stakeholders such as patients, physicians, and ethicists in determining the platform's features and data usage. He also discussed strategies to mitigate biases in clinical decision-making, such as presenting AI results after clinicians have made their assessments. He asserted that the aim of the tool is to support clinical decision-making rather than replace it, and to maintain simplicity for patients while providing detailed information to clinicians.
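The first two of Dr. Joshi's trustworthy-AI characteristics, consistency and benchmark accuracy, translate naturally into automated acceptance checks. A hypothetical sketch (the toy classifier, threshold, and benchmark data are invented; a real dermatology model operates on images, not lesion diameters):

```python
def check_consistency(model, case, n_runs=5):
    """Consistency: repeated predictions on the same input should not vary."""
    predictions = {model(case) for _ in range(n_runs)}
    return len(predictions) == 1

def check_accuracy(model, benchmark, threshold=0.9):
    """Accuracy: agreement with a labeled benchmark set must meet a threshold."""
    correct = sum(1 for case, label in benchmark if model(case) == label)
    return correct / len(benchmark) >= threshold

# Deterministic stand-in classifier: refer lesions larger than 6 mm (an invented rule)
def toy_model(lesion_diameter_mm):
    return "refer" if lesion_diameter_mm > 6 else "monitor"

benchmark = [(4, "monitor"), (5, "monitor"), (7, "refer"), (9, "refer"), (3, "monitor")]
print(check_consistency(toy_model, 7))       # True: the model is deterministic
print(check_accuracy(toy_model, benchmark))  # True: 5/5 agree with labels
```

The third characteristic, error transparency, is harder to automate; it amounts to publishing known failure modes and limitations alongside results.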

Transferable Takeaways

The throughline of these case studies was that AI applications are duty-bound to prioritize safety, accuracy, and transparency in developing solutions for patient care. Var Shankar closed out the presentation by emphasizing the importance of drawing lessons from these examples to develop a responsible approach for organizations seeking to deploy AI in healthcare and other industries. He outlined key steps, including building on existing knowledge, understanding the benefits and pitfalls of AI, defining trustworthy characteristics, investing in testing and evaluation, and ensuring external checks are in place.

In the Q&A that followed, the speakers also delved into the role of culture in applying AI solutions, with Manoj Saxena highlighting the need for leadership in cultivating an environment which supports experimentation and lifelong learning. The conversation further explored the implications of AI regulation, particularly in the context of new global AI laws, and the need for responsibility to be a key facet throughout the AI pipeline, rather than an afterthought once the technology has been built. Saxena also explained that responsible AI can be profitable for institutions under financial pressure by separating AI investments into "mundane vs. moonshot" buckets, starting with cost-saving projects that boost efficiency before investing in more transformative initiatives, thereby creating benefits for both AI-driven organizations and the audiences they seek to serve.

The Gen AI in Healthcare series is collaboratively produced by Harvard's Digital, Data, Design (D^3) Institute and the Responsible AI Institute.

About Responsible AI Institute

Founded in 2016, Responsible AI Institute (RAI Institute) is a global, member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. RAI Institute's conformity assessments and certifications for AI systems support practitioners as they navigate the complex landscape of AI products. Members including ATB Financial, Amazon Web Services, Boston Consulting Group, Yum! Brands, and many other leading companies and institutions collaborate with RAI Institute to bring responsible AI to all industry sectors.

Scaling AI for Hospitals and Healthcare Providers /scaling-ai-for-hospitals-and-healthcare-providers/ Wed, 13 Mar 2024 19:08:55 +0000 /?p=20650 Insights from the March 5th, 2024 session on Gen AI use cases among Healthcare Providers Recordings from the Generative AI in Healthcare series can be found here – session 1, session 2, session 3, and session 4 In the fourth session of the Generative AI in Healthcare series, speakers Nikhil Bhojwani (Recon Strategy) and Satish […]

Insights from the March 5th, 2024 session on Gen AI use cases among Healthcare Providers

Recordings from the Generative AI in Healthcare series can be found here: session 1, session 2, session 3, and session 4.

In the fourth session of the Generative AI in Healthcare series, speakers Nikhil Bhojwani (Recon Strategy) and Satish Tadikonda (HBS) outlined the role of generative AI for hospitals and healthcare providers, followed by an engaging panel discussion with guest experts Marc Succi, Alexandre Momeni, Frederik Bay, and Timothy Driscoll, who shared their perspectives on current and future AI applications in the space.

Current Landscape

Nikhil Bhojwani discussed the potential applications of AI within the platform of health systems, emphasizing its role in various areas including clinical work, education, research, patient interaction, revenue cycle management, interoperability, and general organizational functions. Even within the complexity of health systems, there exist a multitude of opportunities for AI to augment, substitute, or support human activities across different departments and functions. Nikhil and Satish encouraged further exploration of specific use cases within each domain and invited the panelists to share their perspectives on the practical applications of AI in the context of hospitals and providers.

Opportunities in Digital Health

Marc Succi discussed various opportunities within Mass General Brigham, ranging from low-risk to high-risk endeavors, with differing timelines for implementation and adoption. While certain initiatives like streamlined prior authorization are already being implemented, more disruptive concepts such as clinical workflow and decision support are expected to take longer. Succi emphasized the importance of ensuring equity, enhancing patient experience, and addressing healthcare worker burnout in the implementation of AI technologies. Alexandre Momeni of General Catalyst further elaborated on three ways health systems can utilize AI: for innovation, transformation, and efficiency. He discussed the regulatory frameworks surrounding AI in clinical decision support and highlighted the potential for AI to significantly impact healthcare workflows. Boston Children's Hospital's Timothy Driscoll also outlined the institution's numerous applications of AI, including operational efficiency, clinical decision support, research, education, and patient care, stressing the importance of responsible AI development and governance structures to maximize its benefits in healthcare settings. Frederik Bay also discussed Adobe's focus on patient engagement and digital marketing expertise, utilizing generative AI to overcome traditional barriers to adoption in healthcare systems. He highlighted opportunities for personalized engagement and document management, including faster document creation, insights extraction from existing data, as well as image tagging and labeling, but clarified they were not currently focused on the clinical side of radiology due to security and legal considerations.

Strategic AI Implementation

Timothy Driscoll then described his strategic approach to the AI portfolio at Boston Children's Hospital, focusing on objectives such as demonstrating AI's impact on care quality, ensuring ethical and sustainable use, and driving efficiency and expertise. The hospital holds itself to key principles of diversity, fairness, accountability, and robust governance, fostering a commitment to inclusive and transparent AI development. Driscoll also discussed specific areas where AI drove value, including diagnostic support models and synthesizing complex patient data for frontline staff. He noted a phased approach to implementation, building foundational capabilities, defining prioritization frameworks, and rapidly scaling high-impact use cases. When asked about the use of synthetic data and compliance, Driscoll explained his team's focus on leveraging actual patient data, but acknowledged scenarios where synthetic data was used for intelligent automations, such as resume scanning. Marc Succi also shared Mass General Brigham's approach to AI adoption, outlining the importance of research and validation through their data science office. He noted the challenges of FDA approval versus actual adoption in patient care, which raises the need for socialization and education within the healthcare community. Succi discussed the deployment of low-risk tools to familiarize users with AI concepts and mentioned ongoing investigations into clinical decision support algorithms, noting the impact on operational use cases in clinical settings.

Risk and Responsibility

To close out the session, Nikhil Bhojwani shared some of the unique risks related to irresponsible AI use in healthcare, referring to the Responsible AI Institute's framework to categorize risks. Among the examples Bhojwani gave were inaccuracies in AI-generated notes by scribes, safety concerns regarding AI-driven drug delivery systems, resilience issues with predictive models like sepsis detection, accountability challenges in AI recommendations, explainability difficulties, privacy risks from de-anonymization of data, and fairness concerns due to biases in training data. These use cases illustrate the multifaceted nature of AI risks in provider systems and underscore the need for robust solutions to ensure responsible implementation.

Momeni added that trust in AI systems is of utmost importance and suggested three key considerations: the degree of automation, benchmarking and evaluation methods, and the establishment of industry standards. Bay noted that transparency and governance processes are also key to establishing trust in AI development, while Succi and Driscoll both emphasized the importance of checks and balances to ensure responsible use. As an example, they mentioned existing practices where physicians review AI-generated notes and reports, driving home the consensus that human accountability remains crucial, especially with potential concerns about over-reliance on AI. The panel agreed that with robust accountability mechanisms in place, such tools could be used to vastly improve the experience of both patients and providers.



Scientific Talent Leaks Out of Funding Gaps /scientific-talent-leaks-out-of-funding-gaps/ Tue, 05 Mar 2024 20:38:51 +0000 /?p=20363 In the recent paper "Scientific Talent Leaks Out of Funding Gaps," Wei Yang Tham from the Laboratory for Innovation Science at Harvard, along with Joseph Staudt and Bitsy Perlman of the US Census Bureau, and Stephanie Cheng of Edgeworth Economics, highlights the pivotal role of sustained funding in fostering the growth of future scientists.

In the recent paper "Scientific Talent Leaks Out of Funding Gaps," Wei Yang Tham from the Laboratory for Innovation Science at Harvard, along with Joseph Staudt and Bitsy Perlman of the US Census Bureau, and Stephanie Cheng of Edgeworth Economics, highlights the important role of stable and timely funding in retaining scientific talent within the US. This work is a call to action for the US to improve funding stability to support the US research workforce.

The paper underscores the fragility of the scientific workforce ecosystem, spotlighting how even temporary funding interruptions can destabilize individual careers and thereby hurt US innovation. Key insights include:

  • Significant Career Impacts: Funding delays are linked to a stark 40% increase in the likelihood of scientific personnel not working in the US. Those who remain employed and in the US experience notable declines in salary. This attrition of talent poses a direct threat to the competitive edge and intellectual capital of the nation.
  • Trainees Most Affected: These effects are disproportionately borne by trainees. This suggests that potential future leaders in science are being lost, further exacerbating the long-term effects on the scientific community and its capacity for innovation.
  • Call to Action for Policy Reforms: These findings serve as a critical call to action for policymakers, funding agencies, and universities, emphasizing the need for reforms to ensure consistent and reliable funding to support the valuable scientific workforce that depends on grants. Such measures can help to retain talent, foster innovation, and maintain US leadership in global research and development.

The paper is an essential read for anyone involved in or affected by scientific research funding, offering compelling evidence of the impacts of funding instability and the need for systemic change to protect and nurture the future of science in the US.

Scaling AI Applications in Digital Health /scaling-ai-applications-in-digital-health/ Tue, 27 Feb 2024 17:00:22 +0000 /?p=20315 Insights from the January 31st, 2024 session on Gen AI use cases in the Digital Health sector In the third session of the Generative AI in Healthcare series, speakers Nikhil Bhojwani (Recon Strategy) and Satish Tadikonda (HBS) provided a thought-provoking overview of the current digital health landscape, followed by an engaging panel discussion led by […]

Insights from the January 31st, 2024 session on Gen AI use cases in the Digital Health sector

In the third session of the Generative AI in Healthcare series, speakers Nikhil Bhojwani (Recon Strategy) and Satish Tadikonda (HBS) provided a thought-provoking overview of the current digital health landscape, followed by an engaging panel discussion led by Alyssa Lefaivre Škopac (Responsible AI Institute). Panel speakers Payal Agarwal Divakaran, Reena Pande, and Andrew Le shared their valuable insights as investors, physicians, and executives at the forefront of AI in digital health.

Recordings of the Generative AI in Healthcare series: session 1, session 2, and session 3.

Current Landscape

Nikhil Bhojwani kicked off the session with a presentation outlining the intersection of digital technology, artificial intelligence (AI), and healthcare. Bhojwani provided an overview of the pervasive influence of digital components across healthcare domains, illustrating the various ways in which AI was employed, including supporting, augmenting, and substituting human work. Examples were presented, ranging from AI synthesis for electronic health records to AI symptom checkers for diagnosis, prompting dialogue on the implications and ethical considerations of AI integration in healthcare tasks traditionally performed by humans. The conversation emphasized the need for a nuanced understanding of AI’s role in healthcare delivery and management, laying the groundwork for further exploration of its ethical, practical, and regulatory dimensions.

Opportunities in Digital Health

RAI鈥檚 Alyssa Lefaivre 艩kopac then initiated a discussion about the opportunities in digital health, particularly focusing on AI integration. Payal Divakaran elaborated on the different perspectives in adopting AI in consumer, physician, and enterprise-oriented use cases, and the importance of trust in AI adoption. Payal noted the lag in enterprise adoption compared to consumer-facing applications, queuing up Andrew Le to share his perspective on the benefits of consumer-facing AI applications, citing examples of how AI can empower consumers by processing complex healthcare data and providing user-friendly interfaces. Andrew expounded upon the transformative potential of AI in enabling consumers to make sense of healthcare data and navigate the healthcare system more effectively, and the significance of AI in optimizing back-office operations in healthcare was echoed amongst the panel members. Reena Pande also noted the importance of focusing on fundamental problems and considering the provider experience alongside patient outcomes and cost. She highlighted opportunities for AI to streamline administrative tasks, improve diagnostic capabilities, and augment patient treatment, urging for careful consideration of where AI can be responsibly applied. Reena underscored the need for a nuanced approach to AI deployment that serves the interests of all stakeholders.

Risks and Limitations

The panelists expressed a mix of skepticism and optimism regarding the integration of AI as a co-pilot in healthcare, acknowledging the need for trust-building and careful consideration of the nuances in healthcare. They discussed the challenges of ensuring responsible AI deployment, including issues of validity, safety, security, accountability, transparency, explainability, data privacy, and bias. Additionally, they highlighted the importance of addressing these risks to foster confidence and ensure the ethical and effective use of AI in healthcare, including the significance of transparent disclosures and establishing clear accountability frameworks within healthcare institutions. They also responded to keen observations from the audience regarding the need for cultural competency to ensure the unique and varied needs of each user, from patients and physicians to administrators and regulators, are being met by their AI solutions. In order to address these challenges, the panel reiterated the need for ongoing dialogue and collaboration between stakeholders to navigate the complexities of AI integration responsibly.

Accountable AI in Healthcare

The panel discussion then delved into the emerging guidance from regulatory bodies like the FDA on managing the integration of AI in healthcare. Participants engaged in a robust conversation about the accountability, transparency, and interpretability of AI systems in healthcare decision-making processes. They explored the traditional role of clinicians as the ultimate decision-makers and discussed the challenges and opportunities in distributing responsibility among various contributors, including AI systems.

As the session concluded, the moderators and panelists stressed the importance of establishing clear accountability frameworks and guardrails to mitigate risks associated with AI deployment in digital health. They emphasized the need for scalable access to data and partnerships between tech giants and healthcare incumbents to foster trust and manage risks effectively. The discussion underscored the complexity of evaluating AI tools and the necessity of ongoing dialogue and collaboration to responsibly address the evolving landscape of AI integration in healthcare.




The Eco-Digital Era: The dual transition to a sustainable and digital economy /the-eco-digital-era-the-dual-transition-to-a-sustainable-and-digital-economy/ Fri, 16 Feb 2024 18:55:57 +0000 /?p=20207 The Digital Value Lab at Digital Data Design Institute, in collaboration with Capgemini Research Institute, unveils a joint research initiative poised at the intersection of innovation and value creation in the burgeoning eco-digital economy. This comprehensive research paper navigates through the transformative impact of generative AI, digital twins, edge computing, immersive technologies, quantum computing, and […]

The Digital Value Lab at Digital Data Design Institute, in collaboration with Capgemini Research Institute, unveils a joint research initiative poised at the intersection of innovation and value creation in the burgeoning eco-digital economy. This comprehensive research paper navigates through the transformative impact of generative AI, digital twins, edge computing, immersive technologies, quantum computing, and synthetic biology鈥攖echnologies that are forging a new paradigm in both digital and sustainable economic development.

Our research underlines the significant attention generative AI commands within corporate strategies, with its presence on the agenda of 96% of organizations. The adoption of digital twins and edge computing is demonstrating robust improvements in efficiency and infrastructure, while immersive technologies captivate consumer interest, enhancing the buying journey. Furthermore, the anticipated integration of quantum computing is set to overhaul process efficiencies and elevate security measures. Synthetic biology emerges at the forefront, blending various fields to engender operational expenditure reduction and sustainability through innovative solutions.

The paper sheds light on the dual transition towards a digital and sustainability-centric economy, providing insights into how these emerging technologies can act as catalysts for environmental and economic benefits. As we venture into this 'jagged technological frontier', the insights from the Digital Value Lab and Capgemini serve as a strategic guide for organizations aiming to harness the disruptive potential of these technologies for competitive advantage and a sustainable future.

Dive into the full paper here to discover how your enterprise can embrace this shift and prosper in the new eco-digital landscape. This research, a collaboration between Capgemini Research Institute and the Digital Value Lab, charts the course for navigating the uncharted waters of tomorrow's technological breakthroughs.


Certifying LLM Safety Against Adversarial Prompting /certifying-llm-safety-against-adversarial-prompting/ Fri, 02 Feb 2024 15:17:34 +0000

The post Certifying LLM Safety Against Adversarial Prompting appeared first on Harvard Business School AI Institute.

Large language models (LLMs) released for public use incorporate guardrails to ensure their output is safe, often referred to as "model alignment." The study presented by Chirag Agarwal, Suraj Srinivasan, Himabindu Lakkaraju, Aounon Kumar, and Aaron Jiaxun Li, along with University of Maryland colleague Soheil Feizi, investigates a novel approach for ensuring the safety of LLMs against adversarial prompts. These prompts are designed to manipulate LLMs into generating harmful content, challenging the current safety measures in place.

The research introduces an “erase-and-check” method, which evaluates the safety of prompts by sequentially erasing tokens and checking the modified sequences for harmful content. This method is tested against various forms of adversarial attacks, demonstrating its effectiveness in maintaining the integrity of LLM responses. The study also compares this approach with existing techniques like randomized smoothing, highlighting its superior performance in certifying safety.
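The core loop of the method is simple enough to sketch. The version below handles the suffix-attack case: a prompt is rejected if the full token sequence, or any subsequence obtained by erasing up to a bounded number of trailing tokens, is flagged by a safety classifier. Here `is_harmful` is a hypothetical stand-in for the paper's safety filter, and the parameter names are illustrative:

```python
def erase_and_check(tokens, is_harmful, max_erase=20):
    """Sketch of the erase-and-check safety certificate (suffix mode).

    Rejects a prompt if the full token sequence, or any prefix obtained
    by erasing up to `max_erase` trailing tokens, is flagged as harmful.
    A prompt that passes is certified safe against adversarial suffixes
    of up to `max_erase` tokens appended to a harmful base prompt.
    """
    for erased in range(max_erase + 1):
        if erased >= len(tokens):
            break
        # erased == 0 checks the full prompt; otherwise drop a suffix
        candidate = tokens if erased == 0 else tokens[:-erased]
        if is_harmful(candidate):
            return False  # unsafe: some erased subsequence is harmful
    return True  # certified safe under the suffix-attack model
```

Insertion and infusion attacks place adversarial tokens at arbitrary positions, so they require checking more erased subsequences; the same idea applies, at higher cost.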

This paper offers a significant contribution to the field of AI safety, proposing a robust and effective method to protect LLMs from sophisticated adversarial prompts. The findings emphasize the need for ongoing advancements in safety measures to ensure the responsible and secure use of LLMs in various applications.


Who is AI Replacing? /who-is-ai-replacing/ Fri, 02 Feb 2024 15:02:39 +0000

The post Who is AI Replacing? appeared first on Harvard Business School AI Institute.

How will the release of generative AI tools affect freelance jobs that require different skills or software?

Research from Ozge Demirci, along with colleagues Jonas Hannane from the German Institute for Economic Research and Xinrong Zhu from Imperial College Business School, examines the impact of generative AI technologies like ChatGPT and image-generating tools on online freelancing platforms. It focuses on how these AI innovations influence the demand for various types of freelance jobs.

Using data from a major global freelancing platform, the study analyzes job posts to observe trends following the introduction of AI tools. It finds a significant decrease in posts for tasks that AI can automate, such as writing and software development, and for image-related jobs after the introduction of AI image generators. The research draws on the Google Search Volume Index and an AI Occupational Exposure Index to support these findings, showing how AI technologies can substitute for certain freelance jobs.
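The before-and-after comparison at the heart of this kind of analysis can be sketched in a few lines. The function and the monthly counts below are illustrative toy values, not the authors' actual data or estimator:

```python
def relative_change(monthly_posts, release_month):
    """Fractional change in average monthly job posts after an AI tool's
    release vs. before (e.g. -0.2 means a 20% drop in demand)."""
    pre  = [n for month, n in monthly_posts if month <  release_month]
    post = [n for month, n in monthly_posts if month >= release_month]
    pre_mean  = sum(pre)  / len(pre)
    post_mean = sum(post) / len(post)
    return (post_mean - pre_mean) / pre_mean

# Hypothetical counts of writing-related posts, with a tool released in month 7
writing_posts = [(m, 100) for m in range(1, 7)] + [(m, 79) for m in range(7, 13)]
drop = relative_change(writing_posts, release_month=7)  # -0.21, a 21% decline
```

The published analysis, of course, adds controls and comparison categories on top of this raw contrast to separate the AI effect from platform-wide trends.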

This study highlights the transformative impact of generative AI on the online labor market. It reveals the varying effects on different job types, suggesting a shift in demand away from tasks where AI can serve as an effective alternative. This research underscores the importance of understanding and adapting to the evolving dynamics of AI in the freelance job market.


Using GPT for Market Research /using-gpt-for-market-research/ Fri, 02 Feb 2024 14:53:18 +0000

The post Using GPT for Market Research appeared first on Harvard Business School AI Institute.

Can market research be done effectively with a synthetic market?

This research from Ayelet Israeli, along with Microsoft colleagues James Brand and Donald Ngwe, presents a comprehensive study on the application of Large Language Models (LLMs) like GPT-3.5 in market research. It explores the extent to which these models can simulate consumer behavior and economic theories.

The study uses GPT-3.5 to examine consumer responses in various scenarios, testing its accuracy in reflecting real-world economic behaviors such as price sensitivity and product preferences. Using both direct and indirect methods, the research gathers data on consumer preferences and willingness to pay for different product attributes. The findings indicate that LLMs like GPT-3.5 can effectively mimic consumer decision-making processes, demonstrating potential as a valuable tool in market research.
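As a toy illustration of the indirect approach, one can sweep prices, repeatedly ask a "synthetic consumer" whether it would buy, and trace out a demand curve. In the sketch below, `simulated_choice` is a stub standing in for an actual GPT-3.5 query with a purchase-scenario prompt; the willingness-to-pay value and noise level are invented for the example:

```python
import random

def simulated_choice(price, wtp=4.0, noise=0.5, rng=None):
    """Stand-in for an LLM 'synthetic consumer': it buys whenever the
    quoted price falls below a noisy willingness-to-pay. In the real
    setup this would be a GPT-3.5 call describing the purchase scenario."""
    rng = rng or random
    return price < wtp + rng.gauss(0, noise)

def estimate_demand(prices, n=200, seed=0):
    """Ask the synthetic consumer n times at each price and record the
    purchase rate, tracing out a synthetic demand curve."""
    rng = random.Random(seed)
    return {p: sum(simulated_choice(p, rng=rng) for _ in range(n)) / n
            for p in prices}

# Purchase rates should fall as the price rises past the assumed $4 WTP
demand = estimate_demand([2.0, 4.0, 6.0])
```

The interesting empirical question the paper addresses is whether responses elicited this way from an actual LLM line up with the downward-sloping demand and realistic price sensitivity that economic theory, and field data, predict.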

The research provides promising insights into the use of LLMs in market research. It showcases the potential of these models to offer cost-effective, efficient alternatives for understanding consumer behavior and preferences. However, it also points out the need for further exploration and refinement in the application of LLMs to ensure accuracy and reliability in market research settings.

