Future Proof with AI
/future-proof-with-ai/
Tue, 14 Apr 2026 13:38:36 +0000

The Hidden Economics of Workplace AI
/future-proof-with-ai/the-hidden-economics-of-workplace-ai/
Tue, 14 Apr 2026 13:38:34 +0000

The post The Hidden Economics of Workplace AI appeared first on Future Proof with AI.

As AI learns directly from how people work, a new tension is emerging about expertise, power, and governance

In many workplaces, the newest addition to virtual meetings isn’t a colleague, but an AI assistant like Granola or Otter. Suddenly no one has to scramble for action items or wonder who said what. The tool fades into the background while work gets a little smoother. And somewhere downstream, the precise record of how capable people think through a problem, handle a difficult client, or navigate a complex negotiation becomes raw material for an AI model. The convenience is real, and the implications are enormous. The new working paper “,” co-written by D^3 Associate , confronts this dynamic head-on. What happens when workers realize that their work habits, insights, and creativity are training the systems that could replace them? Combining survey evidence, a randomized experiment, and formal economic theory, the authors show that when workers understand that the information they feed AI may strengthen the organization’s hand later, they may change how much they share.

Why This Matters

For business leaders, this research surfaces a friction that most AI adoption strategies don’t account for yet. The employees whose expertise you most need to encode could be precisely the ones most aware of what’s at stake when they share it. As AI tools become more capable and more visible in the workplace, worker awareness will only rise, and so could strategic withholding. This creates a clear managerial implication: organizations can improve AI adoption not just by deploying better tools, but by discussing employee career concerns directly and giving people more meaningful control over how their work data is used. Firms that treat data governance as part of talent strategy and innovation design, rather than a legal checkbox, may be better positioned to unlock mutual benefit: stronger AI performance, higher productivity, and gains that are shared more broadly by the people helping to build the organization’s future.


Link to the D^3 Insight Article

Sign up for our newsletter to stay up to date with D^3 news and research: /#join-our-community

Back to the Beginnings of AI at Work
/future-proof-with-ai/back-to-the-beginnings-of-ai-at-work/
Thu, 09 Apr 2026 11:31:38 +0000

The post Back to the Beginnings of AI at Work appeared first on Future Proof with AI.

What a Landmark AI Study Tells Us About When to Trust, and When Not to Trust, AI

In September 2023, a working paper out of Harvard Business School landed at an unusually consequential moment. Generative AI had been publicly available for less than a year, organizations were scrambling to understand its implications, and almost no rigorous field evidence existed on how it actually affected professional performance. “” offered exactly that. Now, in March 2026, that research has been formally published in the peer-reviewed journal . To mark this milestone, we’re revisiting the study and its findings. The questions it set out to answer (what AI actually does to knowledge worker performance, where it helps, where it hurts, and why) were foundational then. They remain foundational now.

Why This Matters

For executives and leaders, this paper remains foundational because it frames AI adoption as a problem of decisions, strategy, and execution. The lesson is not that AI is universally good or dangerously flawed; it’s that leaders have to understand where, in a workflow, AI strengthens performance and where it creates deceptive failures. This means training people to exercise judgment rather than outsource it, and recognizing that polished output is not the same as sound reasoning. How will you guide your team along the jagged frontier?


Link to the D^3 Insight Article

Sign up for our newsletter to stay up to date with D^3 news and research: /#join-our-community

Everyone Has AI. Which Firms are Going to Win?
/future-proof-with-ai/everyone-has-ai-which-firms-are-going-to-win/
Tue, 07 Apr 2026 14:36:24 +0000

The post Everyone Has AI. Which Firms are Going to Win? appeared first on Future Proof with AI.

New research shows that access to AI is not the same as knowing where to use it.

A firm is only as fast as the slowest step in its chain of work. In manufacturing, it might be one particular machine on the line. In software, one overloaded intake service. Many business leaders are accidentally recreating this scenario with artificial intelligence. They provision AI tools to employees and hear about localized productivity spikes, but the company’s overall performance barely moves. This tension lies at the heart of the new working paper “,” from co-authors at INSEAD and the Digital Data Design Institute at Harvard (D^3). By tracking hundreds of organizations, the researchers have uncovered friction points that hold firms back from realizing the true economic promise of generative AI.

Why This Matters

For business leaders and executives, this research shows that the organizations most likely to realize substantial AI-driven results are those that invest not just in technology, but in the wide-ranging process of exploring where it fits. That is a strategy and execution problem, and leaders will need to ask which parts of their organizations need redesign rather than optimization. If you don’t actively push the boundaries of how AI rewrites your firm, you risk using a map that never leads you to your destination.


Link to the D^3 insight article

Sign up for our newsletter to stay up to date with D^3 news and research: /#join-our-community

Claude Is Becoming the AI Coworker Business Leaders Have Been Waiting For
/future-proof-with-ai/memo-march-2026/
Wed, 01 Apr 2026 18:53:41 +0000

The post Claude Is Becoming the AI Coworker Business Leaders Have Been Waiting For appeared first on Future Proof with AI.

If you’re a business owner or senior leader, the most important shift this month is that AI is now capable of doing real work. In March 2026, Anthropic shipped a series of updates to its LLM, Claude, that deliver on the promise of AI as a powerful business tool. As a result, Claude acts less like a chatbot and more like a Chief of Staff. That’s why Claude deserves a closer look this month.

Claude Cowork

Claude Cowork, available on the Claude Desktop app for Mac and Windows, represents a fundamental rethinking of what an AI assistant can be. Think of Cowork as a competent junior colleague who can execute multi-step projects on your behalf.

Here is how it works: point Claude at a folder on your computer and describe an outcome: “Compile Q1 regional sales data from these spreadsheets into an executive summary.” Claude then analyzes your request, creates a plan, breaks complex work into subtasks, coordinates parallel workstreams, and delivers finished outputs directly to your file system. You can watch the process unfold in real time, steer when needed, or step away and come back to finished work.

For executives, especially those leading teams of 50 or fewer, the scheduled tasks feature, combined with integrations, is significant. By integrating Claude with other enterprise platforms (email/calendar, cloud storage, work management, team communication), you can have it continuously monitor and synthesize activity across all sources. You describe the task once, choose the frequency, and Claude can send along, for example, your daily morning briefing, the numbers for your end-of-week metrics meeting, or an updated monthly competitive intelligence report. No IT department support required.

Claude for Excel & PowerPoint

As of March 11, 2026, Claude operates directly inside Microsoft Excel and PowerPoint through add-ins available to all paid plan subscribers on Mac and Windows. What makes this release different from earlier copilot features is shared context: Claude maintains a continuous conversation across both applications simultaneously. That means a financial analyst can ask Claude to pull comparable company financials from an open workbook, build a trading comps table in Excel, and drop the valuation summary into a pitch deck, all in one session, without switching tabs or re-explaining the data at each step.

The new “Skills” feature is significant for organizations focused on operational consistency. Anthropic provides a preloaded, customizable set of common use cases, such as auditing Excel financial models for formula errors and balance-sheet integrity, or building competitive landscape decks in PowerPoint.

When your best analyst figures out the right way to run a variance analysis, saving it as a Skill makes that process instantly scalable. For leaders trying to replicate institutional best practices across growing teams, Skills turns tacit knowledge into organizational infrastructure.

Beyond Product Specs: Why Claude Matters

Microsoft has integrated Claude Cowork into its Copilot platform. On March 9, 2026, it launched “Copilot Cowork” at a new licensing tier. This signals that the world’s largest enterprise software company sees Anthropic’s agentic approach as a critical capability for the future of business.

For more technical teams, Claude Code’s new Agent Teams feature allows multiple AI agents to collaborate in parallel on complex projects. For example, one handles data analysis, another drafts the report, a third checks output quality.

Together, these developments point to a broader shift from single AI assistants to coordinated systems that can execute end-to-end workflows.

This Month’s Action Item for Business Leaders

The shift underway is not incremental. AI has moved from “ask me a question” to “tell me what you need done.” The competitive advantage over the coming months will not belong to those with the most sophisticated AI strategy document, but to those who begin delegating real work to these tools. The direction is clear: one person gets the support of a digital team.

My recommendation to get started is to pilot Claude on one or two internal workflows that repeat every week or month. Identify where time is being wasted. Good candidates include recurring KPI summaries, budget variances, or sales reporting.

Next Month at a Glance – Claude Dispatch

Claude Dispatch lets you assign tasks to Claude Cowork to complete on your behalf directly from your phone. Useful while you’re traveling or in meetings. Think of it this way: your laptop is the office, and Dispatch on your phone is the remote control that lets you send your AI coworker instructions from anywhere. More to come in the May newsletter.


Meet the Author

Mike Grandinetti is an Executive Fellow at Harvard Business School. He’s a serial tech entrepreneur, board member, AI & innovation consultant, VC EIR, and award-winning professor of the practice. A former Silicon Valley engineer and McKinsey consultant, Mike has held C-suite leadership roles across 8 tech startups, resulting in 2 NASDAQ IPOs and 7 strategic exits.

He’s led senior executive workshops for Berkeley, Brown, Carnegie Mellon, Columbia, Cornell, Harvard P&ED, NYU & Oxford. He’s been a senior advisor and organizing team member for the MIT CIO Symposium for a decade.


80 Apps in One Afternoon
/future-proof-with-ai/80-apps-in-one-afternoon/
Mon, 30 Mar 2026 12:30:16 +0000

The post 80 Apps in One Afternoon appeared first on Future Proof with AI.

AI Leadership Workshop – The Frontier Firm AI Initiative was designed around a simple conviction: the most important questions about AI in business can’t be answered in theory.

On Wednesday, March 11, 2026, senior leaders from across the Frontier Firm AI Initiative came together at Harvard Business School for Journey to the Frontier, an event that brought roughly 200 executives into direct conversation with research shaping the future of their organizations. The Frontier Firm AI Initiative, a collaboration between D^3 and Microsoft, brings together companies like Barclays, DuPont, EY, Mastercard, and Nestle around a shared commitment: don’t just adopt AI, but study the transformation rigorously and share what they learn. The final session of the day, led by Dr. , Mary V. and Mark A. Associate Professor of Business Administration at HBS and co-director of the Tech for All Lab at D^3, invited participants to do something that might seem surprising at an executive-level event: actually build something. By the end of the session, more than 80 working, no-code software applications had taken shape, each addressing a real challenge from participants’ daily lives.

Why This Matters

For the organizations in the Frontier Firm cohort, this was just one day in a longer journey. For the broader business world, it was a reminder that the window to lead on AI will not stay open forever. AI is no longer something companies adopt from the outside. It is becoming the foundation on which strategy, operations, and decision-making are built. The leaders who understand this will not just have better tools; they will have built something that is genuinely hard to replicate. The competitive advantage of the next decade will not belong to organizations with the best AI strategy on paper. It will belong to the ones with leaders who know how to build.


Link to the D^3 Insight Article

Sign up for our newsletter to stay up to date with D^3 news and research: /#join-our-community

The Surprising Link Between AI Reasoning and Honesty
/future-proof-with-ai/the-surprising-link-between-ai-reasoning-and-honesty/
Mon, 23 Mar 2026 12:57:15 +0000

The post The Surprising Link Between AI Reasoning and Honesty appeared first on Future Proof with AI.

Exploring how the complexity of large language models acts as a moral safeguard

The standard fear about advanced AI goes something like this: the more sophisticated a system becomes, the better it gets at sounding convincing, reading the room, and manipulating people. A model that can reason step-by-step might not just answer better, it might lie better. That concern feels intuitive, especially as businesses hand more customer interactions, internal workflows, and decision support to increasingly capable systems. However, in the new study “,” co-written by D^3 Associate Martin Wattenberg, a team of researchers found that our intuition might be backward. Through an exhaustive series of tests involving moral trade-offs and complex reasoning traces, they found that when an AI is forced to slow down and show its work, it becomes significantly more honest.

Why This Matters

For business leaders, the value of this paper is not that AI can now be assumed trustworthy. Rather, it offers a more useful way to think about risk. If deceptive outputs are less stable, then system design can exploit that fact. Building deliberation into AI workflows may become an important step before interfacing with customers or making high-stakes decisions. Organizations need systems that hold up when incentives get messy, and this paper suggests that at least in some cases, more reasoning may keep AI honest when it counts.

References

[1] Ann Yuan et al., “Think Before You Lie: How Reasoning Leads to Honesty,” arXiv preprint arXiv:2603.09957 (2026): 3.


Link to the D^3 Insight Article

Sign up for our newsletter to stay up to date with D^3 news and research: /#join-our-community

GenAI + Productivity
/future-proof-with-ai/genai-productivity/
Thu, 19 Mar 2026 14:39:26 +0000

The post GenAI + Productivity appeared first on Future Proof with AI.


Knowledge workers have measurable performance gains when deploying Generative AI

Based on the research of Fabrizio Dell’Acqua, Edward McFowland III, Karim R. Lakhani

One longstanding quandary in economics has been the productivity paradox: in spite of rapid advances in information technology dating back to the 1970s, there has been no corresponding growth in productivity.

So, from the onset of this Generative AI technological trigger, our Institute researchers sought to find out whether Generative AI truly impacts worker productivity.

Researchers from our Laboratory for Innovation Science (LISH) teamed up with Boston Consulting Group (BCG) to understand the impact of Generative AI on the work of 758 management consultants. They measured tasks spanning a consultant’s daily work, including creativity, analytical thinking, writing proficiency, and persuasiveness.

Through this study, they found that using an LLM significantly increased a consultant’s performance, boosting speed by over 25%, human-rated performance by over 40%, and task completion by over 12%.


Link to the D^3 Insight Article

Sign up for our newsletter to stay up to date with D^3 news and research: /#join-our-community

Why Your AI Strategy May Be Failing
/future-proof-with-ai/why-your-ai-strategy-may-be-failing/
Thu, 19 Mar 2026 14:33:38 +0000

The post Why Your AI Strategy May Be Failing appeared first on Future Proof with AI.

Improving AI strategy execution and ROI – How companies can overcome the structural frictions that block AI at scale

AI has entered the enterprise faster than most previous waves of technology, reshaping expectations about speed, productivity, and decision-making. Yet adoption alone does not produce transformation. The Frontier Firm Initiative (FFI), a joint effort between the Digital Data Design Institute at Harvard (D^3) and Microsoft, recently convened senior leaders from a dozen global organizations to address the “last mile” challenge: scaling localized, successful AI pilot programs into a standard, enterprise-wide operating model. In the new HBR article “,” Karim R. Lakhani and Jen Stave of D^3 and Microsoft’s Jared Spataro identify the specific “frictions” stalling progress and outline a strategic blueprint to overcome them. In the insight below, we will zoom in on one friction and one corresponding recommendation from the blueprint to resolve it.

Why This Matters

For today’s business professionals and executives, the “last mile” is less a technical challenge and more a test of leadership imagination. Process debt and clean-sheet redesign are only two parts of a broader diagnosis of seven frictions and corresponding transformation strategies. Read the  to see them all. The potential of the technology you have already purchased is immense, but realizing it requires the courage to redesign the organization to match the speed of an agentic world.

References

[1] Lakhani, Karim R., Jared Spataro, and Jen Stave, “The ‘Last Mile’ Problem Slowing AI Transformation,” Harvard Business Review, March 9, 2026.

[2] Lakhani et al., “The ‘Last Mile’ Problem Slowing AI Transformation.” 



Sign up for our newsletter to stay up to date with D^3 news and research: /#join-our-community

Can You Spot the Bot?
/future-proof-with-ai/can-you-spot-the-bot/
Thu, 19 Mar 2026 14:29:11 +0000

The post Can You Spot the Bot? appeared first on Future Proof with AI.

The importance of AI literacy in 2026 – new research reveals just how convincingly AI mimics humans

Alan Turing’s original “imitation game,” proposed in 1950, had an elegant simplicity: a human judge conducts a text-based conversation with two hidden parties—one human, one machine—and tries to guess which is which. Today, the question Turing posed has quietly expanded into territory he never mapped. Our digital existence is a kaleidoscope of multi-modal interactions. We don’t just “talk” to the internet; we upload snapshots of our morning coffee, interpret complex visual data in professional dashboards, estimate the mood of a room through a video call, and follow subtle cues of visual attention. A new article co-written by Hanspeter Pfister, D^3 Associate and An Wang Professor of Computer Science at Harvard SEAS, explains how a large-scale study from researchers at 15 organizations around the globe drags the imitation game into the full complexity of how humans communicate, perceive, and describe the world. Are we already past the point where we can reliably tell machines from humans, and does it matter who’s doing the judging?

Why This Matters

For executives and business leaders, this research redraws the risk landscape in two directions. First, the near invisibility of AI responses in everyday tasks means fraud, disinformation, and impersonation are no longer theoretical risks; they are statistically plausible at scale, today. Second, because automated classifiers outperform human judges, detection cannot rely on human vigilance alone anymore. It requires infrastructure, and regulators in the EU and elsewhere are already moving toward mandatory AI disclosure requirements. This paper highlights the importance of building transparency tools now to be prepared for when they are required and to ensure you can maintain your customers’ trust.

Bonus

As AI systems get more capable, they’re also getting harder to understand. Another response to this challenge is to build clearer explanations for why models behave the way they do with a single, coherent framework. To go deeper on this initiative, check out “Unifying AI Attribution: A New Frontier in Understanding Complex Systems.”

References

[1] Mengmi Zhang et al., “Can Machines Imitate Humans? Integrative Turing-like tests for Language and Vision Demonstrate a Narrowing Gap,” arXiv preprint arXiv:2211.13087v3 (2025): 3.  

[2] Zhang et al., “Can Machines Imitate Humans?”: 2.

[3] Zhang et al., “Can Machines Imitate Humans?”: 16.


Link to the D^3 insight article

Sign up for our newsletter to stay up to date with D^3 news and research: /#join-our-community

The Power of AI Stopping Agents
/future-proof-with-ai/the-power-of-ai-stopping-agents/
Thu, 19 Mar 2026 14:04:55 +0000

The post The Power of AI Stopping Agents appeared first on Future Proof with AI.

How AI stopping agents create autonomous guardrails – When machine learning meets sales psychology

Conventional sales wisdom treats persistence as virtue: stay in the conversation, overcome objections, keep the line alive. But recent research into the dynamics of sales conversations suggests that our bias toward persistence leads to a massive misallocation of resources. In “,” a team including , Professor of Business Administration at HBS and co-founder of the Customer Intelligence Lab at D^3, explains how they built a generative AI “stopping agent” that watches sales transcripts in real time and chooses to quit or wait to maximize cumulative payoff. The result? The ability to lift expected sales by over 30%.
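The quit-or-wait choice described above is a classic sequential stopping problem. As an illustration only (not the authors' actual model, which scores live transcripts with a generative AI), here is a minimal sketch of one such stopping rule; the function names, dollar figures, and per-turn conversion probabilities are all hypothetical:

```python
# Illustrative stopping rule: after each conversational turn, compare the
# expected payoff of one more turn against the value of walking away to
# the next opportunity. All inputs here are made-up numbers; in the paper,
# the equivalent signal comes from an AI scoring the live transcript.

def should_quit(p_convert: float, deal_value: float,
                cost_per_turn: float, outside_value: float) -> bool:
    """Quit when the expected value of continuing falls below the
    value of reallocating effort to a new lead."""
    expected_continue = p_convert * deal_value - cost_per_turn
    return expected_continue < outside_value

def run_conversation(turn_probs, deal_value=1000.0,
                     cost_per_turn=20.0, outside_value=50.0):
    """Walk through a conversation turn by turn; return the index of the
    turn at which the agent quits, or None if it stays to the end."""
    for turn, p in enumerate(turn_probs):
        if should_quit(p, deal_value, cost_per_turn, outside_value):
            return turn
    return None

# A call that starts promising, then stalls:
fading = [0.40, 0.25, 0.12, 0.05, 0.04]
print(run_conversation(fading))  # → 3 (0.05 * 1000 - 20 = 30 < 50, so quit)
```

The point of the sketch is the asymmetry the research exploits: a persistent human keeps paying the per-turn cost long after the expected payoff has dropped below the outside option, while a rule like this quits the moment the math stops working.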

Why This Matters

For today’s business leaders, the takeaway is clear: efficiency is not just about doing things faster; it’s about choosing not to do the things that don’t work. In an era where AI is increasingly viewed through the lens of total automation, this research also offers a more sophisticated model. It demonstrates that the most effective use of generative AI isn’t to replace the human salesperson, but to provide them with “decision support” that corrects for natural psychological biases. This methodology scales beyond sales to any domain with sequential decisions and observable outcomes. The question for leaders isn’t whether their teams face similar cognitive constraints (they almost certainly do); it’s whether they’re ready to systematically identify and correct them.

Bonus

For another use case where AI doesn’t replace humans, but offers the opportunity to improve judgment, break silos, and accelerate execution, check out “The Cybernetic Teammate: How AI is Reshaping Collaboration and Expertise in the Workplace.”

References

[1] Manzoor, Emaad, Eva Ascarza, and Oded Netzer, “Learning When to Quit in Sales Conversations,” arXiv preprint arXiv:2511.01181 (2025): 23.

[2] Manzoor et al., “Learning When to Quit in Sales Conversations,” 1.

[3] Manzoor et al., “Learning When to Quit in Sales Conversations,” 2.

[4] Manzoor et al., “Learning When to Quit in Sales Conversations,” 21.

[5] Manzoor et al., “Learning When to Quit in Sales Conversations,” 2.


Link to the D^3 insight article

Sign up for our newsletter to stay up to date with D^3 news and research: /#join-our-community
