  {"id":1153,"date":"2026-03-23T12:57:15","date_gmt":"2026-03-23T12:57:15","guid":{"rendered":"https:\/\/d3.harvard.edu\/future-proof-with-ai\/?p=1153"},"modified":"2026-03-23T20:32:26","modified_gmt":"2026-03-23T20:32:26","slug":"the-surprising-link-between-ai-reasoning-and-honesty","status":"publish","type":"post","link":"https:\/\/d3.harvard.edu\/future-proof-with-ai\/the-surprising-link-between-ai-reasoning-and-honesty\/","title":{"rendered":"The Surprising Link Between AI Reasoning and Honesty"},"content":{"rendered":"\n<h3 class=\"wp-block-heading\"><em>Exploring how the complexity of large language models acts as a moral safeguard<\/em><\/h3>\n\n\n\t\t<div class=\"embed-wrapper\">\n\t\t\t<figure class=\"wp-block-embed wp-embed-aspect-16-9 wp-has-aspect-ratio\"> \n\t\t\t\t<div\n\t\t\t\t\tclass=\"rkv-video-placeholder \"\n\t\t\t\t\tstyle=\"background-image:url(https:\/\/i.vimeocdn.com\/video\/2136478381-d5f3d418990dbf9b3434e8a62fef7e32ab29642f4dff4420ae6cb6b4c3dcaf77-d_295x166?region=us);aspect-ratio:16 \/ 9\"\n\t\t\t\t\tdata-provider=\"vimeo\"\n\t\t\t\t\tdata-video-id=\"1175598697\"\n\t\t\t\t><\/div>\n\t\t\t<\/figure>\n\t\n\t\t\t\t\t<\/div>\n\t\t\n\n\n<p>The standard fear about advanced AI goes something like this: the more sophisticated a system becomes, the better it gets at sounding convincing, reading the room, and manipulating people. A model that can reason step-by-step might not just answer better, it might lie better. That concern feels intuitive, especially as businesses hand more customer interactions, internal workflows, and decision support to increasingly capable systems. However, in the new study \u201c<a href=\"https:\/\/arxiv.org\/abs\/2603.09957v2\" target=\"_blank\" rel=\"noreferrer noopener\">Think Before You Lie: How Reasoning Leads to Honesty<\/a>,\u201d co-written by D^3 Associate Martin Wattenberg, a team of researchers found that our intuition might be backward. 
Through an exhaustive series of tests involving moral trade-offs and complex reasoning traces, they found that when an AI is forced to slow down and show its work, it becomes significantly more honest.<\/p>\n\n\n\n<p>For business leaders, the value of this paper is not that AI can now be assumed trustworthy. Rather, it offers a more useful way to think about risk. If deceptive outputs are less stable, then system design can exploit that fact. Building deliberation into AI workflows may become an important step before interfacing with customers or making high-stakes decisions. Organizations need systems that hold up when incentives get messy, and this paper suggests that at least in some cases, more reasoning may keep AI honest when it counts.<\/p>\n\n\n\n<p>Ann Yuan et al., \u201cThink Before You Lie: How Reasoning Leads to Honesty,\u201d <em>arXiv preprint arXiv:2603.09957<\/em> (2026): 3. <a href=\"https:\/\/doi.org\/10.48550\/arXiv.2603.09957\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/doi.org\/10.48550\/arXiv.2603.09957<\/a>.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<p><a href=\"https:\/\/d3.harvard.edu\/the-surprising-link-between-ai-reasoning-and-honesty\" target=\"_blank\" rel=\"noreferrer noopener\">Link to the D^3 Insight Article<\/a><br><a href=\"https:\/\/arxiv.org\/abs\/2603.09957v2\" target=\"_blank\" rel=\"noreferrer noopener\">Link to the research paper<\/a><br>Sign up for our newsletter to stay up to date with D^3 news and research: <a href=\"https:\/\/d3.harvard.edu\/#join-our-community\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/d3.harvard.edu\/#join-our-community<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Exploring how the complexity of large language models acts as a moral safeguard The standard fear about advanced AI goes something like this: the more sophisticated a system becomes, the better it gets at sounding 
convincing, reading the room, and manipulating people. A model that can reason step-by-step might not just answer better, it might [&hellip;]<\/p>\n","protected":false},"author":19452,"featured_media":1156,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_trash_the_other_posts":false,"rkv_hide_featured_image":true,"rkv_hide_page_title":false,"rkv_hide_abstract_shapes":true,"rkv_reduce_header":false,"hbsd3_featured_media_url":"https:\/\/vimeo.com\/1175598697?share=copy&fl=sv&fe=ci","hbsd3_featured_media_url_autoplay":false,"editor_notices":[],"footnotes":""},"categories":[1],"tags":[],"tax_cop":[],"rkv-people-shadow":[],"insight_type":[],"class_list":["post-1153","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized","rkv-featured-image-is-hidden"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.2 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>The Surprising Link Between AI Reasoning and Honesty - Future Proof with AI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/d3.harvard.edu\/the-surprising-link-between-ai-reasoning-and-honesty\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"The Surprising Link Between AI Reasoning and Honesty - Future Proof with AI\" \/>\n<meta property=\"og:description\" content=\"Exploring how the complexity of large language models acts as a moral safeguard The standard fear about advanced AI goes something like this: the more sophisticated a system becomes, the better it gets at sounding convincing, reading the room, and manipulating people. 
A model that can reason step-by-step might not just answer better, it might [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/d3.harvard.edu\/the-surprising-link-between-ai-reasoning-and-honesty\/\" \/>\n<meta property=\"og:site_name\" content=\"Future Proof with AI\" \/>\n<meta property=\"article:published_time\" content=\"2026-03-23T12:57:15+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2026-03-23T20:32:26+00:00\" \/>\n<meta property=\"og:image\" content=\"http:\/\/d3.harvard.edu\/future-proof-with-ai\/wp-content\/uploads\/sites\/42\/2026\/03\/The-Surprising-Link-Between-AI-Reasoning-and-Honesty-play-cover-scaled.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"2560\" \/>\n\t<meta property=\"og:image:height\" content=\"1440\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"D^3 Content &amp; Learning\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"D^3 Content &amp; Learning\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"2 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/d3.harvard.edu\/the-surprising-link-between-ai-reasoning-and-honesty\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/d3.harvard.edu\/future-proof-with-ai\/the-surprising-link-between-ai-reasoning-and-honesty\/\"},\"author\":{\"name\":\"trevormasse\",\"@id\":\"https:\/\/d3.harvard.edu\/future-proof-with-ai\/#\/schema\/person\/1185726ac704474179c007e053462f9d\"},\"headline\":\"The Surprising Link Between AI Reasoning and Honesty\",\"datePublished\":\"2026-03-23T12:57:15+00:00\",\"dateModified\":\"2026-03-23T20:32:26+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/d3.harvard.edu\/future-proof-with-ai\/the-surprising-link-between-ai-reasoning-and-honesty\/\"},\"wordCount\":299,\"commentCount\":0,\"image\":{\"@id\":\"https:\/\/d3.harvard.edu\/the-surprising-link-between-ai-reasoning-and-honesty\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/d3.harvard.edu\/future-proof-with-ai\/wp-content\/uploads\/sites\/42\/2026\/03\/The-Surprising-Link-Between-AI-Reasoning-and-Honesty-play-cover-scaled.jpg\",\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/d3.harvard.edu\/the-surprising-link-between-ai-reasoning-and-honesty\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/d3.harvard.edu\/future-proof-with-ai\/the-surprising-link-between-ai-reasoning-and-honesty\/\",\"url\":\"https:\/\/d3.harvard.edu\/the-surprising-link-between-ai-reasoning-and-honesty\/\",\"name\":\"The Surprising Link Between AI Reasoning and Honesty - Future Proof with 
AI\",\"isPartOf\":{\"@id\":\"https:\/\/d3.harvard.edu\/future-proof-with-ai\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/d3.harvard.edu\/the-surprising-link-between-ai-reasoning-and-honesty\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/d3.harvard.edu\/the-surprising-link-between-ai-reasoning-and-honesty\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/d3.harvard.edu\/future-proof-with-ai\/wp-content\/uploads\/sites\/42\/2026\/03\/The-Surprising-Link-Between-AI-Reasoning-and-Honesty-play-cover-scaled.jpg\",\"datePublished\":\"2026-03-23T12:57:15+00:00\",\"dateModified\":\"2026-03-23T20:32:26+00:00\",\"author\":{\"@id\":\"https:\/\/d3.harvard.edu\/future-proof-with-ai\/#\/schema\/person\/1185726ac704474179c007e053462f9d\"},\"breadcrumb\":{\"@id\":\"https:\/\/d3.harvard.edu\/the-surprising-link-between-ai-reasoning-and-honesty\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/d3.harvard.edu\/the-surprising-link-between-ai-reasoning-and-honesty\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/d3.harvard.edu\/the-surprising-link-between-ai-reasoning-and-honesty\/#primaryimage\",\"url\":\"https:\/\/d3.harvard.edu\/future-proof-with-ai\/wp-content\/uploads\/sites\/42\/2026\/03\/The-Surprising-Link-Between-AI-Reasoning-and-Honesty-play-cover-scaled.jpg\",\"contentUrl\":\"https:\/\/d3.harvard.edu\/future-proof-with-ai\/wp-content\/uploads\/sites\/42\/2026\/03\/The-Surprising-Link-Between-AI-Reasoning-and-Honesty-play-cover-scaled.jpg\",\"width\":2560,\"height\":1440,\"caption\":\"True future-proofing requires AI that balances transparent truth ('1') against other 
outputs.\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/d3.harvard.edu\/the-surprising-link-between-ai-reasoning-and-honesty\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/d3.harvard.edu\/future-proof-with-ai\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"The Surprising Link Between AI Reasoning and Honesty\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/d3.harvard.edu\/future-proof-with-ai\/#website\",\"url\":\"https:\/\/d3.harvard.edu\/future-proof-with-ai\/\",\"name\":\"Future Proof with AI\",\"description\":\"\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/d3.harvard.edu\/future-proof-with-ai\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/d3.harvard.edu\/future-proof-with-ai\/#\/schema\/person\/1185726ac704474179c007e053462f9d\",\"name\":\"trevormasse\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/secure.gravatar.com\/avatar\/1fc8bf8d51ffb84ce05cbf148543f340094d44cd0e56022daac5fbf115b8dc5a?s=96&d=mm&r=g\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/1fc8bf8d51ffb84ce05cbf148543f340094d44cd0e56022daac5fbf115b8dc5a?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/1fc8bf8d51ffb84ce05cbf148543f340094d44cd0e56022daac5fbf115b8dc5a?s=96&d=mm&r=g\",\"caption\":\"trevormasse\"},\"url\":\"https:\/\/d3.harvard.edu\/future-proof-with-ai\/author\/d3-content-learning\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"The Surprising Link Between AI Reasoning and Honesty - Future Proof with AI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/d3.harvard.edu\/the-surprising-link-between-ai-reasoning-and-honesty\/","og_locale":"en_US","og_type":"article","og_title":"The Surprising Link Between AI Reasoning and Honesty - Future Proof with AI","og_description":"Exploring how the complexity of large language models acts as a moral safeguard The standard fear about advanced AI goes something like this: the more sophisticated a system becomes, the better it gets at sounding convincing, reading the room, and manipulating people. A model that can reason step-by-step might not just answer better, it might [&hellip;]","og_url":"https:\/\/d3.harvard.edu\/the-surprising-link-between-ai-reasoning-and-honesty\/","og_site_name":"Future Proof with AI","article_published_time":"2026-03-23T12:57:15+00:00","article_modified_time":"2026-03-23T20:32:26+00:00","og_image":[{"width":2560,"height":1440,"url":"http:\/\/d3.harvard.edu\/future-proof-with-ai\/wp-content\/uploads\/sites\/42\/2026\/03\/The-Surprising-Link-Between-AI-Reasoning-and-Honesty-play-cover-scaled.jpg","type":"image\/jpeg"}],"author":"D^3 Content & Learning","twitter_card":"summary_large_image","twitter_misc":{"Written by":"D^3 Content & Learning","Est. 
reading time":"2 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/d3.harvard.edu\/the-surprising-link-between-ai-reasoning-and-honesty\/#article","isPartOf":{"@id":"https:\/\/d3.harvard.edu\/future-proof-with-ai\/the-surprising-link-between-ai-reasoning-and-honesty\/"},"author":{"name":"trevormasse","@id":"https:\/\/d3.harvard.edu\/future-proof-with-ai\/#\/schema\/person\/1185726ac704474179c007e053462f9d"},"headline":"The Surprising Link Between AI Reasoning and Honesty","datePublished":"2026-03-23T12:57:15+00:00","dateModified":"2026-03-23T20:32:26+00:00","mainEntityOfPage":{"@id":"https:\/\/d3.harvard.edu\/future-proof-with-ai\/the-surprising-link-between-ai-reasoning-and-honesty\/"},"wordCount":299,"commentCount":0,"image":{"@id":"https:\/\/d3.harvard.edu\/the-surprising-link-between-ai-reasoning-and-honesty\/#primaryimage"},"thumbnailUrl":"https:\/\/d3.harvard.edu\/future-proof-with-ai\/wp-content\/uploads\/sites\/42\/2026\/03\/The-Surprising-Link-Between-AI-Reasoning-and-Honesty-play-cover-scaled.jpg","inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/d3.harvard.edu\/the-surprising-link-between-ai-reasoning-and-honesty\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/d3.harvard.edu\/future-proof-with-ai\/the-surprising-link-between-ai-reasoning-and-honesty\/","url":"https:\/\/d3.harvard.edu\/the-surprising-link-between-ai-reasoning-and-honesty\/","name":"The Surprising Link Between AI Reasoning and Honesty - Future Proof with 
AI","isPartOf":{"@id":"https:\/\/d3.harvard.edu\/future-proof-with-ai\/#website"},"primaryImageOfPage":{"@id":"https:\/\/d3.harvard.edu\/the-surprising-link-between-ai-reasoning-and-honesty\/#primaryimage"},"image":{"@id":"https:\/\/d3.harvard.edu\/the-surprising-link-between-ai-reasoning-and-honesty\/#primaryimage"},"thumbnailUrl":"https:\/\/d3.harvard.edu\/future-proof-with-ai\/wp-content\/uploads\/sites\/42\/2026\/03\/The-Surprising-Link-Between-AI-Reasoning-and-Honesty-play-cover-scaled.jpg","datePublished":"2026-03-23T12:57:15+00:00","dateModified":"2026-03-23T20:32:26+00:00","author":{"@id":"https:\/\/d3.harvard.edu\/future-proof-with-ai\/#\/schema\/person\/1185726ac704474179c007e053462f9d"},"breadcrumb":{"@id":"https:\/\/d3.harvard.edu\/the-surprising-link-between-ai-reasoning-and-honesty\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/d3.harvard.edu\/the-surprising-link-between-ai-reasoning-and-honesty\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/d3.harvard.edu\/the-surprising-link-between-ai-reasoning-and-honesty\/#primaryimage","url":"https:\/\/d3.harvard.edu\/future-proof-with-ai\/wp-content\/uploads\/sites\/42\/2026\/03\/The-Surprising-Link-Between-AI-Reasoning-and-Honesty-play-cover-scaled.jpg","contentUrl":"https:\/\/d3.harvard.edu\/future-proof-with-ai\/wp-content\/uploads\/sites\/42\/2026\/03\/The-Surprising-Link-Between-AI-Reasoning-and-Honesty-play-cover-scaled.jpg","width":2560,"height":1440,"caption":"True future-proofing requires AI that balances transparent truth ('1') against other outputs."},{"@type":"BreadcrumbList","@id":"https:\/\/d3.harvard.edu\/the-surprising-link-between-ai-reasoning-and-honesty\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/d3.harvard.edu\/future-proof-with-ai\/"},{"@type":"ListItem","position":2,"name":"The Surprising Link Between AI Reasoning and 
Honesty"}]},{"@type":"WebSite","@id":"https:\/\/d3.harvard.edu\/future-proof-with-ai\/#website","url":"https:\/\/d3.harvard.edu\/future-proof-with-ai\/","name":"Future Proof with AI","description":"","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/d3.harvard.edu\/future-proof-with-ai\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/d3.harvard.edu\/future-proof-with-ai\/#\/schema\/person\/1185726ac704474179c007e053462f9d","name":"trevormasse","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/1fc8bf8d51ffb84ce05cbf148543f340094d44cd0e56022daac5fbf115b8dc5a?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/1fc8bf8d51ffb84ce05cbf148543f340094d44cd0e56022daac5fbf115b8dc5a?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/1fc8bf8d51ffb84ce05cbf148543f340094d44cd0e56022daac5fbf115b8dc5a?s=96&d=mm&r=g","caption":"trevormasse"},"url":"https:\/\/d3.harvard.edu\/future-proof-with-ai\/author\/d3-content-learning\/"}]}},"_links":{"self":[{"href":"https:\/\/d3.harvard.edu\/future-proof-with-ai\/wp-json\/wp\/v2\/posts\/1153","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/d3.harvard.edu\/future-proof-with-ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/d3.harvard.edu\/future-proof-with-ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/d3.harvard.edu\/future-proof-with-ai\/wp-json\/wp\/v2\/users\/19452"}],"replies":[{"embeddable":true,"href":"https:\/\/d3.harvard.edu\/future-proof-with-ai\/wp-json\/wp\/v2\/comments?post=1153"}],"version-history":[{"count":3,"href":"https:\/\/d3.harvard.edu\/future-proof-with-ai\/wp-json\/wp\/v2\/posts\/1153\/revisions"}],"predecessor-version":[{"id":1157,"href":"https:\/\/d3.harvard.edu\/future-proof-with-ai\/wp-json\/wp\/v
2\/posts\/1153\/revisions\/1157"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/d3.harvard.edu\/future-proof-with-ai\/wp-json\/wp\/v2\/media\/1156"}],"wp:attachment":[{"href":"https:\/\/d3.harvard.edu\/future-proof-with-ai\/wp-json\/wp\/v2\/media?parent=1153"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/d3.harvard.edu\/future-proof-with-ai\/wp-json\/wp\/v2\/categories?post=1153"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/d3.harvard.edu\/future-proof-with-ai\/wp-json\/wp\/v2\/tags?post=1153"},{"taxonomy":"tax_cop","embeddable":true,"href":"https:\/\/d3.harvard.edu\/future-proof-with-ai\/wp-json\/wp\/v2\/tax_cop?post=1153"},{"taxonomy":"rkv-people-shadow","embeddable":true,"href":"https:\/\/d3.harvard.edu\/future-proof-with-ai\/wp-json\/wp\/v2\/rkv-people-shadow?post=1153"},{"taxonomy":"insight_type","embeddable":true,"href":"https:\/\/d3.harvard.edu\/future-proof-with-ai\/wp-json\/wp\/v2\/insight_type?post=1153"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}