{"id":1885,"date":"2026-05-06T19:05:33","date_gmt":"2026-05-06T19:05:33","guid":{"rendered":"https:\/\/gw.adampg777.com\/?p=1885"},"modified":"2026-05-06T19:05:33","modified_gmt":"2026-05-06T19:05:33","slug":"trump-administration-suddenly-embraces-ai-oversight-ideas-it-once-rejected","status":"publish","type":"post","link":"https:\/\/gw.adampg777.com\/?p=1885","title":{"rendered":"Trump administration suddenly embraces AI oversight ideas it once rejected"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/fortune.com\/img-assets\/wp-content\/uploads\/2026\/05\/AP26122809856269-e1777769316161.jpg?w=2048\" \/><\/p>\n<p>When it comes to AI, the Trump Administration has largely positioned itself as the opposite of the Biden White House\u2014criticizing what Trump\u2019s tech policy advisors saw as overly burdensome AI safety efforts and licensing regimes, and embracing an anti-regulation approach. Former Trump \u201cAI and crypto czar\u201d David Sacks best embodied this policy ethos.<\/p>\n<p>But the Trump Administration, according to multiple news reports, is now about to engage in a head-spinning policy pirouette. Driven by concerns about the national security implications of Anthropic\u2019s new \u201cMythos\u201d AI model, with its ability to identify and exploit cyber security vulnerabilities\u2014as well as broader fears around cyber capabilities and dangerous misuse\u2014the administration is now reportedly considering oversight for advanced AI models. 
The policies under discussion, according to news reports, include an executive order that would create a government-industry working group to examine how frontier AI systems should be evaluated before release.\u00a0<\/p>\n<div>\n<p>At the same time, the Center for AI Standards and Innovation (CAISI) \u2014 the Trump administration\u2019s renamed version of the Biden-era United States AI Safety Institute \u2014 announced partnerships with Google, Microsoft, and xAI to evaluate some AI models before deployment.<\/p>\n<p>According to an agency press release, CAISI\u2019s agreements with frontier AI developers \u201cenable government evaluation of AI models before they are publicly available, as well as post-deployment assessment and other research.\u201d The agency said it has completed more than 40 such evaluations, including on state-of-the-art models that remain unreleased.<\/p>\n<p>In an interview on Fox Business this morning, White House National Economic Council Director Kevin Hassett said the administration is studying a possible executive order that would create \u201ca clear road map\u201d for how advanced AI systems should be evaluated before release.<\/p>\n<p>\u201cWe\u2019re studying possibly an executive order to give a clear road map to everybody about how this is going to go and how future AIs that also could potentially create vulnerabilities should go through a process so that they\u2019re released to the wild after they\u2019ve been proven safe \u2014 just like an FDA drug,\u201d Hassett said. \u201cMythos is the first, but it\u2019s incumbent on us to build a system so U.S. AI can be the leader in AI and be safe at the same time. That\u2019s really pretty much what we\u2019re working on almost full-time right now.\u201d<\/p>\n<h2 class=\"wp-block-heading\"><strong>From criticizing oversight to championing it<\/strong><\/h2>\n<p>The current debate carries with it a strong sense of d\u00e9j\u00e0 vu. The original U.S. 
AI Safety Institute was created by Joe Biden through his November 2023 AI Executive Order, with the goal of helping the federal government evaluate and better understand frontier AI systems from companies like OpenAI, Anthropic, and Google. The order also invoked the Defense Production Act to require companies training the largest AI models to share certain safety testing results with the government.<\/p>\n<p>In other words, the administration that once criticized Biden\u2019s AI oversight efforts is now considering adopting broadly similar policies, even though the original U.S. AI Safety Institute was systematically rebranded and restructured (the word \u201csafety\u201d was notably removed) and its inaugural director, Elizabeth Kelly, stepped down shortly after Trump\u2019s inauguration in January 2025. (She subsequently joined Anthropic as head of \u201cbeneficial deployments,\u201d one of several hires of former Biden officials that may have contributed to the acrimonious relationship between Trump\u2019s tech policy team and Anthropic.) <\/p>\n<p>At the end of April, Chris Fall, who served as an Energy Department official in the first Trump administration, was tapped to lead the rebranded CAISI, with a Commerce Department spokesperson saying \u201cDr. Fall brings the scientific leadership needed to ensure America leads the world in evaluating frontier AI models and advancing the technical standards that protect our national and economic security.\u201d Fall replaced Collin Burns, a former member of Anthropic\u2019s technical staff, who was dismissed from his position after just days on the job, with unnamed Trump administration officials telling reporters that they had not been informed of Burns\u2019 appointment. 
<\/p>\n<p>Fall spent nearly four years as vice president for applied sciences at technology research nonprofit MITRE.\u00a0<\/p>\n<p>\u201cThis is a 180 for the Trump administration, that has very explicitly been anti-any sort of regulation and also has explicitly tried to block states from enacting any kind of regulation,\u201d said Rumman Chowdhury, CEO of Humane Intelligence and former U.S. Science Envoy for AI. <\/p>\n<h2 class=\"wp-block-heading\"><strong>A focus on national security risks<\/strong><\/h2>\n<p>Still, the renewed push for evaluations is being framed less around AI ethics concerns and worries about existential dangers, which were a strong focus of the Biden Administration, and more around immediate national security risks.\u00a0<\/p>\n<p>That backdrop includes the uproar over Anthropic\u2019s Mythos model and a broader shift in Washington toward viewing frontier AI systems through the lens of cyberwarfare, infrastructure security, and geopolitical competition. Anthropic itself was labeled a national security threat by the administration after refusing to grant the Pentagon unrestricted use of its technology\u2014a designation the company is now challenging in court. Trump recently struck a more conciliatory tone, telling CNBC that Anthropic was \u201cshaping up\u201d and that \u201cI think we will get along with them just fine.\u201d<\/p>\n<p>Chowdhury said the current White House efforts to offer \u201csensible oversight\u201d of frontier AI models may sound good, but the devil is in the details. \u201cIt depends on their interpretation of these words,\u201d she said. \u201cEvaluations are a policy tool, they are not actually data-driven. My concern is that this is another political tool that the administration wants to own and wield.\u201d <\/p>\n<p>But it remains unclear whether CAISI has the funding and authority needed to fulfill its mission.
In 2024, The Washington Post published an investigation into the National Institute of Standards and Technology (NIST), the agency that houses CAISI, finding that budget constraints had left the 123-year-old institution understaffed in key technology areas and left many facilities at its Gaithersburg, Maryland, and Boulder, Colorado, campuses below acceptable building standards.<\/p>\n<p>At the time, Chuck Schumer, now the Senate minority leader, had announced that an appropriations bill included up to $10 million for the establishment of the USAISI at NIST.\u00a0<\/p>\n<p>In January 2026, Congress approved funding increases for NIST\u2019s AI work, including $55 million for NIST AI research and measurement efforts and up to $10 million specifically to expand the agency, rebranded as CAISI. But one policy analysis this year, from the conservative think tank America First Policy Institute, said CAISI remains underfunded compared with peer institutes internationally and lacks \u201cappropriate funding.\u201d<\/p>\n<h2 class=\"wp-block-heading\"><strong>AI model vetting does not mean secure systems<\/strong><\/h2>\n<p>The challenge is compounded by the fact that much of the government\u2019s evaluation effort depends on cooperation from the same companies building the models.<\/p>\n<p>\u201cIn 2024, BIML identified 23 LLM security risks that are located inside the black box of the frontier models (and thus managed by the vendors themselves),\u201d Gary McGraw, CEO of the AI security nonprofit Berryville Institute of Machine Learning (BIML), said in an email to Fortune.
\u201cIn our view, any regulatory guidance should systematically address these risks by opening the black box to scrutiny.\u201d<\/p>\n<p>McGraw added that BIML is \u201cdeeply concerned that the foxes might be asked to guard the chicken house even though they already designed and constructed it in secret.\u201d<\/p>\n<p>In addition, while AI model vetting is useful, it should not be mistaken for AI system security, said Rob van der Veer, founder of the OWASP (Open Worldwide Application Security Project) AI Exchange and chief AI officer at global technology consultancy Software Improvement Group. <\/p>\n<p>\u201cAI model vetting can motivate model makers to invest more in resilience, and it can help expose obvious weaknesses,\u201d he said by email. \u201cBut AI models will remain fragile, no matter how much we test them\u2026so yes, test the models. Vet them. Improve them. But design the system as if the model can still fail. Because it can.\u201d <\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>When it comes to AI, the Trump Administration has largely positioned itself as the opposite of the Biden White House\u2014criticizing what Trump\u2019s tech policy advisors saw as overly burdensome AI&hellip;
<\/p>\n","protected":false},"author":1,"featured_media":1316,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[1161,536,3391,665,3393,17,3394,3395,3028,3392,260,406],"class_list":["post-1885","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-finance-news","tag-administration","tag-anthropic","tag-commerce-department","tag-donald-trump","tag-embraces","tag-ideas","tag-oversight","tag-rejected","tag-suddenly","tag-tech-regulation","tag-trump","tag-washington"],"_links":{"self":[{"href":"https:\/\/gw.adampg777.com\/index.php?rest_route=\/wp\/v2\/posts\/1885","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/gw.adampg777.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/gw.adampg777.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/gw.adampg777.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/gw.adampg777.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1885"}],"version-history":[{"count":0,"href":"https:\/\/gw.adampg777.com\/index.php?rest_route=\/wp\/v2\/posts\/1885\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/gw.adampg777.com\/index.php?rest_route=\/wp\/v2\/media\/1316"}],"wp:attachment":[{"href":"https:\/\/gw.adampg777.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1885"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/gw.adampg777.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1885"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/gw.adampg777.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1885"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}