ThoughtLinks’ Response to the Request for Information on the Development of an Artificial Intelligence (AI) Action Plan
ThoughtLinks provides boutique advisory services on AI adoption to a breadth of companies, from top U.S. corporations to start-ups. Established two years ago, ThoughtLinks publicly announced its exclusive focus on AI strategy in 2024.¹ We appreciate the opportunity to provide comments to the Office of Science and Technology Policy (OSTP) and the National Science Foundation (NSF) in response to their Request for Information (RFI) on the Development of an Artificial Intelligence (AI) Action Plan.
At ThoughtLinks, our work focuses on AI-driven business model reinvention, business process transformation, global talent impacts, and risk mitigation strategies that enable the successful adoption of AI with the right governance and controls—primarily in the financial services industry, but also within technology and healthcare sectors.
Our comments in this response reflect broad public interest considerations rather than specific industry advocacy.
Given our line of work and focus on AI, we recognize the significant challenges U.S. businesses face in scaling AI and are encouraged by Executive Order 14179, which seeks recommendations to remove barriers to American leadership in AI and sustain America's global dominance in order to promote human flourishing, economic competitiveness, and national security.
AI has the potential to reshape the world. It can help solve some of our greatest challenges—eliminating disease, improving longevity, reducing the impact of climate-related disasters, and transforming education. Every American could have a real-time AI-driven financial assistant, guiding their financial wellness and promoting financial inclusion. AI has the potential to empower every American to start and grow a business if they so desire. From our own experience, AI enhances productivity and reduces operational burden, allowing small businesses to compete at a higher level and giving more people the confidence to start a new business.
¹ 'ThoughtLinks to Focus Services Exclusively on Enterprise AI Strategy and Its Responsible Adoption,' PR Newswire, February 14, 2024.
AI could add trillions of dollars to economic growth and provide everyone with a personal assistant—not just to deliver knowledge and insights but to handle busy work, augmenting our own intelligence. Unlike past breakthroughs, there is no industry untouched by AI’s potential for disruption.
At the same time, the risks of getting it wrong are as significant as the rewards of getting it right. We are still in the early stages, and no one knows precisely where AI will end up. Therefore, it is critical that the U.S. fosters an environment that brings more—not fewer—people into AI discussion and development. Moving with speed is essential to harness AI's full potential while minimizing risks. Being overly cautious could leave the U.S. in a weaker position.
That said, we believe the U.S. can strengthen its policies and infrastructure to move ahead at speed with AI. While there have been tremendous advances in models and capabilities, true success depends on broad and safe business adoption across both large enterprises and small businesses. To achieve this, we must accelerate AI adoption while removing barriers that create unnecessary costs and friction. We must also educate the broader public.
The case for and against AI has been dissected many times, so we will move directly to our five recommendations:
1) Prioritize National AI Investments by Impact and Outcomes
Clearly define the national interest areas where AI can drive bold innovation and transformative change—whether in job growth (workforce development), education, elimination of diseases (cancer, etc.), mental wellness, clean air and water, or bolstering national security. For each such priority, establish specific national-level objectives (some moonshots) and measurable goals and targets, using them as the foundation for AI public investments and public-private partnerships that demonstrably advance these priorities. Ensure continued funding is contingent on achieving measurable milestones and tangible outcomes. Set the risk appetite and safety standards for all sanctioned initiatives. Include a broader range of stakeholders in these discussions and committees.
2) Change Regulatory Mandates and Adopt a Consequence-Based Approach
Consider expanding regulatory mandates beyond safety and soundness to explicitly include AI-driven innovation. Bolder yet, modernize regulatory charters to reflect AI’s game-changing potential. The current regulatory uncertainty surrounding AI adoption can slow down innovation, even in areas where AI could enhance safety and compliance. Thoughtful AI governance, supervision, and oversight should balance responsible innovation at pace with risk mitigation. Ensuring regulatory frameworks are clear, consistent, and adaptable will help the U.S. maintain its leadership while preventing adversaries and bad actors from outpacing AI development.
Agencies like the SEC, FDA, and FTC should consider establishing dedicated AI task forces or appointing pro-growth Chief AI Officers to clarify how their rules apply to AI innovations while ensuring compliance and risk mitigation.
As the regulatory landscape becomes simpler under the Executive Order “Unleashing Prosperity Through Deregulation,” regulators should be encouraged to use AI to aid their supervision and oversight.
AI excels at detecting anomalies, and there have been very successful use cases in detecting fraud in the financial services industry.
Encourage the adoption of a simpler, risk-tiered consequence framework, where AI systems with minimal to moderate risk face limited regulatory scrutiny while large-scale, mission-critical AI platforms undergo rigorous auditing and review. These high-impact systems must be checked for harmful biases in datasets—ideological or engineered. NIST AI and Cybersecurity frameworks remain critical in setting standards, and the U.S. should continue to rely on an independent, well-respected body to oversee AI risks. Additionally, a community-driven evaluation system, similar to X’s Community Notes, should be considered in the right places to ensure AI accountability, mitigating the risk of a few individuals shaping AI decisions through deliberate or inadvertent bias.
3) Modernize and Increase Adoption of AI for the U.S. Government
The U.S. government has a great opportunity to lead by example in demonstrating pro-growth, responsible AI adoption. It has a significant opportunity to upgrade outdated systems, enhance operational efficiency, detect fraud, and mitigate societal risks through AI integration.
A recent example is Transportation Secretary Sean Duffy’s announcement² that the FAA will modernize air traffic control systems over the next four years, leveraging AI to identify "hot spots" where close encounters between aircraft occur frequently. This type of AI-driven risk detection and efficiency improvement should be scaled across federal agencies to enhance security and productivity. Successful examples will help build public trust in the right way.
² Greg Wehner, "Sean Duffy proposes big plans to upgrade air traffic control systems, use AI to find 'hot spots'," Fox News, March 11, 2025.
4) Train the U.S. Workforce and Offer Free Courses
We recommend doubling down on efforts to encourage and support workforce training at every level, from apprenticeships and vocational programs to certifications in AI and cybersecurity. Although many organizations already offer free courses, access remains limited by lack of awareness and inconsistent quality.
We recommend a government-backed, standardized initiative to provide free AI and cybersecurity courses to every American. We are living in a digital and AI age, and it is a fundamental right of every American to learn more about AI, understand how to engage with AI systems responsibly, and use AI to enrich and improve their lives and well-being.
Today, there are wearable health devices that can detect AFib, monitor blood sugar levels, track sleep quality, and detect early signs of neurodegenerative diseases. AI-powered wearables are already helping prevent strokes and heart attacks and providing other critical, lifesaving alerts—but we are just getting started. Early warnings and prevention are key to reducing disease impact and improving health outcomes. As Benjamin Franklin famously said, “An ounce of prevention is worth a pound of cure.” Clearly, everyone should understand how to assess the risks of these technologies, and robust, broad user engagement may help weed out bad actors. Large corporations have implemented comprehensive online training programs, but access to such education should not be reserved for a select few. We say—let’s extend it to every American.
5) Encourage and Support Open Source
Encouraging and supporting open-source AI, with appropriate safeguards, will accelerate innovation, expand the development talent pool, and ensure the U.S. remains globally competitive in an increasingly complex technical landscape. Security considerations should remain central to open-source AI policies to mitigate risks from adversarial actors. Open-source AI is also a powerful tool for training the next generation of AI engineers, ensuring the U.S. remains a global leader in AI expertise.
Both open-source and proprietary technologies have essential roles to play in fostering maximum innovation, and the U.S. must strike the right balance to foster competition, security, and long-term AI leadership.
Sincerely,
Sumeet Chabria
Founder & CEO, ThoughtLinks
Donna Chabria
Senior Advisor, ThoughtLinks
This document is approved for public dissemination. The document contains no business-proprietary or confidential information. Document contents may be reused by the government in developing the AI Action Plan and associated documents without attribution.