The Implementation
The Department of Health and Human Services has been using AI tools from Palantir Technologies to audit grants, applications, and job descriptions for compliance with executive orders targeting diversity programs and gender-related content, according to HHS's 2025 AI inventory published last month.
The deployment spans HHS's Administration for Children and Families (ACF), which oversees foster care, adoption systems, and child welfare funding. Palantir is the sole contractor identifying "position descriptions that may need to be adjusted for alignment with recent executive orders."
Credal AI, a startup founded by two Palantir alumni, handles grant review. The company's AI flags application files and "generates initial priorities for discussion" before routing findings to ACF's Program Office. Both systems are listed as "deployed," meaning actively in use.
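Neither vendor has published its methods, but the workflow the inventory describes (flag files, generate initial priorities, route findings to a program office) maps onto a familiar screen-and-route pattern. A minimal sketch in Python, with every name, keyword list, and threshold hypothetical:

```python
# Minimal sketch of a screen-and-route pipeline consistent with the workflow
# HHS describes: flag documents, score an initial priority, queue for humans.
# All identifiers, keyword lists, and thresholds here are hypothetical;
# neither Palantir nor Credal AI has published implementation details.
from dataclasses import dataclass

# Hypothetical term list; the actual screening criteria are not public.
FLAGGED_TERMS = ["diversity", "equity", "inclusion",
                 "environmental justice", "gender"]

@dataclass
class Flag:
    doc_id: str
    matched_terms: list[str]
    priority: float  # crude "initial priority for discussion"

def screen(doc_id: str, text: str) -> Flag | None:
    """Flag a grant application or position description for human review."""
    lowered = text.lower()
    hits = [term for term in FLAGGED_TERMS if term in lowered]
    if not hits:
        return None
    # More matched terms -> higher discussion priority, capped at 1.0.
    return Flag(doc_id, hits, min(1.0, len(hits) / len(FLAGGED_TERMS)))

def route(candidates) -> list[Flag]:
    """Build the review queue; the AI flags, it never rejects on its own."""
    return sorted((f for f in candidates if f is not None),
                  key=lambda f: f.priority, reverse=True)

if __name__ == "__main__":
    docs = {
        "grant-001": "This program advances equity and inclusion in rural care.",
        "grant-002": "Funding for neonatal intensive care equipment.",
    }
    for flag in route(screen(doc_id, text) for doc_id, text in docs.items()):
        print(f"{flag.doc_id}: matched {flag.matched_terms}, "
              f"priority {flag.priority:.2f}")
```

The property worth noticing is structural: `screen` and `route` only build a queue. Nothing in a pipeline shaped this way edits or rejects a document, which matches how HHS characterizes the systems' role.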
The Money
HHS has paid Palantir more than $35 million since January 2025 across multiple contracts. Credal AI received approximately $750,000 for its "Tech Enterprise Generative AI Platform." None of the Federal Register payment descriptions mention DEI or gender ideology screening.
This builds on existing HHS-Palantir relationships: $19 million for ARPA-H data infrastructure (2024) and $20 million for integrated care management (August 2025 to February 2027). HHS's AI inventory separately shows Palantir Foundry being used for grant spend prediction and reviewer selection.
The Orders
Executive Orders 14151 and 14168, both issued January 20, 2025, drive the screening. The first mandates elimination of federal programs and policies built around DEI, equity, or environmental justice; the second defines sex as an "immutable biological classification" and prohibits federal funds from promoting "gender ideology."
The orders direct OMB, OPM, and the Attorney General to lead enforcement. Each agency must "assess grant conditions and grantee preferences" to ensure compliance.
What's Notable
Neither Palantir nor HHS announced this use case publicly. The disclosure surfaced only through HHS's mandatory AI inventory, a transparency requirement that predates the current administration.
HHS's inventory confirms human review as the final decision layer. ACF staff conduct "final review" of AI-flagged content. This matters for compliance frameworks: the AI screens, humans decide.
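The inventory doesn't describe the interface, but the compliance-relevant property is enforceable in code: an AI flag is advisory state, and only a named human reviewer can move a document out of it. A minimal sketch under that assumption, with all names and statuses hypothetical:

```python
# Sketch of the "AI screens, humans decide" contract described in HHS's
# inventory. Names and statuses are hypothetical; only the division of
# labor (AI flags are advisory until ACF staff sign off) is sourced.
from enum import Enum

class Status(Enum):
    AI_FLAGGED = "ai_flagged"          # advisory; no effect by itself
    CLEARED = "cleared"                # human reviewer found no issue
    NEEDS_REVISION = "needs_revision"  # human reviewer required changes

def final_review(current: Status, reviewer: str, decision: Status) -> dict:
    """Record the human decision that actually changes a document's state."""
    if current is not Status.AI_FLAGGED:
        raise ValueError("only AI-flagged items reach final review")
    if decision is Status.AI_FLAGGED:
        raise ValueError("a human must resolve the flag, not re-flag it")
    # The audit record names a person, not a model.
    return {"status": decision.value, "decided_by": reviewer}

print(final_review(Status.AI_FLAGGED, "acf-reviewer-07", Status.CLEARED))
```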
The National Science Foundation began similar flagging of research proposals in early 2025, suggesting a pattern across agencies with grant-making authority.
The pattern is clear: Federal agencies are deploying commercial AI tools to operationalize policy directives at scale, with limited public disclosure of the technical implementations.