
Image from American Life League’s photostream on Flickr
Department Of Health And Human Services Rolls Out AI Tool
The Department of Health and Human Services (HHS) has introduced ChatGPT for all staff members to improve efficiency, but cautions employees to take proper care when handling confidential information.
In a rush? Here are the quick facts:
- Rollout overseen by CIO Clark Minor, former Palantir employee.
- Sensitive data like SSNs and bank details cannot be input.
- Concerns remain over bias in AI systems affecting patient care.
HHS sent an email to all staff announcing that ChatGPT would be rolled out across the organization. The email, sent by Deputy Secretary Jim O’Neill and titled “AI Deployment,” is part of an initiative led by Clark Minor, HHS Chief Information Officer and a former Palantir employee.
“Artificial intelligence is beginning to improve health care, business, and government,” the email reads, as first reported by 404Media.
“Our department is committed to supporting and encouraging this transformation. In many offices around the world, the growing administrative burden of extensive emails and meetings can distract even highly motivated people from getting things done. We should all be vigilant against barriers that could slow our progress toward making America healthy again.”
The email adds, “I’m excited to move us forward by making ChatGPT available to everyone in the Department effective immediately. Some operating divisions, such as FDA and ACF [Administration for Children and Families], have already benefitted from specific deployments of large language models to enhance their work, and now the rest of us can join them. This tool can help us promote rigorous science, radical transparency, and robust good health. As Secretary Kennedy said, ‘The AI revolution has arrived.’”
HHS staff are instructed to log in at go.hhs.gov/chatgpt using government email addresses and can ask ChatGPT questions, refine answers, and consult multiple perspectives.
“Of course, you should be skeptical of everything you read, watch for potential bias, and treat answers as suggestions. Before making a significant decision, make sure you have considered original sources and counterarguments. Like other LLMs, ChatGPT is particularly good at summarizing long documents,” the email says.
Minor has “taken precautions to ensure that your work with AI is carried out in a high-security environment,” the email adds, noting that most internal data, including procurement-sensitive information, can be entered safely.
It warns, however, that ChatGPT “is currently not approved for disclosure of sensitive personally identifiable information (such as SSNs and bank account numbers), classified information, export-controlled data, or confidential commercial information subject to the Trade Secrets Act.”
The rollout comes amid broader federal efforts to integrate AI, and it has raised concerns about bias in AI systems, especially in programs such as Medicare and Medicaid that determine patient eligibility for treatment.