DeepSeek AI in China: Safety Warnings

China’s hospitals are adopting DeepSeek AI at an alarming rate, raising significant safety concerns among medical experts. A viewpoint published in JAMA highlights the potential risks of this rapid deployment, as over 300 hospitals have integrated the AI despite known issues with diagnostic accuracy.

"Too Fast": Chinese Doctors Warn of DeepSeek AI Adoption

DeepSeek AI is now in extensive use across more than 300 medical institutions in China. However, a cautious voice has emerged from within the country’s medical community. A research viewpoint published in JAMA, led by Huang Tianyin, the founding dean of Tsinghua University’s School of Medicine, warns that the rapid deployment of DeepSeek’s large language models in clinical settings may be "too fast, too early."

These figures paint a compelling picture of the AI-driven transformation occurring within China’s healthcare sector. The deployment of DeepSeek in tertiary hospitals signifies a major shift in how artificial intelligence is utilized, going beyond diagnostic assistance to encompass hospital administration, research facilitation, and patient management. The model has demonstrated notable efficiency gains, including a reported 40-fold increase in patient follow-up efficiency. This widespread adoption stems from DeepSeek’s unique position as an open-source, low-cost alternative to proprietary AI systems.

The LLMs DeepSeek-V3 and DeepSeek-R1, developed by a subsidiary of a Chinese investment firm, offer the twin advantages of low cost and open-source accessibility, significantly lowering the barrier to entry for LLM usage. Chinese healthcare firms have been swift to adopt them: over 30 mainland healthcare companies have integrated the models into their operations, including Hengrui Pharmaceuticals Co. Ltd. and Yunnan Baiyao Group Co. Ltd.

Following the adoption of multiple open-source AI models to increase operational efficiency and reduce costs, Berry Genomics Co. Ltd. saw its share price increase by over 71%.

Warning Signs: Clinical Safety Under Scrutiny

Despite the enthusiasm surrounding DeepSeek AI, the JAMA research viewpoint raises significant red flags. Huang Tianyin, an ophthalmology professor and former medical director of Singapore National Eye Centre, along with his collaborators, identifies several key concerns.

Researchers caution that DeepSeek’s propensity to generate "seemingly plausible but factually incorrect outputs" could lead to "serious clinical risks," despite its robust reasoning capabilities. This phenomenon, known as AI hallucination, is particularly dangerous in medical contexts where accuracy can be a matter of life and death.

The research team emphasizes how healthcare professionals might over-rely on or uncritically accept DeepSeek’s outputs, potentially leading to diagnostic errors or treatment biases. More cautious clinicians may face the burden of validating AI outputs in time-sensitive clinical environments.

Infrastructure Challenges and Security Vulnerabilities

In addition to clinical accuracy concerns, the rapid deployment of DeepSeek AI in Chinese hospitals has exposed significant cybersecurity vulnerabilities. While many hospitals opt for private, on-site deployments to mitigate security and privacy risks, the study argues that this approach "shifts security responsibilities to individual healthcare institutions," many of which lack comprehensive cybersecurity infrastructure.

Recent cybersecurity research compounds these concerns. Studies indicate that cybercriminals use DeepSeek 11 times more often than other AI models, suggesting its guardrails are comparatively easy to circumvent. A study by Cisco found that DeepSeek failed to block harmful prompts in security evaluations, including prompts related to cybercrime and disinformation.

DeepSeek’s open-source nature, while enhancing accessibility, also introduces unique security challenges: anyone can download and modify the model, altering not only its functionality but also its safety mechanisms, which broadens the range of opportunities for exploitation.

Real-World Impact: Stories from the Clinical Frontlines

The integration of DeepSeek AI in Chinese hospitals is already beginning to transform the dynamics of patient-physician interactions. A viral video on Douyin showed a frustrated doctor, challenged by a patient using DeepSeek, discovering that medical guidelines had indeed been updated and that the AI was correct.

This anecdote illustrates both the potential and the perils of AI adoption in healthcare. While the technology can help keep medical practices current, it also challenges traditional hierarchies and introduces new sources of uncertainty in clinical decision-making.

A "Perfect Storm" of Security Risks

Researchers argue that China’s unique healthcare landscape is creating a "perfect storm" of clinical safety risks. They cite gaps in primary care infrastructure along with high smartphone penetration rates as contributing factors. They note that "vulnerable populations with complex medical needs now have unprecedented access to AI-driven health advice but often lack the clinical oversight needed for safe implementation."

The democratization of medical AI access, while potentially beneficial for healthcare equity, raises serious questions about the quality and safety of care in resource-constrained environments where proper oversight may be lacking.

Geopolitical Implications and Data Privacy

The rapid adoption of DeepSeek AI in Chinese hospitals has not gone unnoticed internationally, and some countries have already taken precautionary measures. Italy, Taiwan, Australia, and South Korea have blocked or banned access to the application on government devices over concerns that its data-management practices pose a threat to national security.

Privacy experts have raised alarms regarding data collection and storage. The Chinese chatbot may pose a national security risk because "this data, aggregated, can be used to gain insight into populations, or user behavior, which could be used to create more effective phishing attacks or other malicious manipulation campaigns."

Regulatory Gaps

Despite its widespread use, China’s regulatory framework has struggled to keep pace with the rapid deployment of AI in healthcare. Current regulatory interpretations permit artificial intelligence to augment, but not replace, human diagnostic judgment, suggesting that its integration into healthcare delivery should still be approached with caution.

Notably, no medical AI products are included in China’s National Basic Medical Insurance, indicating continued skepticism about the technology’s reliability. That being said, the story of DeepSeek AI in Chinese hospitals represents a microcosm of the broader challenges facing AI adoption in critical sectors worldwide.

While the technology offers enormous potential for improving healthcare delivery and reducing costs, the warnings from medical researchers highlight the necessity of careful, methodical implementation.

Recent research has found DeepSeek models to be reasonably accurate on specific clinical tasks, such as assigning Deauville scores for lymphoma patients, but still acknowledges substantial gaps compared with human clinicians. These accuracy gaps, coupled with security vulnerabilities and regulatory challenges, suggest that the current pace of adoption may indeed be "too fast, too early."

Conclusion: A Critical Juncture

As China continues to push forward with "smart hospitals" and an AI-driven healthcare transformation, the integration of DeepSeek AI in Chinese hospitals serves as both a testament to technological innovation and a cautionary tale about the risks of rapid deployment. The concerns raised by Huang Tianyin of Tsinghua University’s School of Medicine and his colleagues are not an argument against progress, but rather a call for responsible innovation that prioritizes patient safety alongside technological advancement.

The challenge moving forward will be to strike the right balance between leveraging the undeniable benefits of AI in healthcare and implementing robust safeguards to protect patients from the risks of premature or under-regulated AI deployment.

The ongoing debate surrounding DeepSeek AI in Chinese hospitals ultimately reflects a fundamental question facing healthcare systems worldwide: how fast is too fast when it comes to integrating powerful AI systems into life-critical medical applications? The answer to this question will shape the future of digital health, not only in China, but across the globe.