The impact of artificial intelligence is growing in every field, and healthcare is no exception. To understand the risks of integrating AI, we spoke with a cybersecurity expert.
Artificial intelligence has been making inroads into medical services for several years, initially in areas less visible to the patient. It has been, and continues to be, used in patient flow management – from intake requests to the automation of administrative processes. Gradually, however, its role has moved beyond that stage, and AI is increasingly being used in diagnostics as well.
This change is most easily seen in radiology, where algorithms can analyze medical images quickly and with a high degree of accuracy, giving doctors an additional tool in interpreting the data. At the same time, AI-based technologies are making their way into other specialties as well, outlining a broader transformation of how medicine is practiced.
Integration is not without its challenges, however. From reliance on data quality to cybersecurity vulnerabilities and the lack of clear standards, AI also brings risks that cannot be ignored. Dragoș Ionică, Senior Manager at Deloitte Romania, discusses these limits and how prepared the medical system is to manage them. Ionică was part of the team of Romanian security experts who finished as world runners-up at the most recent edition of the Biohacking Village competition at the DEF CON conference, where the challenge was testing the vulnerabilities of medical equipment.
Health innovation will also be discussed at the “Healthcare Forum 2026: The Truth about Romania’s Health”, an “Adevărul” brand event that will take place on Tuesday, March 24, 2026. The event – attended by decision-makers, patient representatives, doctors and experts – will feature debates on strategic topics such as digitization through the PNRR, the integration of AI in diagnosis and the elimination of gaps between rural and urban areas.
“As a patient, I would like to know that my device is constantly being monitored, tested and updated”
The first question concerned the most common security mistake in telemedicine, a segment that is increasingly integrating artificial intelligence and is seen as a way to improve access to medical services for patients, especially those in disadvantaged areas. “The most common mistake I see in telemedicine is treating it as a simple digital service, not as a direct extension of the medical act. The moment a consultation, a test result or a vital parameter is transmitted online, it is no longer just about IT; it is about patient safety. Applications are often launched quickly to meet a real need, but security is integrated later or only superficially,” explains the expert. Weak authentication, a lack of adequate encryption or uncontrolled integration with other systems create exploitable gaps, Ionică continues; the problem is not the technology, but the fact that security is not designed from the start as part of the medical architecture.
Do you think hospitals are ready for connected medicine? Are medical staff getting enough digital training?
There are very well-prepared hospitals, but overall maturity is uneven. The IT infrastructure of many medical facilities has been built in stages over many years, and integrating new connected devices on top of legacy systems adds complexity and risk. Medical staff are extremely well trained clinically, but we cannot assume that a doctor or nurse automatically has cybersecurity skills. In many cases, digital training is insufficient or ad hoc rather than continuous. Connected medicine requires a cultural change: security must be understood as part of patient safety, not as a technical formality.
What is the most worrying scenario you have tested?
The most worrying scenario is not data theft, although that is serious too. It is when a critical medical device can be accessed or influenced from the network. In controlled testing, we have seen situations where a lack of segmentation or of robust authentication mechanisms allowed access to sensitive equipment. When we are discussing devices that influence vital parameters or therapeutic processes, we are no longer talking about an IT incident but about a potential direct risk to the patient. This intersection between cybersecurity and the medical act is the area that demands the most responsibility.
If you were to become a patient tomorrow, what connected medical device would you not use?
I wouldn’t avoid a particular type of device, because the technology itself isn’t the problem. But I would avoid a device that isn’t updated, doesn’t have a transparent security policy, or doesn’t demonstrate that the manufacturer takes the protection of data and functionality seriously. As a patient, I would like to know that my device is constantly being monitored, tested and updated. In digital medicine, trust must be built through transparency and technical accountability.
Can a medical smartwatch or heart sensor be compromised?
Theoretically, yes. Any connected device can become a target, especially if the ecosystem around it – the mobile app, the online account, the cloud infrastructure – has vulnerabilities. Often the device itself is not the weakest link, but the infrastructure that collects and processes the data. Attacks are not necessarily spectacular, but may involve interception or manipulation of data. It is important to emphasize that the risk exists, but it can be significantly reduced by well-implemented security standards.
“In the era of AI, data protection is no longer just a matter of privacy”
Have you encountered vulnerabilities that could alter patient data?
Yes, in authorized testing contexts we have identified situations where the lack of appropriate controls allowed the modification of parameters or data transmitted between systems. In an extreme scenario, if data integrity is not rigorously verified, information can be tampered with before reaching analytics systems or medical decision support applications. In modern medicine, data is the foundation of diagnosis, and any alteration of it can have clinical consequences.
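The integrity checks Ionică alludes to can be illustrated with a minimal sketch: attaching a keyed hash (HMAC) to each transmitted reading, so that any alteration in transit is detected before the data reaches an analytics or decision-support system. The field names and key handling below are purely illustrative assumptions, not drawn from any specific medical platform.

```python
import hmac
import hashlib
import json

# Illustrative only: in practice the key is provisioned per device and
# stored in secure hardware, never hard-coded.
SECRET_KEY = b"shared-device-key"

def sign_reading(reading: dict) -> dict:
    """Attach an HMAC-SHA256 tag computed over the canonical JSON of the reading."""
    payload = json.dumps(reading, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"reading": reading, "tag": tag}

def verify_reading(message: dict) -> bool:
    """Recompute the tag and compare in constant time; tampered data fails."""
    payload = json.dumps(message["reading"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

msg = sign_reading({"patient_id": "anon-042", "heart_rate": 72})
print(verify_reading(msg))          # untouched message verifies
msg["reading"]["heart_rate"] = 190  # simulated in-transit manipulation
print(verify_reading(msg))          # altered data is rejected
```

A scheme like this only detects tampering; it does not prevent interception, which is why it is typically layered with transport encryption such as TLS.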
Do you think we will have specialist ‘medical hackers’? Are healthcare facilities and users thinking about this enough?
There are already specialists who focus exclusively on the security of medical devices and clinical infrastructures – myself among them. In parallel, criminal groups are becoming increasingly interested in the medical field, because the impact of an attack on a hospital is immediate and the pressure to pay is high. I think there is still not enough awareness that a cyber incident in healthcare is not just a data issue but a matter of medical continuity and patient safety. As digitization advances, specialization – both defensive and offensive – will inevitably increase.
Is digital medicine safe or just inevitable?
It’s inevitable, but it doesn’t have to be unsafe. AI, remote monitoring and predictive analytics can save lives and prevent complications before they occur. But safety doesn’t come automatically with technology. It must be designed, tested and continuously audited. Digital medicine can be secure if it is treated as critical infrastructure, not as a simple online service.
If the data is manipulated, can AI misdiagnose?
Yes. AI is dependent on data quality and integrity. If the data is manipulated – either by mistake or on purpose – the system can generate the wrong diagnosis or recommendation. Artificial intelligence models don’t “guess” clinical reality, they process what they receive as input. Therefore, security and validation of data flows are essential. In the age of AI, data protection is no longer just a matter of privacy, but one of medical accuracy.
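Ionică’s point that models “process what they receive as input” implies a practical safeguard: validating data flows for plausibility before they reach a model. A minimal sketch, using hypothetical field names and illustrative bounds rather than any clinically validated ranges, might look like this:

```python
# Plausibility bounds for incoming vital signs (illustrative values only,
# not clinical reference ranges).
VALID_RANGES = {
    "heart_rate": (20, 250),      # beats per minute
    "spo2": (50, 100),            # oxygen saturation, %
    "temperature": (30.0, 45.0),  # degrees Celsius
}

def validate_vitals(sample: dict) -> list:
    """Return the fields whose values are missing or outside plausible bounds."""
    suspect = []
    for field, (lo, hi) in VALID_RANGES.items():
        value = sample.get(field)
        if value is None or not (lo <= value <= hi):
            suspect.append(field)
    return suspect

clean = {"heart_rate": 72, "spo2": 98, "temperature": 36.6}
tampered = {"heart_rate": 7200, "spo2": 98, "temperature": 36.6}
print(validate_vitals(clean))     # no suspect fields
print(validate_vitals(tampered))  # flags the implausible heart rate
```

Range checks like this catch only gross corruption; subtle, in-range manipulation still requires the integrity and authentication controls discussed earlier in the interview.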