Artificial intelligence has the potential to make work better and faster in almost every field, a potential that is genuinely revolutionary. More precise diagnoses and personalized therapies are just two examples from the possible spectrum.
However, it also harbours risks that are often difficult to assess. So what happens if AI remains unregulated? Could a wrong decision by a faulty AI application have life-threatening consequences? Questions upon questions, and no one really knows the answers. Nevertheless, the European Union has responded with the AI Act and the MDR (Medical Device Regulation) to create a comprehensive framework for regulating AI systems. For companies that use AI-supported technologies in the diagnosis of diseases or in other applications, this regulation means they have a lot of catching up to do. But first, a brief look at the structure of the AI Act. The AI Act entered into force on August 1, 2024, while the MDR has already applied since May 2021. The AI Act categorizes AI systems into four levels depending on their risk to safety and fundamental rights:
- Minimal risk: Largely unregulated applications such as spam filters and spell checkers.
- Limited risk: Transparency obligations for systems such as chatbots or AI-generated videos. Users need to know that they are interacting with an AI.
- High risk: Medical diagnostic systems and AI-based credit assessments must meet strict data protection and security requirements.
- Unacceptable risk: Systems such as social scoring or predictive policing that endanger fundamental rights are completely prohibited.
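The four tiers above can be expressed as a simple classification scheme. The following sketch is purely illustrative: the tier names follow the AI Act, but the example systems, the mapping, and all function names are assumptions for this demonstration, not part of the regulation itself:

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    MINIMAL = "minimal risk"            # largely unregulated (e.g. spam filters)
    LIMITED = "limited risk"            # transparency obligations (e.g. chatbots)
    HIGH = "high risk"                  # strict requirements (e.g. medical diagnostics)
    UNACCEPTABLE = "unacceptable risk"  # prohibited (e.g. social scoring)


# Illustrative mapping of example system types to tiers (an assumption
# for this sketch; real classification follows the Act's annexes).
EXAMPLE_SYSTEMS = {
    "spam_filter": RiskTier.MINIMAL,
    "chatbot": RiskTier.LIMITED,
    "medical_diagnosis": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "social_scoring": RiskTier.UNACCEPTABLE,
}


def is_prohibited(system: str) -> bool:
    """Systems in the unacceptable tier may not be placed on the EU market."""
    return EXAMPLE_SYSTEMS.get(system) is RiskTier.UNACCEPTABLE


print(is_prohibited("social_scoring"))    # True
print(EXAMPLE_SYSTEMS["medical_diagnosis"].value)
```

The key point the sketch captures is that the tier, not the technology, determines the obligations: two very different systems, such as medical diagnostics and credit scoring, fall under the same strict high-risk requirements.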
Despite the advanced features and functions of AI tools, the risks in the medical field cannot be dismissed. Tools and systems that perform or support anamnesis (the taking of a patient's medical history) are classified as "high risk". A faulty system could jeopardize patients' health and undermine trust in the companies involved. For reasons of patient protection, the EU has therefore passed the AI Act alongside the MDR. Together, the two regulations define the control and testing requirements that AI-driven medical devices must satisfy before they can be placed on the market.

The AI Act and the MDR are not intended as barriers to innovation, but as protective mechanisms for patients and the general public. Companies that act in good time and engage with the regulations early will be able to steadily strengthen their compliance and the trust of their customers and patients. Ultimately, it remains to be seen what advantages and disadvantages AI brings to medical and device technology and how the industry will integrate AI into new products, software, and applications. The ethical dimension should not, and must not, be neglected.