What is a safety class in software?

IEC 62304 introduced the concept of safety classification so that medical device manufacturers can adapt the effort for software documentation to the degree of possible damage that would be caused by a software error. This article helps you to determine the safety classes and to document them in compliance with IEC 62304.

Update: No more presumption of conformity for safety class A?

Definition of the safety classes

IEC 62304:2007 differentiates between three safety classes:

  • Class A: No injury or damage to health is possible
  • Class B: No SERIOUS INJURY is possible
  • Class C: DEATH or SERIOUS INJURY is possible

The standard defines a SERIOUS INJURY as an "injury or illness that directly or indirectly is life-threatening, results in permanent impairment of a body function or permanent damage to a body structure, or necessitates medical or surgical intervention to prevent permanent impairment of a body function or permanent damage to a body structure".

Although this definition is clear, the topic of safety classes is one of the most common questions in our free micro-consulting.

Changes through Amendment I

Why the safety classes had to be redefined

"Finally", one would like to say, there is a revised definition of the safety classes. IEC 62304:2007 only considered the severity of the possible damage. As a result, one could always construct an arbitrarily improbable case that would lead to serious damage. Consequently, almost all manufacturers classified their software in safety class C, and the intention of the standard, namely to concentrate documentation effort on the really relevant parts of the software, was defeated.

New definition of the safety classes

The revised or supplemented version of IEC 62304 now defines the safety classes as follows:

  • Safety class A: A software system falls into class A if it cannot contribute to a hazardous situation, or (and this is new) if the software system can contribute to a hazardous situation but does not lead to an unacceptable risk once appropriate risk control measures have been taken. These risk control measures must, however, lie outside the software system.
  • Safety class B: A software system falls into class B if it can contribute to a hazardous situation that still leads to an unacceptable risk after the risk control measures (again, implemented outside the software system), but the possible resulting harm is not a serious injury.
  • Safety class C: This class is defined like class B, except that the possible resulting harm is a serious injury or death.

The measures mentioned must act outside the software system. Read more about this below. The following sketch illustrates the resulting decision logic.
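
To make the decision logic tangible, here is a short, purely illustrative Python sketch. It is not taken from the standard; the function, the parameter names and the enum are invented, and all three inputs have to come from your risk management process.

```python
from enum import Enum

class SafetyClass(Enum):
    A = "A"
    B = "B"
    C = "C"

def classify_software_system(
    can_contribute_to_hazardous_situation: bool,
    unacceptable_risk_after_external_rcm: bool,
    possible_harm_is_serious_or_fatal: bool,
) -> SafetyClass:
    """Illustrative decision tree in the spirit of IEC 62304 Amendment 1.

    The probability of the software failure itself is assumed to be 100%
    and therefore does not appear as an input.
    """
    # Class A: no contribution to a hazardous situation, or the remaining
    # risk is acceptable once EXTERNAL risk control measures are considered.
    if not can_contribute_to_hazardous_situation:
        return SafetyClass.A
    if not unacceptable_risk_after_external_rcm:
        return SafetyClass.A
    # Otherwise class B or C, depending only on the severity of the
    # possible harm, never on its probability.
    return SafetyClass.C if possible_harm_is_serious_or_fatal else SafetyClass.B

# Example: the software can contribute to a hazardous situation, the risk
# remains unacceptable despite external measures, but only non-serious
# injury is possible -> class B.
print(classify_software_system(True, True, False))  # SafetyClass.B
```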

Changes from IEC 62304:2015 (Amendment 1) to the IEC 62304 "second edition"

In the decision tree of the second edition of IEC 62304, the almost absurd question from IEC 62304:2015 is gone:

"Does failure of the software result in unacceptable risk?"

With that question, there could only have been software of safety class A. Now it reads: "Considering the external RCM: Can the hazardous situation lead to unacceptable risk?"

That is not really great either: the probability now appears on the one hand in the risk and on the other hand in the word "can". This regularly requires further explanation.

100% probability still applies

As before, the standard says that the probability of a software error must be assumed to be 100%. This is misleading, because every manufacturer will object that their software is not 100% faulty. But that is not what the standard says either.

Rather, the standard says: if a software error is possible, its probability of occurrence should not be debated; instead it should be assumed that it will occur. The choice of safety class should be based only on the following decisions:

  1. Can a hazardous situation arise at all due to the software error?
  2. If so: Can the software error lead to unacceptable risks?
  3. If so: What is the resulting severity?

As you can see, none of these questions has anything to do with the probability of the software error occurring. If the error can occur, it is assumed that it will.

Conclusion: The assumption of a 100% probability of software errors applies when determining the safety class. Nobody claims that software errors occur 100% of the time.

Definition of "software system"

Amendment I does not change the definition of the term software system. Formulations such as "The MANUFACTURER shall assign to each SOFTWARE SYSTEM a software safety class" suggest that the software system does not represent the entirety of all software in the medical device, but rather is to be understood per PESS (programmable electrical subsystem) or per processor / memory area.

Purpose of the safety classes

The aim of the safety classes is to control the amount of documentation required.

| Class | 5.1 | 5.2 | 5.3 | 5.4 | 5.5 | 5.6 | 5.7 | 5.8 | 7 | 8 | 9 |
|-------|-----|-----|-----|-----|-----|-----|-----|-----|---|---|---|
| A     | x   | x   |     |     |     |     | (x) | x   | x | x | x |
| B     | x   | x   | x   | (x) | x   | x   | x   | x   | x | x | x |
| C     | x   | x   | x   | x   | x   | x   | x   | x   | x | x | x |

Table 1: Dependency of the documentation on the safety classes, listed by clause of the standard ((x) = only partially applicable)

For software of safety class A, manufacturers essentially only need to document the software requirements (5.2) and the software release (5.8); for class B, the software architecture (5.3) and software verification (5.5 to 5.7) are added; and for safety class C, additionally the detailed design (5.4).

Update: Since Amendment I, the software system tests (5.7) are also required for safety class A. The sketch below shows the resulting effort per class as a simple lookup.
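
As a rough orientation, the dependency shown in Table 1 can be written down as a simple lookup. The following Python sketch is our own illustration: the clause assignments follow the summary above (and the note on Amendment I), not an authoritative extract from the standard; several clauses (e.g. 5.4 and 5.5) apply only partially to class B.

```python
# Illustrative lookup only: which IEC 62304 clauses call for documented
# activities for which safety class. Assignments follow the article's
# Table 1 and are not an authoritative extract from the standard.
REQUIRED_CLAUSES = {
    "A": {"5.1", "5.2", "5.7", "5.8", "7", "8", "9"},        # 5.7 since Amendment I
    "B": {"5.1", "5.2", "5.3", "5.5", "5.6", "5.7", "5.8", "7", "8", "9"},
    "C": {"5.1", "5.2", "5.3", "5.4", "5.5", "5.6", "5.7", "5.8", "7", "8", "9"},
}

def additional_clauses(lower: str, higher: str) -> set:
    """Clauses that become applicable when moving to a higher class."""
    return REQUIRED_CLAUSES[higher] - REQUIRED_CLAUSES[lower]

print(sorted(additional_clauses("A", "B")))  # ['5.3', '5.5', '5.6']
print(sorted(additional_clauses("B", "C")))  # ['5.4'] -> essentially the detailed design
```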

However, the Johner Institute does not consider many of the discussions about class B versus class C to be very productive. What are the differences, after all? That you skip the detailed design or the unit tests? These are things that third-semester students learn. And as soon as you look in the direction of the FDA, this discussion becomes completely absurd, because there, regardless of the level of concern, all documents have to be created, just not submitted.

Reduction of the safety classes

Reduction according to IEC 62304:2007

IEC 62304 specifies: "If the RISK of death or SERIOUS INJURY arising from a software failure is subsequently reduced to an acceptable level (as defined by ISO 14971) by a hardware RISK CONTROL measure, either by mitigating the consequences of the failure or by reducing the probability of death or SERIOUS INJURY arising from that failure, the software safety class can be reduced from C to B."

"Hardware" here means either mechanics or at least a second system with an independent processor and memory area.

Even within a software system, individual components can have a lower safety class than the higher-level software system itself. However, the manufacturer must justify this, which is usually only possible with the help of a documented and suitable software architecture, as sketched below.
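
As an illustration of what such a justification could look like, here is a minimal sketch. It assumes a hypothetical class C software system with invented component names; it is not a format prescribed by IEC 62304.

```python
# Minimal, invented sketch: recording the safety class assigned to each
# software item together with the segregation rationale documented in
# the architecture. Component names and fields are hypothetical.
from dataclasses import dataclass

@dataclass
class SoftwareItem:
    name: str
    safety_class: str           # "A", "B" or "C"
    segregation_rationale: str  # why this item may be classified lower than the system

SYSTEM_SAFETY_CLASS = "C"

items = [
    SoftwareItem("dose-calculation", "C",
                 "Directly contributes to the hazardous situation."),
    SoftwareItem("report-export", "A",
                 "Runs in a segregated process and cannot influence the "
                 "dose calculation or its data (see architecture, clause 5.3)."),
]

# A class lower than the system's is only defensible if a documented,
# architecture-based rationale exists.
for item in items:
    if item.safety_class != SYSTEM_SAFETY_CLASS:
        assert item.segregation_rationale, f"{item.name}: rationale missing"
```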

Example

Does an emergency stop switch justify reducing the safety class of software by one class? That is the interesting question, not least because it exposes the whole dilemma of IEC 62304: on the one hand, the standard ignores probability in the safety classification; on the other hand, it speaks of risks in connection with the hardware measures, and risks by definition contain probabilities.

This means that the question can only be answered from risk management: if the emergency stop switch reduces the risk to an acceptable level, and only then (!), the reduction in the safety class is justifiable. Since in most cases, as in this one, the risk is reduced only by reducing the probability, you would have to justify by how many orders of magnitude the probability decreases and then check your risk acceptance matrix to see whether you end up in an acceptable range. The definition of your risk acceptance matrix, more precisely of its probability axis, must therefore be quantitative. Otherwise you have no justification for having reduced the safety class of your software from C to B or from B to A.

Strictly speaking, one should assume a 100% probability of errors in the software. That is, if the measure could reduce the probability by two orders of magnitude (even that will be difficult to argue), there would still be a 1% probability that the system fails. The probability of harm will be lower still. But is the probability of harm that remains after the measure actually acceptable? The following sketch shows the kind of arithmetic involved.
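
A small, purely illustrative calculation makes the point. Every figure below, including the reduction factor and the acceptance threshold, is invented for this example; in practice the values must come from your own quantitative risk acceptance matrix.

```python
# Illustrative arithmetic only; every number below is an assumption made
# up for this example, not a value from IEC 62304 or ISO 14971.
P_SOFTWARE_FAILURE = 1.0       # IEC 62304 assumption: the software error will occur
RCM_REDUCTION_FACTOR = 1e-2    # claimed effect of the emergency stop switch
                               # (two orders of magnitude - already hard to argue)
P_HAZARD_LEADS_TO_HARM = 0.1   # hypothetical: harm follows in 1 out of 10 cases

p_harm = P_SOFTWARE_FAILURE * RCM_REDUCTION_FACTOR * P_HAZARD_LEADS_TO_HARM

# Hypothetical acceptance threshold for serious injury, taken from a
# quantitative probability axis of a risk acceptance matrix.
ACCEPTABLE_P_SERIOUS_INJURY = 1e-5

print(f"probability of harm after the measure: {p_harm:.0e}")   # 1e-03
print("acceptable:", p_harm <= ACCEPTABLE_P_SERIOUS_INJURY)     # False -> no class reduction
```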

Of course, the assumption of 100% is absurd, which is why, in anticipation of the next version of IEC 62304, I already argue today with error probabilities of less than 100%.

Reduction according to Amendment I

Amendment I makes it clear that risk-minimizing measures can only reduce the safety class of the software system if they are implemented outside of the software system, for example in the system architecture.

This means that a reduction of the safety class is hardly conceivable for standalone software. However, an external measure could also be a routine clinical measure. Amendment I no longer speaks of hardware measures.

Do you need assistance?

Are you unsure which security class your software has? Are you unclear about the consequences of the classification? Professor Johner and his team are happy to help! Get in contact!


Interaction of the safety classes

Safety classes and classification according to MDD

"Is there a connection between the safety class according to IEC 62304 and the classification according to MDD?" is a frequently asked question.

There is no strict correlation between the classification according to MDD and the safety class according to IEC 62304. There cannot be. An example:

If you have a highly critical device, let's say class IIb, but in which the software has nothing to do with this criticality, this software can definitely be class A. Conversely, however, one can say that class C software is more likely not to be classified as class I. Because class C means that death or serious injuries can explicitly occur.

As a rule of thumb, one can assume that active therapy devices are classified as class IIa if nothing serious can happen, otherwise as class IIb. This is of course only a rule of thumb; in individual cases the classification rules must be worked through one by one and, in case of doubt (which often arises), discussed with the notified body.

The short answer to the question is: a high classification, e.g. IIb, does not allow one to infer a high software safety class. The other way around it works, though not with mathematical certainty.

Safety classes and level of concern

The safety classes are defined almost identically to the levels of concern. But there are two main differences:

  1. The level of concern influences the scope of the documentation to be submitted, not the documentation to be created.
  2. In contrast to the other levels of concern and to the safety classes, the minor level of concern includes the aspect of probability.

Safety classes and functions

Manufacturers are inventive: apparently enthusiastic about IEC 62304, someone came up with the idea of linking the functions (UI elements) of a medical device directly to safety classes. That way, they could directly identify all safety-critical software parts, verify them accordingly and trace everything. A good idea?

Unfortunately not: In the case of functions, i.e. elements of a user interface, a distinction is made between elements for input, for selection and for display. These are, for example, switches, buttons, drop-down menus, graphics, texts or input fields.

It may still be possible to assign a risk without knowing the architecture. For example, even without knowing the inner workings of a device, it is possible to estimate how severe and how likely the harm would be if data were displayed for the wrong patient or if the controller of an infusion pump received an incorrect flow rate for an infusion solution.

What one cannot estimate without knowledge of the architecture is the contribution that the software makes to this harm. And architecture means both software and hardware architecture. The software may not be involved at all, or an error in it would be detected and controlled by a hardware control measure. Then we would have a high risk, but class A software.

In other words: the functions, i.e. all types of elements of a user interface, regardless of whether they are implemented in hardware or software, allow a correlation with the risk, but not with a software safety class.

Examples: safety classes for certain products

a) Classification of monitoring software

IEC 62304 allows these classes to be reduced through hardware risk control measures. These control measures may themselves contain software. But what class does this “monitoring” software have?

The standard writes about this:

The MANUFACTURER must assign a software safety class to every SOFTWARE SYSTEM that contributes to the implementation of a RISK CONTROL measure. It is based on the possible effects of the HAZARD that is controlled by the RISK CONTROL measure.

This in turn means that the software with which the risk control measure is implemented has the same safety class as the software that is being monitored / controlled by it.

b) PACS

PACS software can only be assigned a safety class in accordance with IEC 62304 once the intended purpose has been precisely described. If, for example, the software is meant to be used to measure anatomical structures in preparation for an operation or for radiation planning, then (without wanting to anticipate a dedicated risk analysis) a serious injury is conceivable in the event of a software error. The software system would consequently be class C. That does not mean, however, that all components of this system are class C. Which ones are must be described in the architecture (IEC 62304, clause 5.3).

A valued employee of a medical technology manufacturer drew our attention to a blog post in which the author came to the following conclusion:

"The more you have to rely on software to treat the patient, to monitor its vital constants or to give a diagnosis, the higher the class."

What do you make of it? The assessment that the higher the dependency on the product, the higher the class, is problematic. The degree of dependency affects the likelihood of something happening, but not its severity. For the safety classification according to EN 62304, however, only the maximum severity is relevant.

A comment on the blog post in the context of PACS also says:

That kind of software is class B because it is not used alone in the diagnosis. It is always inserted in a chain of decisions for example, assessment by pairs, biopsy ... The software in its environment can not be the cause of a severe injury. There are always other measures of protection to prevent a problem if it is buggey or crashes.

The Johner Institute cannot accept the assessment that the software of a PACS viewer is class B either. Of course, a right-left reversal can lead to incorrect or delayed treatment (surgery, radiation). That has happened often enough. The fact that PACS are class IIb should also be an indication. The fact that there are "measures of protection to prevent" a problem only reduces the probability and therefore does not justify a classification other than C.

Typical misunderstandings and errors regarding the safety classes

The safety classification of software according to IEC 62304 is a recurring topic in our seminars, in-house workshops and consultations. The participants and companies are driven by the desire to lower the safety class as far as possible.

Read on to learn about the pitfalls that you should definitely avoid, especially with software of safety class A.

a) Assumption that the safety classes correspond to the levels of concern

In fact, the definitions of the safety classes and the levels of concern seem to coincide. However, in contrast to the safety classes, the levels of concern control the scope of the documentation to be submitted (!), not of the documentation to be created (!).

b) Assumption that the safety class can be derived from the MDD class

One of the arguments I then hear goes as follows: the product is only class I, so the software cannot be class C. That is not true in this form. There are class I products which, in the worst case, can lead to serious injury in the event of incorrect behavior. The "in the worst case" is a statement about the probability and thus about the risk. The safety classification, however, does not take probabilities into account. That is why it is a safety classification and not a risk classification.

c) Assumption that one can save a lot of documentation with class B.

The next assumption is that with class B you have to do and document significantly less. That is also only true to a very limited extent. Compared to class C, you only save the more detailed design - which by no means has to go down to the level of methods or even method signatures - and the dynamic component tests. But with all due respect: which development department with even a hint of professionalism does without unit tests?

d) Classification by the development department

I see another mistake in having the software development team determine the safety class. Software per se has no safety class. In the sense of an FTA, the class results from an analysis of the consequences of an externally "visible" device fault and from the analysis of how a PESS (programmable electrical subsystem) can contribute to such a device fault. The safety class is determined by the risk manager and the system architect, not by the software development department.

The result depends very much on the context of use: on the users, the type of use, and so on. Without knowing this context, you cannot define a safety class under any circumstances.

e) Assumption that components have a certain safety class

As just written, it is not your job as a developer to carry out the safety classification of the software (classes A, B, C according to IEC 62304). I write this because software engineers regularly ask me how to do it. In particular, they are asked to determine the safety class of components.

To be very clear: a software component can never be assigned a safety class per se. Neither a DLL nor any other component.

The safety class is derived from the surrounding software system, and the class of the software system is derived from the risk analysis.

Let me know (e.g. via the web form) if I can help you with the safety classification. I do that with pleasure, quickly, confidentially and usually even free of charge.

f) Segregation that is not comprehensible from the architecture

What also strikes us are rather "casual" justifications for why individual components of a software system have a lower safety class than the software system as a whole. This cannot be argued at all without a documented architecture that corresponds to reality.

g) Assumption that software is 100% defective

There are often discussions about how it can be that you can work with probabilities in the context of risk analysis, but you always have to assume 100% within the software (according to IEC 62304). A good question!

By definition, a risk is the combination of the severity and the probability of harm. In fact, IEC 62304 does not recognize any probabilities, only the safety classes A (no injury possible) to C (serious injury or death possible). Of course, the probability of a software failure is not 100%. IEC 62304 does not even want to discuss the probabilities. Rather, it wants to help the manufacturer concentrate the effort for documenting and testing software on the safety-critical areas of the software. And this effort should depend only on the safety class, i.e. on the possible consequences of an error within the software, and not on the probability. IEC 62304 says nothing more than that.

In the context of a risk analysis (e.g. based on an FMEA or FTA), one can certainly assume probabilities other than 100%, even within the software. These are two different philosophies that do not contradict each other, as the sketch below illustrates.
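
A minimal sketch of the two philosophies applied to the same software error; all figures are invented for illustration.

```python
# Invented figures, for illustration only.
severity = "serious injury"

# 1) Risk analysis (e.g. FMEA): probabilities may be estimated, even
#    for software failures; the risk combines severity and probability.
p_failure_estimated = 1e-4
risk = (severity, p_failure_estimated)

# 2) IEC 62304 safety classification: the probability of the software
#    failure is fixed at 100%; only the possible severity (after external
#    risk control measures) determines the class.
p_failure_for_classification = 1.0
safety_class = "C" if severity in ("serious injury", "death") else "B"

print("risk estimate:", risk)
print("safety class:", safety_class)
```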

h) Assumption that the safety class has something to do with direct or indirect damage

We also discussed this at my meeting with the notified bodies. The question was whether the degree of directness with which software can contribute to damage has an effect on the safety class.

For example, the software of a defibrillator has a high degree of directness, while software for calculating drug interactions has a lower one. One auditor said he would make sure that medical device manufacturers do not declare the physicians to be an "alibi man in the middle" who in reality has no proper overview of what the device is doing.

I agree that there are varying degrees of directness. But I question whether these have an effect on the safety class. The degree of directness has an impact on the likelihood that a software error will lead to damage. And it is precisely this likelihood that the safety classification does not take into account; only the severity counts.

How you can trick

IEC 62304 defines the various safety classes for software quite clearly. It also stipulates that the safety class can be reduced by one class, from C to B or from B to A, through hardware measures. Nonetheless, the notified bodies tell us how manufacturers seek and find "room for maneuver".

Imagine the following two scenarios:

Scenario 1: Measure before safety classification

A manufacturer places a medical device on the market in which, thanks to a special design in the form of hardware, a software error cannot cause any harm to patients or users. By definition, the software then has safety class A.

Scenario 2: Measure after safety classification

A manufacturer places the identical medical device on the market. This time, however, he first recognizes that a software error can lead to serious harm. The software is therefore class C. In order to minimize the risk, he chooses the same hardware measure that prevents harm to patients and users. The manufacturer may therefore reduce the safety class of the software to B.

Is the order decisive for classification according to IEC 62304?

So when determining the software safety class according to IEC 62304, is the sequence decisive and thus merely the choice of argumentation? The representatives of the notified bodies who were my guests agreed on the following line:

If a measure is so obvious that any graduate in electrical engineering or mechanical engineering would choose it, one can argue according to scenario 1. Otherwise it is really a measure that you want to see explicitly listed in the risk analysis.

Typical pitfalls when working in accordance with safety class A

IEC 62304 means well with us developers of medical software: we can leave uncritical components, those of safety class A, almost undocumented. No architecture, no code reviews, no documented tests. Wonderful, isn't it? But be careful! I consider the generous omission of any documentation for components of safety class A to be problematic for several reasons.

Pitfall 1: Meeting the essential requirements

MDD / IVDD

The Medical Device Directive, and thus the German Medical Devices Act, clearly requires that manufacturers adhere to software life cycle processes. There are notified bodies who are of the opinion (as am I, by the way) that software development which does without the elicitation and documentation of software requirements, the modeling of an architecture, code reviews or tests does not meet this requirement (compliance with software life cycle processes), regardless of whether the software falls into safety class A.

If a software component of safety class A is later reused in a different context and thus falls into a higher safety class, you have to catch up on the complete documentation for this component. This is more work than it would have been initially.

MDR / IVDR

The Medical Device Regulation MDR and the In-Vitro Diagnostic Regulation IVDR go one step further. Both require the following:

  1. State-of-the-art IT security
  2. Software verification
  3. Software validation (also in the intended usage environment and in the intended technical environment, e.g. hardware, operating system, network)
  4. Requirements for the technical environment, in particular for networks, hardware, operating systems
  5. Instructions for use specifying these requirements
  6. Description of the software and the data processing, especially the algorithms (special requirement of the IVDR)
  7. Representation of the design phases
  8. Description of the products and components

If a manufacturer only documents according to safety class A in conformity with IEC 62304:2006, one cannot assume that the general safety and performance requirements as well as the requirements for the technical documentation of the MDR or IVDR are met.

Even if the manufacturer carries out the software system tests (as required by Amendment 1 (A1:2015) of IEC 62304), auditors could question conformity, because at least points 3, 6, 7 and 8 in the above list do not appear to be sufficiently taken into account.

Pitfall 2: Safety classification and risk analysis

If you want to classify software, specifically a software system, according to safety, then a risk analysis must have preceded it. It is not strictly necessary to look closely at the probabilities, but definitely at the severity of the possible harm. Only if the result of this analysis shows that no harm can be caused by the software is safety class A justified by definition.

But is there really no conceivable case in which something could happen? With standalone software of safety class A, you should ask yourself whether you are sure that it is a medical device at all ...

Risk management has to begin before the safety classification, but it does not end there. For example, as part of post-market surveillance, you have to monitor whether your own assessments, which, for example, justify safety class A, were correct. Companies often fail to do this.

Pitfall 3: Safety class A versus minor level of concern

The FDA does not recognize safety class A and, above all, no completely undocumented and untested code. In an audit, that would be more than just a "minor non-conformity". The minor level of concern, which roughly corresponds to safety class A, regulates the scope of the documentation to be submitted, not of the documentation to be created! This means that if you want to sell your product in the USA, there is effectively no safety class A for you. Read more about the level of concern here.

Pitfall 4: Software quality

Documenting an architecture, testing and reviewing code: these are all activities that should not only serve to meet regulatory requirements. They are proven best practices. Anyone who thinks they can do without them because of safety class A is either developing trivial programs or has no idea about professional software development. The consequences of this approach are obvious:

  • The architecture will be inadequate; changes to the software will become disproportionately complex, their consequences hard to foresee, and they will often lead to errors.
  • The software will be faulty, which in the case of medical devices leads to frustration among users or to risks for patients - which should not be possible with safety class A software.
  • Development will take longer, because trial-and-error coding will never be as efficient as carefully eliciting (and documenting) the requirements and modeling the architecture.

Pitfall 5: Compliance with IEC 62304?

Some companies divide their software into different safety classes, in line with IEC 62304. But do the development colleagues even know which safety class applies to the component they are currently working on? Some companies already struggle to get developers to adhere to their own specifications at all, for example the software development SOP. And now they are supposed to do so depending on the safety class?

Conclusion

If you are developing software that you want to sell, develop all code according to safety class C. Sometimes the argument about the right safety class takes longer than creating the documents. Most people simply do not realize how little effort IEC 62304- and FDA-compliant documentation can involve and, above all, how much time it ultimately saves.

Would you like support in developing and documenting your software quickly and in compliance with IEC 62304 or FDA? My team and I are happy to help! Get in contact!