\section{Introduction}

This thesis describes the application of mathematical (formal) techniques to the design of safety critical systems. The initial motivation for this study was to create a system applicable to industrial burner controllers. The methodology developed was designed to cope with both the deterministic approach of specific `simultaneous failures' \cite{EN298}\cite{EN230}\cite{EN12067} and the probability of dangerous fault approach \cite{EN61508}. The visual notation developed was initially designed for electronic fault modelling. However, it was realised that it could also be applied to the mechanical and software domains. This changed the target of the study slightly, to encompass these three domains in a common notation.

\section{Background}

I completed an MSc in Software Engineering in 2004 at Brighton University while working for an engineering firm as a software engineer. The firm specialises in industrial burner controllers. Industrial burners are potentially very dangerous industrial plant. They are generally left running unattended for long periods. They are subject to stringent safety regulations and must conform to specific `EN' standards. For a non-safety critical product one can merely comply with the standards and `self~certify' by applying a CE mark. Safety critical products are categorised and listed; these require certification by an independent `competent body' recognised under European law. The certification process typically involves stress testing with repeated operation cycles over a specified range of temperatures, electrical stress testing with high voltage interference, power supply voltage ranges with surges and dips, electrostatic discharge testing, and EMC (Electro-Magnetic Compatibility) testing.

A significant part of this process, however, is `static testing'. This involves examining the design of the products from the perspective of environmental stresses, natural input fault conditions\footnote{For instance, in a burner controller, the gas supply pressure reducing.}, component failures, and the effects these could have on safety. Some static testing involves checking that the germane `EN' standards have been complied with\footnote{For instance, protection levels of the enclosure, or down-rating of electrical components.}. Failure Mode Effects Analysis (FMEA) was also applied. This involved looking in detail at selected critical sections of the product and proposing component failure scenarios. For each failure scenario proposed, either a satisfactory answer was required, or a counter proposal to change the design to cope with the theoretical component failure eventuality. FMEA was time consuming and, being directed by experts, undoubtedly ironed out many potential safety faults before the product saw the light of day. However, it was quickly apparent that only a small proportion of component~failure modes was considered. There was also no formalism: the component~failure~modes investigated were not analysed within any rigorous or mathematically proven framework.

\subsection{Blanket Risk Reduction Approach}

The suite of tests applied for a certified product amounts to a `blanket' approach; that is to say, by applying electrical, repeated operation, and environmental stress testing it is hoped that the majority of latent faults are discovered.
The FMEA and static testing only looked at the most obviously safety critical aspects, and at a small minority of the total component base for a product. Systemic faults, or mistakes, are missed by this form of static testing.

\subsection{Possibility of applying mathematical techniques to FMEA}

My MSc project was a diagram editor for constraint diagrams. I wanted to apply constraint diagram techniques to FMEA and began thinking about how this could be done. One obvious factor was that a typical safety critical system could have more than 1000 component parts, each of which would typically have several failure modes. Trying to apply a rigorous methodology to an entire product was going to be impractical: to do this with complete coverage, each component failure mode would have to be checked against the other thousand or so components for influence, and a determination of the effects on the system would then have to be made. Millions of checks would thus have to be performed, and as FMEA is an `expert only', time consuming technique, this idea was obviously impractical. Note also that most of the checks made would be redundant: most components affect the performance only of the few components they are placed to work with to perform some particular low-level function.

\paragraph{Top down Approach}

A top down approach has several potential problems. By its nature, it means that at the start of the process a set of system level (top level) faults or undesirable outcomes is defined. The system must then be broken down into modules, deciding which of these can contribute to a system level fault mode. Potentially, failure modes, whether from components or from the interaction between modules, can be missed. A disturbing example of this is the 1986 NASA space shuttle disaster, where the analysis missed the fault mode of an `O' ring. This was made even worse by the fact that the `O' ring had a specified temperature range, and the probability of this fault occurring rose dramatically below that range. This was a known and documented feature of a safety critical component, and it was ignored in the safety analysis.

\paragraph{Bottom-up Approach}

A bottom-up approach looked impractical at first, due to the sheer number of component failure modes in a typical system. However, were this bottom-up approach to be modular (reducing the order of cross checking), and to build a hierarchy of modules rising up until all components are covered, we could model an entire complex system. This is the core concept behind this study. Working from the bottom up, at the lowest level we take the smallest functional~groups of components and analyse these, obtaining a set of failure modes for each functional~group. We can then treat these as `higher level' components and combine them to form new functional~groups. In this way all failure modes from all components must, at the very least, be considered. A hierarchy is also formed, with the top level errors emerging naturally from the lower levels of analysis. Unlike a top~down analysis, we cannot miss a top level fault condition.

\paragraph{Multi-discipline}

Most safety critical systems are composed of mechanical, electrical and computing elements. A tragic example of mechanical and electrical elements interfacing to a computer is found in the Therac-25 radiation therapy machine. With no common notation to integrate the safety analysis between the electrical/mechanical and computing domains, synchronisation errors occurred that were in some cases fatal.
\paragraph{Requirements for a rigorous FMEA process}

It was determined that any process applying FMEA in a rigorous and complete manner (in terms of component coverage) had to be a bottom~up process, to eliminate the possibility of missing component failure modes. It also had to converge naturally to a failure model of the system: it had to take potentially thousands of component failure modes and simplify these into system level errors. To analyse the large number of component failure modes, and resolve these to perhaps a handful of system failure modes, would require a process of modularisation from the bottom~up.

\begin{list}{$*$}{}
\item The analysis process must be `bottom~up'.
\item The process must be modular and hierarchical.
\item The process must be multi-discipline, and must be able to represent hardware, electronics and software.
\end{list}

\section{Safety Critical Systems}
%
%How safe is "safe"?
%The word "safety" is too general—it really doesn't mean anything definitive. Therefore, we use terms such as safety-related and safety-critical.
%
%A safety-related device provides or ensures safety. It is required for machines/vehicles, which cause bodily harm or death to human being when they fail. A safe state can be defined (in other words, safety-related). In case of a buzz saw, this could be a motor that seizes all movements immediately. The seizure of movement makes the machine safe at that moment. IEC 61508 defines the likelihood of failures of this mechanism, the Safety Integrity Levels (SIL). SIL 3 is defined as the likelihood of failing less than 10-7% per hour. This is a necessary level of safety integrity for products such as lifts, where several people's lives are endangered. The buzz saw is likely to require SIL 2 only, it endangers just one person.
%
%Safety-critical is a different matter. To understand safety-critical imagine a plane in flight: it is not "safe" to make all movement stop since that would make the plane crash. A safe state for a plane is in the hangar, but this is not an option when you're in flight. Other means of ensuring safety must be found. One method used in maritime applications is the "CANopen flying master" principle, which uses redundancy to prevent failure. For the above example an SIL 4, meaning likelihood of failing less than 10-8% per hour is necessary. This is also true for nuclear power station control systems, among other examples.
%
\subsection{General description of a Safety Critical System}

A safety critical system is one upon which lives may depend, or which has the potential to become dangerous \cite{sccs}.
%An industrial burner is typical of plant that is potentially dangerous.
%An incorrect air/fuel mixture can be explosive.
%Medical electronics for automatically dispensing drugs or maintaining
%life support are examples of systems that lives depend upon.

\subsection{Two approaches: Probabilistic and Deterministic}

There are two main philosophies applied to the certification of safety critical systems.

\paragraph{Probabilistic Safety Measures}

The first is to specify a generally acceptable number of failures per hour of operation\footnote{The common metric is Failure in Time (FIT) values: failures per ${10}^{9}$ hours of operation.}, or a given statistical failure rate on demand. This is the probabilistic approach, and it is embodied in the European standard EN61508 \cite{EN61508} (international standard IEC~61508).
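To give a feel for these units, consider an illustrative component (the figures here are hypothetical, not taken from any standard) with a constant failure rate of 100 FIT, that is, 100 failures per $10^{9}$ hours of operation:
\[
\lambda = \frac{100}{10^{9}\;\mbox{hours}} = 10^{-7}\;\mbox{failures per hour},
\qquad
MTTF = \frac{1}{\lambda} = 10^{7}\;\mbox{hours} \approx 1140\;\mbox{years}.
\]
A product budgeted to a dangerous failure rate of at most $10^{-7}$ per hour could therefore afford only one such component in its critical path, assuming failures are independent and, pessimistically, that all of the component's failure modes are dangerous.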
\paragraph{Deterministic Safety Measures}

The second philosophy, applied in application specific standards, is to investigate the components of sub-systems in the critical safety path, examine their failure modes, and ensure that these cannot cause dangerous faults.
%With the application specific standards detail
%specific to the process are
The simplest deterministic safety measure is to require that no single component failure mode can cause a dangerous error. This philosophy is first mentioned in aircraft safety operational research studies of WWII, where potential single faults (usually mechanical) were traced to catastrophic failures \cite{boffin}. EN298, the European gas burner standard, goes further than this and requires that no combination of two component faults may cause a dangerous condition.
% \begin{example}
% \label{exa1}
% Test example
% \end{example}
%
% And that is example~\ref{exa1}

\subsection{Overview of regulation of safety Critical systems}

A later chapter deals with this specifically; a quick overview is given here.

\subsubsection{Overview of system analysis philosophies}

\begin{itemize}
\item General safety standards
\item Specific safety standards
\end{itemize}

\subsubsection{Overview of current testing and certification}

Again, a later chapter deals with this specifically; an overview is given now. A modern industrial burner has mechanical, electronic and software elements that are all safety critical. That is to say, unhandled failures could create dangerous faults.
%To add to these problems
%Operators are often under pressure to keep them running. A boiler supplying
%heat to a large greenhouse complex could ruin crops
%should it go off-line. Similarly a production line relying on heat or steam
%can be very expensive in production down-time should it fail.
%This places extra responsibility on the burner controller.
%
% This needs to become a chapter
%\subsection{Mechanical components}
%describe the mechanical parts - gas valves, dampers
%electronic and software
%give a diagram of how it all fits together
%\subsection{electronic Components}
%
%\subsection{Software/Firmware Components}
%
%\subsection{A high level Fault Hierarchy for an Industrial Burner}
%
%This section shows the component level, leading up higher and higher in the abstraction level
%to the software levels and finally a top level abstract level. If the system has been
%designed correctly no `undetected faults' should be present here.

\section{An Outline of the FMMD Technique}

The FMMD methodology takes a bottom up approach to the design of an integrated system. Each component is assigned a well defined set of failure modes. The system under inspection is then searched for functional groups of components that perform simple, well defined tasks. These functional groups are analysed with respect to the failure modes of their components. The functional~group, after analysis, has its own set of derived failure modes; the number of derived failure modes will be less than or equal to the sum of the failure modes of all its components. A derived set of failure modes is at a higher abstraction level. We can therefore treat the functional~group as a component in its own right, with its own set of failure~modes: we create a `derived component' and assign it the derived failure modes as analysed from the functional~group. Derived components may now be used as building blocks to model the system at ever higher levels of abstraction, building a hierarchy until the top level is reached. Any unhandled faults will appear at this top level and will be `un-resolved'. A formal description of this process is given in Chapter \ref{fmmddefinition}.
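To make the shape of one analysis step concrete before the formal treatment, the following minimal Python sketch models it. The component names, failure modes and the derivation table (which stands in for the expert analysis, and assumes a divider with R1 on the supply side and R2 to ground) are all hypothetical.

\begin{verbatim}
# Minimal sketch of one bottom-up FMMD analysis step (illustrative only).

class Component:
    """A component (base or derived) with a set of failure modes."""
    def __init__(self, name, failure_modes):
        self.name = name
        self.failure_modes = set(failure_modes)

def analyse(name, functional_group, derivation):
    """Analyse a functional group: map each component failure mode to a
    derived (higher level) failure mode, yielding a derived component.
    `derivation` is the expert analysis, here a simple lookup table."""
    derived = set()
    for component in functional_group:
        for fm in component.failure_modes:
            derived.add(derivation[(component.name, fm)])
    # The derived set is never larger than the sum of its inputs.
    assert len(derived) <= sum(len(c.failure_modes) for c in functional_group)
    return Component(name, derived)

# A hypothetical potential-divider functional group (R1 top, R2 bottom):
r1 = Component("R1", {"OPEN", "SHORT"})
r2 = Component("R2", {"OPEN", "SHORT"})
derivation = {("R1", "OPEN"): "LOW",  ("R1", "SHORT"): "HIGH",
              ("R2", "OPEN"): "HIGH", ("R2", "SHORT"): "LOW"}
divider = analyse("PotentialDivider", [r1, r2], derivation)
print(divider.failure_modes)  # {'HIGH', 'LOW'}: 2 derived from 4 (order may vary)
\end{verbatim}

The derived component returned by \texttt{analyse} can itself be placed into a higher level functional group, which is what allows the hierarchy to be built.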
%This principally focuses
%on simple control systems for maintaining temperature
%and for industrial burners.
%It is hoped that a general mathematical
%framework is created that can be applied to other fields of safety critical engineering.

\subsection{Automated Systems and Safety}

Automated systems, as opposed to manual ones, are now the norm in the home and in industry. Automated systems have long been recognised as being more efficient and more accurate than a human operator, and the motivation for automating a process is now more likely to be the cost savings arising from better efficiency than the saving of a human operator's salary \ref{burnereffency}. For instance, early automated systems were mechanical, with cams and levers simulating control functions. A typical control function could be the fuel/air mixture profile curves over the firing range. Because fuels vary slightly in calorific value, and air density changes with the weather, no fixed tuning can be optimal. In fact, for aesthetic reasons (not wanting smoke to appear at the flue) the tuning was often air rich, causing air to be heated and passed through the burner unnecessarily, leading to a direct loss of energy. An automated system analysing the combustion gasses and automatically adjusting the fuel/air mix can bring the efficiency very close to theoretical levels.

As the automation takes over more and more functions from the human operator, it also takes on more responsibility. A classic example of an automated system failing is the Therac-25. This was a radiation therapy machine which, due to software errors, caused the deaths of several patients and injured more during the 1980s. The Therac-25 design was derived from a manual system, which had checks and interlocks, and was subsequently computerised. Software bugs were the primary cause of the radiation overdoses \cite{therac}. Any new safety critical analysis methodology should be able to model software, electrical and hardware faults using a common notation. Ideally the tool should be automated, so that it can seamlessly analyse the entire system and apply rigorous checking to ensure that no fault conditions are missed.
% http://en.wikipedia.org/wiki/Autopilot

\paragraph{Importance of self checking}

To take the example of an aircraft autopilot: simple early devices\footnote{Simple aircraft autopilots were in service from the 1920s.} prevented the aircraft straying from a compass bearing and kept it flying straight and level. Were they to fail, the pilot would notice quite quickly and resume manual control of the bearing. Modern autopilots control all aspects of flight, including the engines and the take~off and landing phases. These automated systems do not have the common sense of a human pilot, and if fed incorrect sensory information can make horrendous mistakes. This means that simply reading sensors and applying control corrections cannot be enough: checking for error conditions must also be incorporated. The system could also develop an internal fault, and must be able to recognise and cope with this.
\begin{figure}[h]
\centering
\includegraphics[width=300pt,bb=0 0 678 690,keepaspectratio=true]{introduction/mv_opamp_circuit.png}
% mv_opamp_circuit.png: 678x690 pixel, 72dpi, 23.92x24.34 cm, bb=0 0 678 690
\caption{Milli-volt amplifier with added safety resistor}
\label{fig:millivolt}
\end{figure}

\paragraph{Component added to detect errors}

The op-amp in the circuit in figure \ref{fig:millivolt} supplies a gain of $\approx 180$\footnote{Applying the formula for non-inverting op-amp gain \cite{aoe}: $\frac{150 \times 10^3}{820} + 1 \approx 184$.}. The safety case here is that any amplified signal between 0.5 and 4 volts at the ADC will be considered in range. This means that input signals between approximately 3mV and 21mV can be correctly amplified and measured\footnote{This would be a typical thermocouple amplifier circuit, where milli-volt signals are produced by the Seebeck effect \cite{aoe}.}. Should the sensor become disconnected, the input will drift up due to the safety resistor $R18$. This will cause the op-amp to supply its maximum voltage, telling the system the sensor reading is invalid. Should the sensor become shorted, the input will fall below 3mV and the op-amp will supply a voltage below 0.5V. Note that the sensor breaking and becoming open circuit, or becoming disconnected, is the `raison d'être' of this safety addition: this circuit would typically be used to amplify a thermocouple, which typically fails by going open circuit. The addition {\em does}, however, detect several other failure modes of this circuit, and a full analysis is given in appendix \ref{mvamp}.
% Note C14 shorting is potentially v dangerous could lead to a high output by the opamp being seen as a
% low temperature.

\paragraph{Self Checking}

This introduces a level of self checking into the system. Admittedly this covers the simplest failure mode scenario (that the sensor is not wired correctly or has become disconnected), but the safety resistor has a useful side effect: it also checks for some internal errors that could occur in this circuit. Should the input resistor $R22$ go OPEN, this will be detected. Should the gain resistors $R30$ or $R26$ go OPEN or SHORT, a fault condition will be detected.

\paragraph{Not rigorous, but tested by time}

This is a typical example of an industry standard circuit that has been thought through and, in practice, works and detects most failure modes. But it is not rigorous: it does not take into account every failure mode of every component in it. It does, however, lead on to an important concept: the three main states of a safety critical system.

\paragraph{Working, safe fault mode, dangerous fault mode}

A safety critical system may be said to have three distinct overall states: operating normally, operating in a safe mode with a fault, and operating dangerously with a fault. The main role of the designers of safety critical equipment should be to reduce the possibility of this last condition.
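As a numeric sanity check on the figures quoted in the amplifier example above, the following short script (a sketch only; the component values are those read off figure \ref{fig:millivolt}) computes the amplifier gain and the input window corresponding to the 0.5--4V ADC range.

\begin{verbatim}
# Sanity check of the milli-volt amplifier safety window (illustrative).

R_FEEDBACK = 150e3   # feedback resistor, ohms (150k)
R_GAIN = 820.0       # gain-setting resistor, ohms

gain = R_FEEDBACK / R_GAIN + 1          # non-inverting gain: ~183.9
adc_low, adc_high = 0.5, 4.0            # valid ADC window, volts

# Input range that maps into the valid ADC window:
v_in_low = adc_low / gain               # ~2.7 mV
v_in_high = adc_high / gain             # ~21.8 mV
print(f"gain = {gain:.1f}")
print(f"valid input window: {v_in_low*1e3:.1f} mV to {v_in_high*1e3:.1f} mV")

# Out-of-window readings are interpreted as faults:
#   open/disconnected sensor -> input pulled high by R18 -> ADC reads > 4.0 V
#   shorted sensor           -> input near 0 mV          -> ADC reads < 0.5 V
\end{verbatim}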
\section{Motivation for developing a formal methodology}

A feature of many safety critical system specifications, including EN298 and EN230 \cite{EN298}\cite{EN230}, is the demand, at the very least, that no single failure of hardware or software may create an unsafe condition in operational plant. Further to this, a second fault introduced must not cause an unsafe state due to the combination of both faults.
\vskip 0.3cm
This sounds like an entirely reasonable requirement. But to rigorously check the effect a particular component fault has on the system, we would have to check its effect on all other components. Should a diode in the power supply fail in a particular way, perhaps by introducing a ripple voltage, we would have to look at all components in the system to see how they would be affected.
%However consider a typical
%small system with perhaps 1000 components each
%with an average of say 5 failure modes.
Thus, to ensure complete coverage, the effects of each failure mode must be applied to all the other components: each component must be checked against the failure modes of all other components in the system. Mathematically, with components denoted by $c$ and failure modes by $Fm$, and writing $Fm_{a}$ for a failure mode of component $c_{a}$:

\equation
\label{crossprodsingle}
checks = \{ \; (Fm_{a},c_{b}) \; \mid \; c_{a} \neq c_{b} \; \}
\endequation

Where demands are made for resilience against two simultaneous failures, this effectively squares the number of checks to make:

\equation
\label{crossproddouble}
doublechecks = \{ \; (Fm_{a},Fm_{b},c) \; \mid \; c_{a} \neq c_{b} \; \wedge \; Fm_{a} \neq Fm_{b} \; \}
\endequation

If we consider a system which has a total of $N$ failure modes (see equation \ref{crossprodsingle}), this would mean checking a maximum of

\equation
NumberOfChecks = \frac{N(N-1)}{2}
\endequation

individual component failures for their effects on the other components. For a small system with, say, 1000 failure modes this would demand a potential of nearly 500,000 checks for any automated checking process.
\vskip 0.3cm
European legislation \cite{EN298} directs that a system must be able to react to two component failures without going into a dangerous state.
\vskip 0.3cm
This raises an interesting problem from the point of view of formal modelling. Here we have a binary cross product of all failure modes (see equation \ref{crossproddouble}), which increases the number of checks greatly. The binary cross product yields $(N^{2}-N)/2$ pairs, each of which has to be checked against the remaining $N-2$ components:

\equation
\label{numberofchecks}
NumberOfChecks = \frac{(N^{2} - N)(N - 2)}{2}
\endequation

Thus for a 1000 failure mode system, roughly half a billion possible checks would be required for the double simultaneous failure scenario. This astronomical number of potential combinations has, until now, made formal analysis of this type of system impractical. Fault simulators %\cite{sim}
are commonly used for the gas certification process, and manually checking this number of fault combinations is in practice impossible. A technique of modularising, or breaking down the problem, is clearly necessary.
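These counts are easily reproduced. The short script below (illustrative only) evaluates the single and double failure check bounds given in the equations above for a hypothetical system of $N = 1000$ failure modes.

\begin{verbatim}
# Upper bounds on FMEA cross-checks for N failure modes (illustrative).

def single_failure_checks(n: int) -> int:
    """Each failure mode checked against every other: N(N-1)/2."""
    return n * (n - 1) // 2

def double_failure_checks(n: int) -> int:
    """Each pair of failure modes checked against the remaining
    components: (N^2 - N)(N - 2)/2."""
    return (n * n - n) * (n - 2) // 2

N = 1000
print(single_failure_checks(N))   # 499500     (~0.5 million)
print(double_failure_checks(N))   # 498501000  (~0.5 billion)
\end{verbatim}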
\section{Challenger Disaster}

One question that anyone developing a safety critical analysis or design tool would do well to answer is how the methodology would cope with known previous disasters. The Challenger disaster is a good example, and was well documented and investigated. The problem lay in a seal that had a specified operating temperature range; on the day of the launch the temperature of this seal was out of that range. A bottom-up safety approach would have revealed this as a fault. The fault tree analysis (FTA) in use by NASA and the US Nuclear Regulatory Commission allows for environmental considerations such as temperature \cite{NASA}\cite{NUK}, but because of the top down nature of the FTA technique, the safety designer must be aware of the environmental constraints of all component parts in order to use it correctly. This element of FTA is discussed in \ref{surveysc}.

\section{Therac 25}
%% Here need more detail of what therac 25 was and roughly how it failed
%% with refs to nancy
%% and then highlight the fact that the safety analysis did not integrate software and hardware domains.

\section{Problems with Natural Language}

Written natural language descriptions are not only prone to ambiguity and misinterpretation; it is also impossible to apply mathematical checking to them. A mathematical model, on the other hand, can be checked for obvious faults, such as tautologies and contradictions, and intermediate results can also be extracted and checked. Mathematical modelling of systems is not new: the Z language, for instance, has been used to model systems \cite{ince}. However, such notations are not widely understood or studied, even in engineering and scientific circles. Graphical techniques for representing the mathematics for specifying systems, developed at Brighton and Kent universities, have been used and extended by this author to create a methodology for modelling complex safety critical systems using diagrams. This project uses a modified form of Euler diagram to represent propositional logic.
%The propositional logic is used to analyse system components.

\section{Determining Component Failure Modes}

\subsection{Electrical}

Generic component failure modes for common electrical parts can be found in MIL1991. Most modern electrical components have associated data sheets; usually these do not explicitly list failure modes.
% watch out for log axis in graphs !

\subsection{Mechanical}
% Find refs

\subsection{Software}

Software must run on a microprocessor or microcontroller, and these devices have a known set of failure modes. The most common of these are RAM and ROM failures, but bugs in particular machine instructions can also exist; these can be checked for periodically. Software bugs themselves are unpredictable. However, there are techniques to validate software. These include monitoring program timings (with watchdogs and internal checking) and applying validation checks (such as independent functions to validate correct operation); a small illustrative sketch of these patterns closes this section.

\subsection{Environmentally determined failures}

Some systems and components are guaranteed to work within certain environmental constraints, temperature being the most typical. Very often what happens to the system outside that range is not defined.
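The software validation techniques mentioned above can be sketched minimally as follows. The names, limits and timings here are hypothetical, and a real controller would use a hardware watchdog peripheral rather than this software stand-in.

\begin{verbatim}
# Illustrative self-checking patterns: a timing watchdog and an
# independent validation function (hypothetical names and limits).
import time

class Watchdog:
    """Fail-safe if the main loop stops being serviced in time."""
    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s
        self.last_kick = time.monotonic()

    def kick(self):
        self.last_kick = time.monotonic()

    def expired(self) -> bool:
        return time.monotonic() - self.last_kick > self.timeout_s

def validate_temperature(raw_mv: float) -> bool:
    """Independent range check: readings outside the amplifier's valid
    window (see the milli-volt example earlier) indicate a fault."""
    return 3.0 <= raw_mv <= 21.0

watchdog = Watchdog(timeout_s=0.5)
raw_mv = 12.0                      # a plausible in-range reading
if watchdog.expired() or not validate_temperature(raw_mv):
    print("fault detected: enter safe state")  # e.g. close the fuel valve
else:
    watchdog.kick()                # loop serviced in time; keep running
\end{verbatim}

\section{Project Goals}

\begin{itemize}
\item To create a bottom-up FMEA technique that permits a connected hierarchy to be built representing the fault behaviour of a system.
\item To create a procedure in which no component failure mode can be accidentally ignored.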
\item To create a user friendly, formal, common visual notation to represent fault modes in software, electronic and mechanical sub-systems.
\item To formally define this visual language in concrete and abstract domains.
\item To prove that the derived~components used to build the hierarchies provide traceable fault handling from component level to the highest abstract system `top level'.
\item To formally define the hierarchies and the procedure for building them.
\item To produce a software tool to aid in the drawing of diagrams and to ensure that all fault modes are addressed.
\item To provide a data model that can be used as a source for deterministic and probabilistic failure mode analysis reports.
\item To allow the possibility of MTTF calculation for statistical reliability/safety calculations.
\end{itemize}