\section{Introduction}
%% $$ \int_{0\-}^{\infty} f(t).e^{-s.t}.dt \; | \; s \in \mathcal{C}$$
\paragraph{Scope of thesis}
This thesis describes the application of a common mathematical notation to
describe the design of safety critical systems/PECs.
The initial motivation for this study was to create a system
applicable to industrial burner controllers\footnote{Burner controllers cover the disciplines of
combustion, high pressure steam and hot water, mechanical control, electronics and embedded software.}.
The methodology developed was designed to cope with
both the deterministic\footnote{Deterministic failure mode analysis traces failure mode effects.} and probabilistic approaches\footnote{Probabilistic failure mode analysis tries to determine the probability of given SYSTEM failure modes, and from these
can determine an overall failure rate, in terms of probability of failure on demand, failure in time, or Mean Time To Failure (MTTF).}.
\glossary{name={safety critical},description={A safety critical system is one in which its failure may result in death or serious injury to humans, an environmental catastrophe or severe loss or damage}}
\paragraph{Safety Critical Controllers, knowledge and culture sub-disciplines}
The maturing of the application of the programmable electronic controller (PEC)
\glossary{name={PEC},description={A Programmable Electronic Controller will typically consist of sensors and actuators interfaced electronically, with some firmware/software component in overall control}}
for a wide range of safety critical applications has led to a fragmentation of sub-disciplines
which speak imperfectly to one another.
The main three sub-disciplines are Electrical, Software and Mechanical Engineering.
Additional disciplines are defined by the application area of the PEC. All of these sub-disciplines
are in turn split into even finer units.
The practitioners of these fields tend to view a PEC in different ways.
Discoveries and culture in one field diffuse only slowly into the consciousness of a specialist in another.
Too often, one discipline's unproven assumptions or working methods are treated as firm boundary conditions
for an overlapping field.
For failure mode analysis, a common notation across disciplines is a very desirable and potentially useful
tool.
\paragraph{Safety Assessment/analysis of PEC's}
\glossary{name={safety assessment},description={A critical appraisal, typically following legal or formal guidelines, which will encompass design, and failure effects analysis}}
Anyone responsible for ensuring or proving the safety of a PEC must be able
to understand the process being controlled, the mechanical and electrical
sensors and actuators, and the software. Not only must the
safety engineer understand more than four potential disciplines, he/she
must be able to trace failure modes of components to SYSTEM level failure modes,
and classify these according to their criticality.
\paragraph{Desirability of a common failure mode notation}
Having a common failure mode notation across all disciplines in a project
would allow all the specialists to prepare failure mode
analyses and then bring them together to model the PEC.
\paragraph{Visual form of the notation}
The visual notation developed was initially designed for electronic fault modelling.
This notation deals with failure modes of components using concepts derived from
Euler and Spider diagrams.
However, as the notation dealt with generic failure modes, it was realised that it could be applied to mechanical and software domains as well.
This changed the target for the study slightly to encompass these three domains in a common notation.
\paragraph{PEC's: Legal and Insurance Issues}
In most safety critical industries the operators of plant have to demonstrate a thorough consideration of safety.
There is also usually a differentiation between the manufacturers
and the plant operators.
The manufacturers have to ensure
that the device is adequately safe for use in its operational context.
This usually means conforming to device specific standards~\footnote{In Europe, conformance to European Norms (EN) is a legal requirement
for specific types of controllers, and in the USA conformance to Underwriters Laboratories (UL) standards
is usually a minimum requirement to take out insurance.}, and offering training
of operators.
Operators of safety critical plant are concerned with maintenance and with obligations for
periodic safety checks (both legal and insurance driven).
\section{Background}
I completed an MSc in Software Engineering in 2004 at Brighton University while working for
an engineering firm as an embedded `C' programmer.
The firm specialises in industrial burner controllers.
Industrial Burners are potentially very dangerous industrial plant.
They are generally left running unattended for long periods.
They are subject to stringent safety regulations and
must conform to specific `EN' standards.
For a non-safety critical product one can merely comply with the standards, and `self~certify' by applying a CE mark sticker.
Safety critical products are categorised and listed. These require
certification by an independent and `competent body' recognised under European law.
The certification process typically involves stress testing with repeated operation cycles
over a specified range of temperatures, electrical stress testing with high voltage interference,
power supply voltage ranges with surges and dips, electrostatic discharge testing, and
EMC (Electro-Magnetic Compatibility) testing. A significant part
of this process however, is `static testing'. This involves looking at the design of the products,
from the perspective of environmental stresses, natural input fault conditions\footnote{For instance in a burner controller, the gas supply pressure reducing},
components failing, and the effects on safety this could have.
Some static testing involves checking that the germane `EN' standards have
been complied with\footnote{for instance protection levels of an enclosure for the device, or down rating of electrical components}.
Failure Mode Effects Analysis (FMEA) was also applied. This involved
looking in detail at selected critical sections of the product and proposing
component failure scenarios.
For each failure scenario proposed, either a satisfactory
answer was required, or a counter proposal to change the design to cope with
a theoretical component failure eventuality.
FMEA was time consuming, and being directed by
experts it undoubtedly ironed out many potential safety faults before the product saw
the light of day.
However it was quickly apparent that only a small proportion
of component~failure modes were considered. Also there was no formalism.
The component~failure~modes investigated were not analysed within
any rigorous or mathematically proven framework.
\subsection{ Blanket Risk Reduction Approach }
The suite of tests applied for a certified product amount to a `blanket' approach.
That is to say that by applying electrical, repeated operations, and environmental
stress testing it is hoped that the majority of latent faults are discovered.
The FMEA and static testing only looked at the most obviously safety critical
aspects, and a small minority of the total component base for a product.
Systemic faults, or mistakes, are missed by this form of static testing.
\subsection{Possibility of applying mathematical techniques to FMEA}
My MSc project was a diagram editor for Constraint diagrams.
I wanted to apply constraint diagram techniques to FMEA
and began thinking about how this could be done. One
obvious factor was that a typical safety critical system could
have more than 1000 component parts. Each component
would typically have several failure modes.
Trying to apply a rigorous methodology to an entire product
was going to be impractical. To do this with complete coverage,
each component failure mode would have had to be checked against
the other thousand or so components for influence, and then
a determination of the effects on the system would have had to be
made. Thus millions of checks would have had to be performed, and
as FMEA is an `expert only' time consuming technique, this idea was
obviously impractical. Note that most of the checks made would be redundant:
most components affect only the performance of the few components they are placed to work with
to perform some particular low-level function.
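For a rough sense of scale (assuming, purely for illustration, an average of five failure modes per component):
\equation
1000 \;\mbox{components} \times 5 \;\mbox{failure modes} \times 999 \;\mbox{other components} \approx 5 \times 10^{6} \;\mbox{checks}.
\endequation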
\paragraph{Top down Approach}
A top down approach has several potential problems.
By its nature it means that at the start of the process
a set of system or top level faults or undesirable outcomes are defined.
It then must break the system down into modules and
decide which of these can contribute to a system level fault mode.
Potentially, failure modes, be they from components or from the interaction
between modules, can be missed. A disturbing example of this
is the 1986 NASA space shuttle disaster, where the analysis missed a fault mode of an `O'
ring. This was made even worse by the fact that the `O' ring had a specified temperature
range, and the probability of this fault occurring was dramatically raised below
that range. This was a known and documented feature of a safety critical component
and it was ignored in the safety analysis.
\paragraph{Bottom-up Approach}
A bottom-up approach looked impractical at first due to the sheer number
of component failure modes in a typical system.
However, were this bottom-up approach to be modular (reducing the order of cross checking), building a hierarchy
of modules rising up until all components are covered, we
could model an entire complex system.
This is the core concept behind this study.
By working from the bottom up, at the lowest level taking the
smallest functional~groups of components
and analysing these, we can obtain a set of failure modes
for the functional~groups. We can then treat these
as `higher level' components and combine them
to form new `functional~groups'.
In this way all failure modes from all components must be at the very least considered.
Also, a hierarchy is formed in which the top level errors arise
naturally from the lower levels of analysis.
Unlike a top~down analysis, we cannot miss a top level fault condition.
\paragraph{Repeated Circuitry Sub-Systems}
All of the safety critical real time systems the author has worked with
have repeated sections of hardware:
for instance self checking digital inputs, analogue inputs, sections of circuitry to
generate {\ft} loops, and micro-processors with secondary watchdog
circuitry.
In other words spending time on analysing these lower level sub-systems
seems worthwhile, since they will be used in many designs, and are often
repeated within a SYSTEM
(and thus the analysis results may be re-used).
In general terms we can describe
these circuitry sub-systems
as collections of components, or smaller sub-systems, that interact to perform a given function.
We can call these collections {\fg}s.
In these `safety critical' circuitry sections, especially ones claiming to
be self-checking, the actual level of safety depends upon not
just the MTTF/reliability of the components, but the
{\fg}'s reaction to a component failure
within the circuit.
That is to say how the circuit section or {\fg}
reacts to component failures within it.
We may find for instance that the circuit reacts to most component failure modes
in ways that allow us to detect that there has been a failure.
Some component failure modes in the {\fg} can lead to serious errors, such as an incorrect reading,
that we cannot immediately detect.
%
If these specific component failures occur we will not know, and will feed incorrect data into our system.
%
Figure \ref{fig:millivolt} shows a typical industrial
circuit to measure and amplify millivolt signals.
It will detect a disconnected milli-volt source (the most common
failure, usually due to wiring faults) and some other internal component failures.
It can however provide an incorrect (slightly low) reading if
one of two resistors fails in particular ways.
% Although statistically unlikely, in a very critical system
% this may have to be considered.
To the author, it seems that paying attention
to the way {\fg}s of components interact and proving
a safety case for them is a very important aspect
of detecting `undetected failures' in safety critical product design.
\paragraph{Multi-disipline} Most safety critical systems are composed of mechanical, electrical and
computing elements. A tragic example of the mechanical and electrical elements
interfacing to a computer is found in the THERAC25 x-ray dosage machine.
With no common notation to integrate the safety analysis between the electrical/mechanical and computing
domains, synchronisation errors occurred that were in some cases fatal.
The interfacing between the hardware and software for the THERAC-25 was not considered
in the design phase.
Neil Storey, in the formal methods chapter of ``Safety Critical Computer Systems'',
describes the different formal languages suitable for hardware and software and
bemoans the fact that no single language is suitable for such a broad range of tasks \cite{sccs}[pp. 287].
\paragraph{Requirements for a rigorous FMEA process}
It was determined that any process to apply
FMEA in a rigorous and complete (in terms of component coverage) way had to be
a bottom~up process to eliminate the possibility of missing component failure modes.
It also had to naturally converge to a failure model of the system.
It had to take potentially thousands of component failure modes and simplify
these into system level errors.
To analyse the large number of component failure modes, and resolve these to perhaps a handful
of system failure modes, would require
a process of modularisation from the bottom~up.
\begin{list}{$*$}{}
\item The analysis process must be `bottom~up'
\item The process must be modular and hierarchical
\item The process must be multi-discipline and must be able to represent hardware, electronics and software
\end{list}
\section{Safety Critical Systems}
\glossary{name={safety critical},description={A safety critical system is one in which its failure may result in death or serious injury to humans, an environmental catastrophe or severe loss or damage}}
%
%How safe is "safe"?
%The word "safety" is too general—it really doesn't mean anything definitive. Therefore, we use terms such as safety-related and safety-critical.
%
%A safety-related device provides or ensures safety. It is required for machines/vehicles, which cause bodily harm or death to human being when they fail. A safe state can be defined (in other words, safety-related). In case of a buzz saw, this could be a motor that seizes all movements immediately. The seizure of movement makes the machine safe at that moment. IEC 61508 defines the likelihood of failures of this mechanism, the Safety Integrity Levels (SIL). SIL 3 is defined as the likelihood of failing less than 10-7% per hour. This is a necessary level of safety integrity for products such as lifts, where several people's lives are endangered. The buzz saw is likely to require SIL 2 only, it endangers just one person.
%
%Safety-critical is a different matter. To understand safety-critical imagine a plane in flight: it is not "safe" to make all movement stop since that would make the plane crash. A safe state for a plane is in the hangar, but this is not an option when you're in flight. Other means of ensuring safety must be found. One method used in maritime applications is the "CANopen flying master" principle, which uses redundancy to prevent failure. For the above example an SIL 4, meaning likelihood of failing less than 10-8% per hour is necessary. This is also true for nuclear power station control systems, among other examples.
%
\subsection{General description of a Safety Critical System}
A safety critical system is one in which lives may depend upon it or
it has the potential to become dangerous\cite{sccs}.
%(/usr/share/texmf-texlive/tex/latex/amsmath/amstext.sty)
%An industrial burner is typical of plant that is potentially dangerous.
%An incorrect air/fuel mixture can be explosive.
%Medical electronics for automatically dispensing drugs or maintaining
%life support are examples of systems that lives depend upon.
\subsection{Two approaches: Probabilistic and Deterministic}
There are two main philosophies applied to safety critical systems certification.
\paragraph{Probabilistic Safety Measures}
One is a generally acceptable number of failures per hour\footnote{The common metric is Failure In Time (FIT) values: failures per ${10}^{9}$
hours of operation.} of operation, or
a given statistical probability of failure on demand.
This is the probabilistic approach and is embodied in the European Standard
EN61508 \cite{en61508} (international standard IEC~61508).
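As an illustrative (and purely hypothetical) example of how such a figure is read: a component with a failure rate of 100 FIT fails on average 100 times per $10^{9}$ hours of operation, i.e.
\equation
\lambda = \frac{100}{10^{9}\;\mbox{hours}} = 10^{-7} \;\mbox{failures per hour}, \qquad MTTF = \frac{1}{\lambda} = 10^{7} \;\mbox{hours}.
\endequation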
\glossary{name={deterministic},description={Deterministic in the context of failure mode analysis, traces the causes of SYSTEM level events to base level component failure modes}}
\glossary{name={probabilistic},description={Probabilistic in the context of failure mode analysis, traces the probability of base level failure modes causing SYSTEM level events/failure modes}}
\fmodegloss
\paragraph{Deterministic Safety Measures}
The second philosophy, applied to application specific standards, is to investigate
components for sub-systems in the critical safety path and to look at component failure modes
and ensure that they cannot cause dangerous faults.
%With the application specific standards detail
%specific to the process are
The simplest deterministic safety measure is to require that no single component failure
mode can cause a dangerous error.
This philosophy was first mentioned in aircraft safety operational research studies
during WWII, where potential single faults (usually mechanical) were traced to
catastrophic failures \cite{boffin}.
EN298, the European gas burner standard, goes further than this
and requires that no combination of two single component faults may cause
a dangerous condition.
%
% \begin{example}
% \label{exa1}
% Test example
% \end{example}
%
% And that is example~\ref{exa1}
\subsection{Overview of regulation of safety Critical systems}
A later chapter deals with this specifically; a brief overview is given here.
\subsubsection{Overview of system analysis philosophies}
\begin{itemize}
\item General safety standards
\item Specific safety standards
\end{itemize}
\subsubsection{Overview of current testing and certification}
A later chapter covers this specifically; an overview is given now.
A modern industrial burner has mechanical, electronic and software
elements, that are all safety critical. That is to say
unhandled failures could create dangerous faults.
%To add to these problems
%Operators are often under pressure to keep them running. An boiler supplying
%heat to a large greenhouse complex could ruin crops
%should it go off-line. Similarly a production line relying on heat or steam
%can be very expensive in production down-time should it fail.
%This places extra responsibility on the burner controller.
%
%
% This needs to become a chapter
%\subsection{Mechanical components}
%describe the mechanical parts - gas valves damper s
%electronic and software
%give a diagram of how it all fits A
%together with a
%\subsection{electronic Components}
%
%\subsection{Software/Firmware Components}
%
%
%\subsection{A high level Fault Hierarchy for an Industrial Burner}
%
%This section shows the component level, leading up higher and higher in the abstraction level
%to the software levels and finally a top level abstract level. If the system has been
%designed correctly no `undetected faults' should be present here.
%
\section{An Outline of the FMMD Technique}
{\fmmdgloss}
%\glossary{name={FMMD},description={Failure Mode Modular De-Composition}}
The FMMD methodology takes a bottom up approach to
the design of an integrated system.
%
Each component is assigned a well defined set of failure modes.
The system under inspection is then searched for functional groups of components that
perform simple well defined tasks.
These functional groups are analysed with respect to the failure modes of the
components.
%
The `functional group', after analysis, has its own set of derived
failure modes.
\fmodegloss
%
The number of derived failure modes will be
less than or equal to the sum of the failure modes of all its components.
%
%
A `derived' set of failure modes, is at a higher abstraction level.
%
Thus we can now treat our `functional group' as a component in its own right,
with its own set of failure~modes. We can create
a `derived component' and assign it the derived failure modes as analysed from the `functional group'.
%
Derived Components may now be used as building blocks, to model the system at
ever higher levels of abstraction, building a hierarchy until the top level is reached.
%
Any unhandled faults will appear at this top level and will be `un-resolved'.
A formal description of this process is dealt with in Chapter \ref{fmmddefinition}.
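To make the shape of this process concrete, the sketch below (in `C') models the hierarchy as a simple data structure. It is purely illustrative: the type and routine names are hypothetical, the analysis step itself (deciding the derived failure modes) is left to the analyst, and the formal treatment is that of Chapter \ref{fmmddefinition}. The example in {\tt main} assumes, for illustration, a potential divider built from two resistors, each with failure modes OPEN and SHORT, yielding derived failure modes HIGH and LOW.
\begin{verbatim}
/* Illustrative sketch of the FMMD data model (hypothetical names).
 * A component, base or derived, carries a set of failure modes.
 * Analysing a functional group of components yields a new `derived
 * component' one abstraction level higher in the hierarchy.        */
#include <stdio.h>

#define MAX_FM      8
#define MAX_MEMBERS 8

struct component {
    const char *name;
    const char *failure_modes[MAX_FM];
    int         n_fm;              /* number of failure modes        */
    int         level;             /* 0 = base part, >0 = derived    */
};

struct functional_group {
    struct component *members[MAX_MEMBERS];
    int               n_members;
};

/* The analysis itself (deciding the derived failure modes) is done
 * by the analyst; this routine merely packages the result as a new
 * component one level above its highest-level member.              */
struct component analyse(const struct functional_group *fg,
                         const char *name,
                         const char **derived_fm, int n_derived)
{
    struct component dc = { name, {0}, 0, 0 };
    int i;
    for (i = 0; i < fg->n_members; i++)
        if (fg->members[i]->level >= dc.level)
            dc.level = fg->members[i]->level + 1;
    for (i = 0; i < n_derived && i < MAX_FM; i++)
        dc.failure_modes[dc.n_fm++] = derived_fm[i];
    return dc;
}

int main(void)
{
    struct component r1 = { "R1", { "OPEN", "SHORT" }, 2, 0 };
    struct component r2 = { "R2", { "OPEN", "SHORT" }, 2, 0 };
    struct functional_group fg = { { &r1, &r2 }, 2 };
    const char *dfm[] = { "HIGH", "LOW" };  /* derived failure modes */
    struct component pd = analyse(&fg, "PotentialDivider", dfm, 2);
    printf("%s: %d derived failure modes, abstraction level %d\n",
           pd.name, pd.n_fm, pd.level);
    return 0;
}
\end{verbatim}
The derived component produced here could itself be placed in a further {\fg}, repeating the process up the hierarchy.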
%
%
%This principally focuses
%on simple control systems for maintaining temperature
%and for industrial burners. It is hoped that a general mathematical
%framework is created that can be applied to other fields of safety critical engineering.
\subsection{Automated Systems and Safety}
Automated systems, as opposed to manual ones, are now the norm
in the home and in industry.
%
Automated systems have long been recognised as being more efficient and
more accurate than a human operator, and the reason for automating a process
is now more likely to be the cost saving due to better efficiency
than the saving of a human operator's salary \ref{burnereffency}.
%
For instance
early automated systems were mechanical, with cams and levers simulating
control functions.
%
A typical control function could be the
fuel air mixture profile curves over the firing range.
%
Because fuels vary slightly in calorific value, and air density changes with the weather, no fixed tuning can be optimal.
In fact for aesthetic reasons (not wanting smoke to appear at the flue)
the tuning was often air rich, causing air to be heated and
unnecessarily passed through the burner, leading to direct loss of energy.
An automated system analysing the combustion gasses and automatically
adjusting the fuel air mix can get the efficiencies very close to theoretical levels.
As the automation takes over more and more functions from the human operator it also takes on more responsibility.
A classic example of an automated system failing is the Therac-25.
This was an X-ray/electron~beam dosage machine that, due to software errors,
caused the deaths of several patients and injured more during the 1980s.
The Therac-25 was derived from a manual system, which had checks and hardware interlocks,
and was subsequently computerised. Software safety interlock problems were the primary causes of the radiation
overdoses \cite{safeware}[App. A].
Any new safety critical analysis methodology should
be able to model software, electrical and hardware faults using
a common notation.
Ideally the tool should be automated so that it can
seamlessly analyse the entire system, and apply
rigorous checking to ensure that no
fault conditions are missed.
% http://en.wikipedia.org/wiki/Autopilot
\paragraph{Importance of self checking}
To take the example of an aircraft autopilot: simple early devices\footnote{Simple aircraft autopilots were in service from the 1920s.}
prevented the aircraft straying from a compass bearing and kept it flying straight and level.
Were they to fail, the pilot would notice quite quickly
and resume manual control of the bearing.
Modern autopilots control all aspects of flight including the engines, take off and landing phases.
Automated systems do not have the
common sense of a human pilot, and if fed incorrect sensory information
they can make horrendous mistakes. This means that simply reading sensors and applying control
corrections cannot be enough.
Checking for error conditions must also be incorporated.
Equipment can also develop internal faults, and strategies
must be in place to recognise and cope with them.
\begin{figure}[h]
\centering
\includegraphics[width=300pt,keepaspectratio=true]{introduction/mv_opamp_circuit.png}
% mv_opamp_circuit.png: 577x479 pixel, 72dpi, 20.35x16.90 cm, bb=0 0 577 479
\caption{Milli-Volt Amplifier with added Safety Resistor (R18)}
\label{fig:millivolt}
\end{figure}
% \begin{figure}[h]
% \centering
% \includegraphics[width=300pt,bb=0 0 678 690,keepaspectratio=true]{introduction/mv_opamp_circuit.png}
% % mv_opamp_circuit.png: 678x690 pixel, 72dpi, 23.92x24.34 cm, bb=0 0 678 690
% \caption{Milli-volt amplifier with added safety Resistor}
% \label{fig:millivolt}
% \end{figure}
%
% %5
% \begin{figure}
% \vskip 7cm
% \special{psfile=introduction/millivoltsensor.ps hoffset=0 voffset=0 hscale=35 vscale=35 }\caption[Milli-Volt Sensor with safety resistor]{
% Milli-Volt Sensor with safety resistor
% \label{fig:millivolt}}
% \end{figure}
\paragraph{Component added to detect errors}
The op-amp in the circuit in figure \ref{fig:millivolt}, supplies a gain of $\approx 184$ \footnote{
applying formula for non-inverting op-amp gain\cite{aoe} $\frac{150 \times 10^3}{820}+ 1 \approx 184$ }.
The safety case here is that
any amplified signal within a range of, say, 0.5 to 4 volts at the ADC will be considered in range.
This means that inputs between approximately 3mV and 21mV
can be correctly amplified and measured.\footnote{This would be a typical thermocouple amplifier circuit where milli-volt signals
are produced by the Seebeck effect\cite{aoe}.}
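As a rough check on these figures, using the quoted gain of $\approx 184$:
\equation
\frac{0.5\;\mbox{V}}{184} \approx 2.7\;\mbox{mV} , \qquad \frac{4.0\;\mbox{V}}{184} \approx 21.7\;\mbox{mV} ,
\endequation
so an ADC reading inside the 0.5 to 4 volt window corresponds to an input of roughly 3mV to 21mV, and readings outside this window are treated as fault conditions.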
Should the sensor become disconnected the input will drift up due to the safety resistor $R18$.
This will cause the opamp to supply its maximum voltage, telling the system the sensor reading is invalid.
Should the sensor become shorted, the input will fall below 3mV and the op-amp will
supply a voltage below 0.5V. Note that the sensor breaking and becoming open circuit, or
becoming disconnected, is the raison d'être of this safety addition.
This circuit would typically be used to amplify a thermocouple, which typically
fails by going open circuit.
It {\em does}
detect several other failure modes of this circuit and a full analysis is given in appendix \ref{mvamp}.
\fmodegloss
% Note C14 shorting is potentially v dangerous could lead to a high output by the opamp being seen as a
% low temperature.
%
\paragraph{Self Checking}
This introduces a level of self checking into the system.
Admittedly this is the simplest failure mode scenario (that the
sensor is not wired correctly or has become disconnected).
%
This safety resistor has a side effect: it also checks for internal errors
that could occur in this circuit.
Should the input resistor $R22$ go OPEN this would be detected.
Should the gain resistors $R30$ or $R26$ go OPEN or SHORT a fault condition will be detected.
%
\paragraph{Not rigorous, but tested by time}
This is a typical example of an industry standard circuit that has been
thought through, and in practice works and detects most commonly encountered failure modes.
But it is not rigorous: it does not take into account every failure
mode of every component in it.
However it does lead on to an important concept: the three main states of a safety critical system.
%
\paragraph{Working, safe fault mode, dangerous fault mode}
A safety critical system may be said to have three distinct
overall states:
operating normally, operating in a safe mode with a fault, and operating
dangerously with a fault.
%
The main role of the system designers of safety critical equipment should be
to reduce the possibility of this last condition.
% Software plays a critical role in almost every aspect facet of our daily lives - from , to driving our cars, to working in our offices.
% Some of these systems are safety-critical.
% Failure of software could cause catastrophic consequences for human life.
% Imagine the antilock brake system (ABS) in your car.
% A software failure here could render the ABS inoperable at a time when you need it most.
% For these types of safety-critical systems, having guidelines that define processes and
% objectives for the creation of software that focus on software quality, or the ability
% to use software that has been developed under this scrutiny, has tremendous value
% for developers of safety-critical systems.
\section{Motivation for developing a formal methodology}
A feature of many safety critical system specifications,
including EN298 and EN230 \cite{en298}\cite{en230},
is to demand,
at the very least, that single failures of hardware
or software cannot
create an unsafe condition in operational plant. Further to this,
a second fault introduced must not cause an unsafe state due
to the combination of both faults.
\vskip 0.3cm
This sounds like an entirely reasonable requirement. But to rigorously
check the effect a particular component fault has on the system,
we could check its effect on all other components.
Should a diode in the power supply fail in a particular way, by perhaps
introducing a ripple voltage, we would have to look at all components
in the system to see how they will be affected.
%However consider a typical
%small system with perhaps 1000 components each
%with an average of say 5 failure modes.
Thus, to ensure complete coverage, each of the effects of
the failure modes must be applied
to all the other components.
Each component must be checked against the
failure modes of all other components in the system.
Mathematically, with components denoted by $c$ and failure modes by $Fm$:
\equation
\label{crossprodsingle}
checks = \{ \; (Fm,c) \; \mid \; \hat{c} \; \neq \; c \; \}
\endequation
where $\hat{c}$ denotes the component to which the failure mode $Fm$ belongs.
Where demands
are made for resilience against two
simultaneous failures this effectively squares the number of checks to make.
\equation
\label{crossproddouble}
doublechecks = \{ \; (Fm_{1},Fm_{2},c) \; \mid \; c_{1} \; \neq \; c_{2} \; \wedge \; Fm_{1} \neq Fm_{2} \; \}
\endequation
where $Fm_{1}$ and $Fm_{2}$ belong to components $c_{1}$ and $c_{2}$ respectively.
If we consider a system which has a total of
$N$ failure modes (see equation \ref{crossprodsingle}) this would mean checking a maximum of
\equation
NumberOfChecks = \frac{N ( N-1 )}{2}
\endequation
for individual component failures and their effects on other components when they fail.
For a very small system with say 1000 failure modes this would demand a potential of 500,000
checks for any automated checking process.
\vskip 0.3cm
European legislation\cite{en298} directs that a system must be able to react to two component failures
and not go into a dangerous state.
\vskip 0.3cm
This raises an interesting problem from the point of view of formal modelling. Here we have a binary cross product of all components
(see equation \ref{crossproddouble}).
This increases the number of checks greatly: the binary cross product yields $ (N^{2} - N)/2 $ failure mode pairs, and each pair has to be checked against the remaining
$(N-2)$ components:
\equation
\label{numberofchecks}
NumberOfchecks = \frac{(N^{2} - N) ( N - 2)}{2}
\endequation
Thus for a 1000 failure mode system, roughly half a billion possible checks would be required for the double simultaneous failure scenario. This astronomical number of potential combinations has made formal analysis of this
type of system, up until now, impractical. Fault simulators %\cite{sim}
are commonly used for the gas certification process; to
manually check this number of combinations of faults is in practice impossible.
A technique of modularising, or breaking down, the problem is clearly necessary.
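The figures quoted above are easily confirmed; the short program below (purely illustrative) evaluates the single and double failure check counts for a system with $N=1000$ failure modes.
\begin{verbatim}
/* Quick numerical check of the check-count formulae quoted above,
 * for a system with N = 1000 failure modes (illustrative only).    */
#include <stdio.h>

int main(void)
{
    long long N = 1000;
    long long single  = N * (N - 1) / 2;            /* N(N-1)/2        */
    long long doubles = (N * N - N) * (N - 2) / 2;  /* (N^2-N)(N-2)/2  */
    printf("single failure checks: %lld\n", single);   /* 499500       */
    printf("double failure checks: %lld\n", doubles);  /* 498501000    */
    return 0;
}
\end{verbatim}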
\section{Examples of disasters caused by designs \\ missing component errors}
\subsection{Challenger Disaster}
One question that anyone developing a safety critical analysis design tool
would do well to answer is how the methodology would cope with known previous disasters.
The Challenger disaster is a good example, and was well documented and investigated.
The problem lay in a seal that had a specified operating temperature range.
On the day of the launch the temperature of this seal was out of range.
A bottom up safety approach would have revealed this as a fault.
The FTA in use by NASA and the US Nuclear Regulatory Commission
allows for environmental considerations such as temperature\cite{nasafta}\cite{nucfta},
but because of the top down nature of the FTA technique, the safety designer must be aware of
the environmental constraints of all component parts in order to use this correctly.
This element of FTA is discussed in \ref{surveysc}.
\subsection{Therac 25}
The Therac-25 was a computer controlled radiation therapy machine, which
overdosed six people between 1985 and 1987.
An earlier machine (the Therac-20) used the same software but kept the
hardware interlocks from the previous manually operated machines. The hardware interlocks
on the Therac-20 functioned correctly and the faulty software in it caused no accidents.
A safety study for the device, using Fault Tree Analysis % \cite{nucfta}
carried out in 1983,
excluded the software \cite{safeware}[App. A].
\section{Practical problems in using formal methods}
%% Here need more detail of what therac 25 was and roughly how it failed
%% with refs to nancy
%% and then highlight the fact that the safety analysis did not integrate software and hardware domains.
\subsection{Problems with Natural Language}
Written natural language descriptions are not only ambiguous and easy to misinterpret; it
is also not possible to apply mathematical checking to them.
A mathematical model, on the other hand, can be checked for
obvious faults, such as tautologies and contradictions, and
intermediate results can also be extracted and checked.
Mathematical modelling of systems is not new; the Z language
has been used to model physical and software systems\cite{ince}. However this is not widely
understood or studied even in engineering and scientific circles.
Graphical techniques for representing the mathematics for
specifying systems, developed at Brighton and Kent universities,
have been used and extended by this author to create a methodology
for modelling complex safety critical systems, using diagrams.
This project uses a modified form of Euler diagram to represent propositional logic.
%The propositional logic is used to analyse system components.
\section{Determining Component Failure Modes}
\subsection{Electrical}
Generic component failure modes for common electrical parts can be found in MIL1991.
Most modern electrical components have associated data sheets. Usually these do not explicitly list
failure modes.
% watch out for log axis in graphs !
\subsection{Mechanical}
Find refs
\subsection{Software}
Software must run on a microprocessor/microcontroller, and these devices have a known set of failure modes.
The most common of these are RAM and ROM failures, but bugs in particular machine instructions
can also exist.
These can be checked for periodically.
Software bugs are unpredictable.
However there are techniques to validate software.
These include monitoring the program timings (with watchdogs and internal checking)
and applying validation checks (such as independent functions to validate correct operation).
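As an illustration of the kind of periodic check referred to above, the sketch below shows a simple additive ROM checksum test. It is illustrative only: the addresses, the reference checksum symbol and the {\tt enter\_safe\_state} routine are hypothetical, and a real design would normally use a CRC.
\begin{verbatim}
/* Illustrative sketch only: a periodic ROM integrity check.
 * Addresses, the reference checksum and enter_safe_state() are
 * hypothetical; a real design would normally use a CRC.            */
#include <stdint.h>

#define ROM_START  ((const uint8_t *)0x08000000)  /* assumed code start  */
#define ROM_LENGTH 0x10000UL                      /* assumed code length */

extern const uint16_t rom_reference_checksum;  /* stored at build time   */
extern void enter_safe_state(void);            /* shutdown/lockout       */

static uint16_t rom_checksum(void)
{
    uint16_t sum = 0;
    uint32_t i;
    for (i = 0; i < ROM_LENGTH; i++)
        sum = (uint16_t)(sum + ROM_START[i]);
    return sum;
}

/* Called periodically from the main loop (the watchdog guards the
 * loop timing); on mismatch the system is forced to its safe state. */
void periodic_rom_check(void)
{
    if (rom_checksum() != rom_reference_checksum)
        enter_safe_state();
}
\end{verbatim}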
\subsection{Environmentally determined failures}
Some systems and components are guaranteed to work within certain environmental constraints,
temperature being the most typical. Very often what happens to the system outside that range is not defined.
\section{Project Goals}
\begin{itemize}
\item To create a Bottom up FMEA technique that permits a connected hierarchy to be
built representing the fault behaviour of a system.
\item To create a procedure where no component failure mode can be accidentally ignored.
\item To create a user friendly formal common visual notation to represent fault modes
in Software, Electronic and Mechanical sub-systems.
\item To formally define this visual language in concrete and abstract domains.
\item To prove that the derived~components used to build the hierarchies
provide traceable fault handling from component level to the
highest abstract system `top level'.
\item To formally define the hierarchies and the procedure for building them.
\item To produce a software tool to aid in the drawing of diagrams and
ensuring that all fault modes are addressed.
\item To provide a data model that can be used as a source for deterministic and probabilistic failure mode analysis reports.
\item To allow the possibility of MTTF calculation for statistical
reliability/safety calculations.
\end{itemize}