Milli-volt sensor with added safety resistor: better description and diagram

This commit is contained in:
Robin Clark 2010-06-28 21:10:44 +01:00
parent 33231e4405
commit ebc4bd19bf
2 changed files with 53 additions and 30 deletions


@@ -325,41 +325,64 @@ Checking for error conditions must also be incorporated.
It could also develop an internal fault, and must be able to recognise and cope with this.
\begin{figure}[h]
\centering
\includegraphics[width=300pt,bb=0 0 678 690,keepaspectratio=true]{introduction/mv_opamp_circuit.png}
% mv_opamp_circuit.png: 678x690 pixel, 72dpi, 23.92x24.34 cm, bb=0 0 678 690
\caption{Milli-volt amplifier with added safety resistor}
\label{fig:millivolt}
\end{figure}
%
% %5
% \begin{figure}
% \vskip 7cm
% \special{psfile=introduction/millivoltsensor.ps hoffset=0 voffset=0 hscale=35 vscale=35 }\caption[Milli-Volt Sensor with safety resistor]{
% Milli-Volt Sensor with safety resistor
% \label{fig:millivolt}}
% \end{figure}
\paragraph{Component added to detect errors}
The op-amp in the circuit in figure \ref{fig:millivolt} supplies a gain of $\approx 184$ \footnote{
applying the formula for non-inverting op-amp gain\cite{aoe}: $\frac{150 \times 10^3}{820} + 1 \approx 184$ }.
The safety case here is that
any amplified signal between 0.5 and 4 volts on the ADC will be considered in range.
This means that inputs between approximately 3mV and 22mV
can be correctly amplified and measured.\footnote{This would be a typical thermocouple amplifier circuit, where milli-volt signals
are produced by the Seebeck effect\cite{aoe}.}
Should the sensor become disconnected, the input will drift up due to the safety resistor $R18$.
This will cause the op-amp to supply its maximum voltage, telling the system the sensor reading is invalid.
Should the sensor become shorted, the input will fall below approximately 3mV and the op-amp will
supply a voltage below 0.5V.
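The in-range window can be made explicit with a short worked calculation (a sketch, assuming the footnote's resistor values of $150k\Omega$ and $820\Omega$ belong to the gain resistors $R30$ and $R26$):
%
\[ G = \frac{R30}{R26} + 1 = \frac{150 \times 10^{3}}{820} + 1 \approx 184 \]
\[ V_{min} = \frac{0.5V}{184} \approx 2.7mV \qquad V_{max} = \frac{4.0V}{184} \approx 21.7mV \]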
%
\paragraph{Self Checking}
This introduces a level of self checking into the system.
We need to be able to react not only to errors in the process itself,
but also to validate the control system and look for internal errors within it.
Admittedly this is the simplest failure mode scenario (that the
sensor is not wired correctly or has become disconnected).
%
This safety resistor has a side effect: it also checks for some internal errors
that could occur in this circuit.
Should the input resistor $R22$ go OPEN, this will be detected.
Should the gain resistors $R30$ or $R26$ go OPEN or SHORT, a fault condition will be detected.
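In controller software, the corresponding check might look like the following minimal C sketch (the names and the 0.5V--4V thresholds are illustrative, taken from the window above, not from an actual controller):
%
\begin{verbatim}
/* Hypothetical self-check on the amplified sensor reading.
   Readings outside the 0.5V..4.0V window indicate an open
   (disconnected) or shorted sensor, or a failed gain resistor. */
#define V_MIN 0.5  /* volts: below this => shorted sensor or fault */
#define V_MAX 4.0  /* volts: above this => open sensor or fault    */

typedef enum { READING_OK, READING_FAULT } reading_status;

reading_status check_adc_voltage(double v_adc)
{
    if (v_adc < V_MIN || v_adc > V_MAX)
        return READING_FAULT;  /* reading invalid: enter lockout */
    return READING_OK;
}
\end{verbatim}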
%
\paragraph{Not rigorous, but tested by time}
This is a typical example of an industry standard circuit that has been
thought through, and in practice works and detects most failure modes.
But it is not rigorous: it does not take into account every failure
mode of every component in it.
% To improve productivity, performance, and cost-effectiveness, we are developing more and more safety-critical systems that are under computer control. And centralized computer control is enabling many safety-critical systems (e.g., chemical and pesticide factories) to grow in size, complexity, and potential for catastrophic failure.
% We use software to control our factories and refineries as well as power generation and distribution. We also use software in our transportation systems including airplanes, trains, ships, subways, and even in our family automobiles. Software is also a major component of many medical systems in which safe functioning is critical to the safety of patients and operators alike. Even when the software does not directly control safety-critical hardware, software can provide operators and users with safety-critical data with which they must make safety-critical decisions (e.g., air traffic control or medical information such as blood bank records, organ donor information, and patient medical records). As we have come to rely more on software-intensive systems, we have come to rely more on those systems functioning safely.
% Many accidents are caused by problems with system and software requirements, and “empirical evidence seems to validate the commonly stated hypothesis that the majority of safety problems arise from software requirements and not coding errors” [Leveson1995]. Major accidents often result from rare hazards, whereby a hazard is a combination of conditions that increases the likelihood of accidents causing harm to valuable assets (e.g., people, property, and/or the environment). Most requirements specifications are incomplete in that they do not specify requirements to eliminate these rare hazards or mitigate their consequences. Requirements specifications are also typically incomplete in that they do not specify what needs to happen in exceptional “rainy day” situations or as a response to each possible event in each possible system state although accidents are often caused by the incorrect handling of rare combinations of events and states that were considered to be either impossible or too unlikely to worry about, and were therefore never specified. Even when requirements have been specified for such rare combinations of events and conditions, they may well be ambiguous (an unfortunately common characteristic of requirements in practice), partially incomplete (missing assumptions obvious only to subject matter experts), or incorrect, or inconsistently implemented. Thus, the associated hazards are not eliminated or the resulting harm is not properly mitigated when the associated accidents occur. Ultimately, safety related requirements are important requirements that need to be better engineered.
% The goal of this column is to define safety requirements and clarify how they differ from safety constraints and from functional, data, and interface requirements that happen to be safety critical. I start by defining safety in terms of a powerful quality model and show how quality requirements (including safety requirements) can be specified in terms of the components of this quality model. I will then show how to use the quality model to specify safety requirements. Then, I will define and discuss safety constraints and safety-critical requirements. Finally, I will pose a set of questions regarding the engineering of these three kinds of safety-related requirements for future research and experience to answer.
However, it does lead on to an important concept: the three main overall states of a safety critical system.
%
\paragraph{Working, safe fault mode, dangerous fault mode}
A safety critical system may be said to have three distinct
overall states:
operating normally, operating in a safe mode with a detected fault, and operating
dangerously with an undetected fault.
%
The main role of the system designers of safety critical equipment should be
to reduce the possibility of this last condition.
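These three states might be captured in controller software as a simple enumeration (an illustrative C sketch; the state names are hypothetical):
%
\begin{verbatim}
/* The three overall states of a safety critical system
   described above (illustrative names). */
typedef enum {
    STATE_OPERATING_NORMALLY,   /* no fault present                */
    STATE_SAFE_FAULT_MODE,      /* fault detected, safe lockout    */
    STATE_DANGEROUS_FAULT_MODE  /* undetected fault, still running */
} system_safety_state;
\end{verbatim}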
% Software plays a critical role in almost every aspect facet of our daily lives - from , to driving our cars, to working in our offices.
% Some of these systems are safety-critical.

Binary file not shown.
