Mike Gearon Posted April 23, 2020

"If for example you have a control system that keeps the altitude constant and uses a human who can only see an altimeter, nothing from outside, then you have to put 0.5 as the probability of his failure." I'm not seeing this. Condition 1: an automated altitude control has a mathematical probability of failure of almost nothing. Say a 0.005% chance of failure, just to put an arbitrary figure on it. Condition 2: the altimeter has failed. A) The pilot has a failure rate that's tiny, based on looking out the window and keeping the plane clear of terrain until landing time. Even then there's a good chance. For instance, the wing spar sights the runway halfway up at 1,000 ft AGL downwind. B) The condition changes when in cloud: near 100% failure unless the cloud clears. I get what you're saying. The example bothers me.
facthunter Posted April 23, 2020

I can cite an example of the application of this line of thinking. Above FL 310, if you have an autopilot failure you require extra vertical separation from other cleared traffic, and you must notify ATC and ask for, and be given, a clearance complying with this condition. It's accepted that the human may have extra difficulty keeping the plane within the normal tolerance of +/- 200 ft. Having had to do it, I can attest to the difficulty, especially if people are walking about the cabin or turbulence exists (when they wouldn't be walking about the cabin). It doesn't mean the human has failed and a hazardous condition exists; you just change the stipulated parameters. Earlier autopilots couldn't handle turbulence, and the real pilot had to disconnect the autopilot and fly the plane manually. Some autoland crosswind limits were reduced below the manually performed ones. Later equipment is better, and perhaps some pilots are worse, especially after long-haul flights where the company MANUAL requires an autopilot approach as the normal technique. Autopilot approaches use two autopilots that compare with each other, and the system has to be proven at short intervals by being exercised. Nev
old man emu (Author) Posted April 23, 2020

Everything will fail at some time after it has been made. Even God's products fail. The way I read Geoff's post is that you accept that a man-made object will fail sometime. You then determine the probability of each component of the whole object failing. Let's say we have three components in an assembly: say the assembly is a shaft on which there is a bearing, a gear, and a split pin which holds the gear in position on the shaft. Let's say that the bearing is likely to fail after 1,000 hours; the gear after 10,000 hours; and the split pin after 500 hours. The probabilities of failure are: bearing 1/1000 (0.001); gear 1/10,000 (0.0001); and split pin 1/500 (0.002). It's clear that the condition of the split pin should be checked every 500 hours (or maybe every 100 hours to err on the side of caution); the bearing every 1,000 hours; and you'd probably never check the gear during the assembly's service life. But what are the chances of all three objects failing at the same time? The probability of failure of one of the objects is independent of the other two (although the failure of the split pin could lead to the subsequent failure of the gear). What's the probability of all three components failing at the same instant? Multiply the individual probabilities of the three events together to obtain the combined probability: 0.001 x 0.0001 x 0.002 = 0.0000000002 = 2x10^-10. However, what is the probability that the split pin will fail, causing the gear to slip and then the bearing to fail? There is only one way that this sequence can happen. Since there are three items, the probability that the split pin goes first is 1/3. That leaves two things to fail, the gear and the bearing, so the probability that the gear goes before the bearing is 1/2. Finally, the probability that the bearing goes last is 1, because there is nothing else left to go.
Multiply those ordering probabilities together: 1/3 x 1/2 x 1 = 1/6, or about 0.167. So the probability of all three components failing at the same time, and in that particular order, is the combined failure probability multiplied by the ordering probability: 2x10^-10 x 1/6 ≈ 3.3x10^-11.
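The arithmetic above can be checked with a few lines of Python. This is a sketch only: the variable names are mine, and the failure figures are the arbitrary ones from the post, not real component data.

```python
# Quick check of the probability arithmetic above (illustrative figures).
p_bearing = 1 / 1000    # 0.001
p_gear = 1 / 10_000     # 0.0001
p_pin = 1 / 500         # 0.002

# All three independent failures occurring together:
p_all = p_bearing * p_gear * p_pin          # 2e-10

# Probability the failures happen in the specific order
# pin -> gear -> bearing, treating each of the 3! orderings as equally likely:
p_order = (1 / 3) * (1 / 2) * 1             # 1/6, about 0.167

# Combined: all three fail, and in that particular order.
p_sequence = p_all * p_order                # about 3.3e-11
print(p_all, p_order, p_sequence)
```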
Flightrite Posted April 23, 2020

Oh, I've got a headache now. Don't think I'll ever step into another plane again!
kgwilson Posted April 23, 2020

Change the split pin for a circlip that fails in 1,000 hours and replace it every 500 hours, check the serviceability of the other components every time, and replace them at their MTTF (mean time to failure).
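kgwilson's replace-early rule can be sketched in a few lines of Python. The parts table, the safety factor, and the function name are my illustrative assumptions; only the "1,000-hour circlip replaced every 500 hours" figure comes from the post.

```python
# Sketch: schedule replacement at a fraction of each part's mean time to
# failure (MTTF), mirroring "a circlip that fails in 1000 hours, replaced
# every 500 hours". All figures are illustrative.
MTTF_HOURS = {"circlip": 1000, "bearing": 1000, "gear": 10_000}
SAFETY_FACTOR = 0.5  # replace well before the mean time to failure

def replacement_interval(part: str) -> float:
    """Hours between scheduled replacements for a given part."""
    return MTTF_HOURS[part] * SAFETY_FACTOR

for part in MTTF_HOURS:
    print(f"{part}: replace every {replacement_interval(part):.0f} h")
```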
facthunter Posted April 23, 2020

With poorly made articles you get a wide variation in actual service life. An effective quality-control process should ensure uniformity of the product, dimensionally and in all other aspects that affect performance. Using wood as an example, it's very hard to get a consistent guarantee of the material's performance, so a lot of people don't like designing or working with it in aircraft, and a lot of test pieces have to be provided to back up the structural calculations as the build progresses. Nev
Geoff_H Posted April 23, 2020

Thanks "emu", you have clarified the science better than I ever could. I am at the bottom end of the spectrum, so I am not so good at explaining things.
Bruce Tuncks Posted April 23, 2020

I liked your arithmetic logic, OME, but it does contain the fallacy that new parts are reliable until age weakens them. While this is intuitive, it is contrary to reliability-centred maintenance theory.
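Bruce's point is the "bathtub curve" idea from reliability-centred maintenance: many parts are actually most likely to fail when new. One common way to model this is a Weibull hazard rate; here is a sketch, with parameter values of my own choosing rather than real component data.

```python
# Weibull hazard rate h(t): with shape < 1 the failure rate is highest when
# the part is NEW (infant mortality); with shape > 1 it rises with age
# (classic wear-out). Parameter values are illustrative only.
def weibull_hazard(t: float, shape: float, scale: float) -> float:
    """Instantaneous failure rate for a Weibull(shape, scale) life model."""
    return (shape / scale) * (t / scale) ** (shape - 1)

# Infant mortality: hazard falls as the part survives its burn-in period.
early = [weibull_hazard(t, shape=0.5, scale=1000.0) for t in (10, 100, 1000)]

# Wear-out: hazard rises with age.
worn = [weibull_hazard(t, shape=3.0, scale=1000.0) for t in (10, 100, 1000)]
```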
spacesailor Posted April 23, 2020

I am finding that some, or rather more than I'd like, of the post-production parts are not getting the quality-control testing that the original production-line parts got. spacesailor
old man emu (Author) Posted April 23, 2020

"it does contain the fallacy that new parts are reliable until age weakens them" Quite so, but one has to start somewhere. My workings failed to take into account the variable {age}, but further refinement of the variables would make the calculated results more reliable. I suppose we have all experienced something breaking the first time we used it, and at the same time we might possess something that has worked flawlessly since Pontius was a pilot.
Geoff_H Posted April 23, 2020

The reliability and security of an engineering design are established through a rigorous design review, of which the probability of failure is a part. First we look at the consequences of failure of the piece of equipment and assign a failure-consequence number. A consequence of minor nuisance gets a small number; many deaths plus large financial cost gets the largest. Even a large financial cost on its own gets a high number! Then each subsystem is analysed by a team of independent experts for failure in a dedicated set of ways (over/under temperature, load, height for tanks, and so on), including any other failure mode. Then each failure mode is analysed for ways of preventing the failure, the corrective action. If the corrective action is human intervention, the probability of its failure is taken as 0.5. If the failure is owing to an equipment failure, then maintenance and replacement within failure times are looked at to mitigate the impact of the failure. This is quick and lacks some detail, in trying to keep it brief. When it came to separation of the moving parts of a gas turbine, we used to say it was contained within the casing, until the Qantas A380 engine incident. We don't always get things safer with this system, but it is far safer than no system.
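Geoff's review process can be caricatured in a few lines of Python. Only the 0.5 human-intervention figure comes from his post; the consequence scale, the failure probabilities, and the function name are hypothetical.

```python
# Sketch of a consequence-times-probability risk score. Residual risk is the
# consequence times the probability that the failure occurs AND the
# corrective action fails to catch it.
HUMAN_INTERVENTION_FAILURE_P = 0.5  # from the post: the human backstop fails half the time

def residual_risk(consequence: int, p_failure: float,
                  p_mitigation_fails: float) -> float:
    """Risk score left over after the corrective action is applied."""
    return consequence * p_failure * p_mitigation_fails

# Same failure mode, two different corrective actions (figures illustrative):
auto = residual_risk(consequence=7, p_failure=0.001,
                     p_mitigation_fails=0.0001)  # automated trip
human = residual_risk(consequence=7, p_failure=0.001,
                      p_mitigation_fails=HUMAN_INTERVENTION_FAILURE_P)
print(auto, human)  # the human backstop leaves far more residual risk
```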