Maintenance Analysis and Improvement Tools [part 2]


FMEA Procedure

The basic steps for performing a FMEA are outlined below:

1. Establish the objective of the FMEA and identify analysis team members, who should consist of representatives from key stakeholders, e.g., design, operations, maintenance, and materials. If the FMEA is design-based, the designer should lead this effort.

2. Describe the asset or process and its functions. A good understanding of the asset or process under consideration is important, as this understanding simplifies the analytical process.

3. Create a block diagram of the asset or process. In this diagram, major components or process steps are represented by blocks connected together by lines that indicate how the components or steps are related. The diagram shows the logical relationships of components and establishes a structure around which the FMEA can be developed. Establish a coding system to identify system elements.

4. Create a spreadsheet or a form and list items or components and the functions they provide. The list should be organized in a logical manner according to the subsystem and sub-assemblies, based on the block diagram.

5. Identify which components could fail, how each could fail (failure modes), and why it could fail (possible causes). A failure mode is defined as the manner in which an item or component could potentially fail to meet the design intent.

Examples of potential failure modes include:

• Broken / Fractured

• Corrosion

• Deformation

• Clogged / Contamination

• Excess Vibration

• Electrical Short or Open

• Eroded / Worn

A list of suggested failure modes and causes is shown in FIG. 7.

6. A failure mode in one component can be the cause of a failure mode in another component. Failure modes should be listed for the functions of each component or process step. At this point, each failure mode should be listed whether or not the failure is likely to occur.

7. Identify the causes for each failure mode. A failure cause can be a design weakness, or any operator or maintenance workmanship issue that may result in a failure. The potential causes for each failure mode should be identified and documented. The causes should be listed in technical terms and not in terms of symptoms.

Examples of potential causes include:

• Improper torque applied

• Improper operating conditions

• Corrosion

• Coordination

• Design induced -- wrong part used or wrong control circuit

• Improper alignment

• Excessive loading

• Excessive current /voltage

8. Identify failure mechanisms that will create a random or wear out type of failure event.

9. List the effects of each of the failure modes. A failure effect is defined as the result of a failure mode on the function of the component or process.

Examples of failure effects include:

• Injury to the user-operator

• Creation of a hazardous material leak -- environmental spill

• Loss of all or partial functions

• Creation of unbearable, bad odors

• Degraded performance

• High noise levels

10. Establish a numerical ranking for the severity of the effect. A common industry standard scale uses 1 to represent no effect and 10 to indicate very severe with failure affecting system operation and safety without warning. The intent of the ranking is to help the analyst determine whether a failure would be a minor nuisance or a catastrophic occurrence. This enables the designers and engineers to prioritize the effects of failures and address the critical issues first.

11. Estimate the probability factor for each cause. A numerical weight should be assigned to each cause that indicates how likely that cause is (probability of the cause occurring). A common industry standard scale uses 1 to represent not likely and 10 to indicate inevitable.

12. Estimate the likelihood that the current control measures will detect the cause of the failure at its onset, in time to prevent the failure from happening.

13. Calculate and review Risk Priority Numbers (RPN). The Risk Priority Number is a mathematical product of the numerical Severity, Probability, and Detection ratings: RPN = (Severity) x (Probability) x (Detection). To use the Risk Priority Number (RPN) method to assess risk, the analysis team must:

• Rate the severity of each effect of failure.

• Rate the likelihood of occurrence for each cause of failure.

• Rate the likelihood of prior detection for each cause of failure (e.g., the likelihood of detecting the problem by the operator or during PM inspection before it fails).

The RPN is calculated by obtaining the product of the three ratings. The RPN can then be used to compare risks within the analysis and to prioritize corrective action items.
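The RPN arithmetic described above can be sketched in a few lines of Python. The failure modes and ratings below are hypothetical examples for illustration, not taken from any particular FMEA:

```python
# Sketch: compute Risk Priority Numbers for hypothetical failure modes.
# RPN = Severity x Probability (occurrence) x Detection, each rated 1-10.

failure_modes = [
    # (failure mode, severity, probability, detection) -- invented ratings
    ("Bearing seizure",        9, 4, 6),
    ("Seal leak",              5, 7, 3),
    ("Coupling misalignment",  6, 5, 5),
]

def rpn(severity: int, probability: int, detection: int) -> int:
    """Risk Priority Number: product of the three 1-10 ratings."""
    for rating in (severity, probability, detection):
        assert 1 <= rating <= 10, "each rating must be on the 1-10 scale"
    return severity * probability * detection

# Rank failure modes so the highest-risk items are addressed first.
ranked = sorted(failure_modes, key=lambda fm: rpn(*fm[1:]), reverse=True)
for mode, s, p, d in ranked:
    print(f"{mode}: RPN = {rpn(s, p, d)}")
```

Sorting by RPN puts the highest-risk failure modes at the top of the list, which is how the number is used to prioritize recommended corrective actions.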

14. Identify current practices including PM plan or controls in place.

The controls strive to prevent the failures or detect them for corrective actions before they impact the operations. The designer or engineer uses analysis, monitoring, testing, and other techniques to detect failures. These techniques should be assessed to determine how well they are expected to identify or detect failure modes. The FMEA should then be updated; plans should be made to address those failures and eliminate them. In some FMEA applications, the dollar cost of business loss, which consists of downtime cost, failure fixing cost, and frequency of failure, is also calculated to prioritize the corrective action plan.

15. Determine recommended actions to address potential failures that have a high RPN. These actions may include specific inspections, PdM/CBM tasks, or improved operations; selection of better components or materials; limiting environmental stresses or the operating range; redesign of the item to avoid the failure mode; monitoring mechanisms; changing the frequency of preventive maintenance; and inclusion of back-up systems or redundancy.

16. Assign responsibility and a target completion date for these actions. This makes responsibility clear-cut and facilitates tracking.

17. Track actions taken and re-evaluate risk. After actions have been taken, re-assess the severity, probability, and detection; then review the revised RPNs. Determine if any further actions are required. Update the FMEA as the design, process, or the assessment changes or new information becomes known.

Benefits of FMEA

FMEA helps designers and engineers to improve the reliability of assets and systems to produce quality products. FMEA analysis helps to incorporate reliability and maintainability features into the asset design to eliminate or reduce failures, thereby reducing overall life cycle cost.

Properly performed, FMEA provides several benefits. These include:

• Early identification and elimination of potential asset/process failure modes

• Prioritization of asset /process deficiencies

• Documentation of risk and actions taken to reduce risk

• Minimization of late changes and associated cost

• Improved asset (product) / process reliability and quality

• Reduction of Life Cycle Costs

• Catalyst for teamwork among design, operations, and maintenance

Fault Tree

A fault tree is constructed starting with the final failure. It progressively traces each cause back from the effect it produced. This continues until the trail can be traced back no further. Each result of a cause must clearly flow from its predecessor. If it becomes evident that a step is missing between causes, it’s added in and the need for it is explained.

Once the fault tree is completed and checked for logical flow, the investigating team determines what changes to make to prevent or break the sequence of causes and consequences from occurring again. It’s not necessary to prevent the first, or root cause, from happening. It’s merely necessary to break the chain of events at any point so that the final failure cannot occur. Often the fault tree leads to an initial design problem. In such a case, redesign becomes necessary. Where the fault tree leads to a failure of procedures, it’s necessary either to address the procedural weakness or to install a method to protect against the damage caused by the procedural failure.
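The chain-of-causes logic can be illustrated with a toy fault tree of AND/OR gates over basic events. This is a sketch with invented event names, not a standard library; it shows how breaking the chain at any point prevents the top event:

```python
# Minimal fault-tree evaluation: AND/OR gates over basic events.
# The gate structure and event names are hypothetical, for illustration only.

def AND(*branches):
    return lambda events: all(b(events) for b in branches)

def OR(*branches):
    return lambda events: any(b(events) for b in branches)

def basic(name):
    return lambda events: events[name]

# Top event: pump failure. It occurs only if the seal leaks AND
# (the leak goes undetected OR the operator response fails).
pump_failure = AND(
    basic("seal_leak"),
    OR(basic("leak_undetected"), basic("operator_response_fails")),
)

events = {"seal_leak": True, "leak_undetected": True,
          "operator_response_fails": False}
print(pump_failure(events))   # the top event occurs

# Breaking the chain anywhere is enough to prevent the final failure:
events["leak_undetected"] = False
print(pump_failure(events))   # the top event no longer occurs
```

Forcing any one event in the chain to False, not just the root cause, keeps the top event from occurring, which mirrors the advice above that it is merely necessary to break the chain of events at some point.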

4. Six Sigma and Quality Maintenance Tools

Six Sigma and Quality

Six Sigma is a quality improvement initiative, developed directly from Total Quality Management (TQM). It uses much the same toolset and concepts. The major emphasis of Six Sigma is its focus on reducing process variation to very low levels.

Jack Welch, retired chairman of General Electric, has been Six Sigma's most influential advocate. Companies such as Motorola and Allied Signal have been incubators and proponents of the movement.

Mikel Harry, the principal architect of Six Sigma methodology at Motorola, is one of its most enthusiastic champions.

The name Six Sigma warrants some explanation. Imagine that we make potato chips and weigh bags of potato chips as they come out of a bagging process. The bags are supposed to weigh 200 grams (about 7 ounces), but the actual weight will vary. If bags are overweight, we are giving away chips. If they are underweight, we are taking advantage of our customers. To analyze the process variation, we record the weights, and use a software program to construct a histogram of the distribution.

We hope that the distribution will be centered on 200 grams and that there wouldn't be long tails on either side. If our specification calls for all bags to be more than 190 grams and less than 210 grams, we can draw the specification limits on the histogram.

In plotting the histogram, we found a bag that weighed 188 grams.

Obviously, it’s out of specification. It may cause trouble with our customers if we ship it. What do we do? How many out-of-specification bags do we expect to find if we measure 1000 bags? We need to use statistical tools to predict this. We can find the average weight, or mean weight, of a bag of potato chips from all the bags we have weighed and calculate a standard deviation, which gives us an idea of how much variation there is around the mean. A high standard deviation means that we have a lot of variation in the process. This is where Six Sigma comes in. (The Greek letter sigma (σ) is typically used to symbolize the standard deviation in statistical equations.)

With enough data, we can try to fit a curve to the data, drawing a line that best approximates the mathematical function that really describes what is going on in the process. Let's take a normal curve as an example of one that we might decide to use. If our histogram can be closely described by a normal distribution, then 68% of the bags we measure will weigh within one standard deviation, or one sigma, of the mean value. If the mean is 200 grams, and the standard deviation is 10 grams, then 68% of the bags would weigh between 190 and 210 grams. Again, if this is a normal distribution, we would find that 95.5% of the bags weighed within 2 sigma, or 20 grams, of the mean. If we consider 3 sigma, or 30 grams, we would find that 99.7% of all bags would weigh between 170 grams and 230 grams. A very few, just 0.3% of all bags, would weigh less than 170 grams or more than 230 grams.

If this is the case, our process is wider than our specification limits.

Some of the bags weigh less than 180 grams, and some weigh more than 220 grams. Does that matter? The answer depends on the product or process.

If we were measuring something requiring fine tolerances, or something expensive, or where precise temperatures or mixture compositions were critical, we wouldn't want to find instances where our process was producing outputs outside the specification limits. In that case, we need to reduce the process variation. In the example above, if we can control the process variation and reduce sigma (σ) to, say, 3 grams, then we can be sure to meet the specification requirements (200 grams +/- 10 grams) more than 99.7% of the time.
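The fractions quoted in the potato-chip example can be checked with Python's standard library `statistics.NormalDist`; the 200 gram mean and the two sigma values are taken from the text above:

```python
from statistics import NormalDist

MEAN, LSL, USL = 200.0, 190.0, 210.0   # grams; spec limits from the example

def fraction_in_spec(sigma: float) -> float:
    """Fraction of bags expected to fall between the spec limits."""
    dist = NormalDist(mu=MEAN, sigma=sigma)
    return dist.cdf(USL) - dist.cdf(LSL)

# With sigma = 10 g, the spec limits sit only 1 sigma from the mean:
print(f"{fraction_in_spec(10.0):.1%}")   # about 68% in spec

# Reducing variation to sigma = 3 g puts the limits 3.33 sigma away:
print(f"{fraction_in_spec(3.0):.2%}")    # better than 99.9% in spec
```

This confirms the point of the example: with the original variation, roughly a third of the bags fall outside the limits; shrinking sigma to 3 grams brings the process comfortably inside them.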

The Six Sigma quality movement takes process variation very much to heart. In fact, Six Sigma advocates believe that for many processes, there should be six sigma control between the mean and the specification limits, so that the process is making only a few bad (3.4) parts per million.

Of course, by relaxing the specifications, we can meet Six Sigma requirements, but that isn't usually the way to please customers. Instead, the variation in the process needs to be driven towards zero, so that the histogram gets narrower, and fits more comfortably inside the specification limits.

Six Sigma methodologies are not new. They combine elements of statistical quality control, breakthrough thinking, and management science - all valuable, powerful disciplines. The application of quality tools and process improvement can help in achieving excellent results.

Core of Six Sigma and Implementation

As we have discussed, Six Sigma is a statistical concept that measures a process in terms of defects. Achieving Six Sigma means that the process is delivering only 3.4 defects per million opportunities (DPMO). Thus, it means the process is working nearly perfectly. One sigma represents 691,462 defects per million opportunities, which translates to a percentage of good (non-defective) output of only 30.85%. This is really very poor process performance. FIG. 8 lists sigma level, DPMO, and percentage quality output.
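The DPMO figures in FIG. 8 follow from the normal distribution once the conventional 1.5-sigma long-term process shift is applied. The shift is an industry convention rather than something stated in the text above, so it is made an explicit, adjustable assumption here:

```python
from statistics import NormalDist

def dpmo(sigma_level: float, shift: float = 1.5) -> float:
    """Defects per million opportunities at a given sigma level,
    assuming the conventional 1.5-sigma long-term process shift."""
    defect_fraction = 1.0 - NormalDist().cdf(sigma_level - shift)
    return defect_fraction * 1_000_000

for level in (1, 2, 3, 4, 5, 6):
    print(f"{level} sigma: {dpmo(level):>12,.1f} DPMO")
```

Running this reproduces the figures quoted in the text: about 691,462 DPMO at one sigma, roughly 66,800 at three sigma, and 3.4 at six sigma.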

To get a good understanding of Six Sigma and the DPMO concept, let us consider lost luggage at the airport. Many of us have experienced the frustration of watching the baggage carousel slowly revolve while waiting for luggage that never arrives. The baggage handling capabilities of many airlines are around the three or four sigma level. Let us assume it’s three. That means there are about 66,000 defects for every one million baggage transactions, which equates to approximately a 93% probability that we will get our bags with our scheduled flight. Seven percent of us are not going to receive our bags. Even at a four sigma quality level, we will still have passengers who do not get their baggage. These defects increase costs for the airlines, because airline employees must deal with misplaced luggage and very unhappy passengers. Those defects, the missing bags, can result in lost business in the future. Similarly, in our manufacturing / maintenance process, such defects prohibit us from meeting customers' needs on time, and they increase costs.

Deviations, variations, defects, or waste - whatever we want to call it, the end result is the same. It costs money. No matter what business we are in - distribution, manufacturing, process industry, or services - any defects or hidden waste in any of our processes will impact our bottom line.

======

FIG. 8 Six Sigma Chart

======

To get an accurate view of critical processes, we need to understand the limits on variations. Root causes of variation are explored, and the classic Deming PDCA (Plan, Do, Check, Act) cycle is used to plan improvements, implement and test them, evaluate whether they worked, and then standardize them if they did. However, for problem solving in Six Sigma, the PDCA cycle has been modified slightly into a five-phase methodology called DMAIC.

DMAIC represents the five phases in the Six Sigma methodology:

• D - Define

• M - Measure

• A - Analyze

• I - Improve

• C - Control

The DMAIC method involves completing the necessary steps in a sequence. Skipping a phase or jumping around won’t produce the desired results. It’s a structured process to solve problems with proper implementation and follow-up. Each phase requires the following:

Define Phase (D)

• Identify the problems.

• Create a project to combat one or more problems.

• Define the parameters (boundaries) of the project.

• Determine the vital few factors to be measured, analyzed, improved, and controlled.

Measure Phase (M)

• Select critical to quality (CTQ) characteristics in the process / product, e.g., Y CTQ, where Y might be corrective maintenance cost of an asset or system, downtime, etc.

• Define performance standard for Y - what is the desired goal to achieve?

• Establish and validate a process to measure Y.

Analyze Phase (A)

• Define improvement objectives for Y.

• Define and identify sources (Xs) creating variations in Y.

• Review/sort potential sources for change in Y.

• Identify the vital few sources (Xs) creating variations or defects.

Improve Phase (I)

• Determine variable relationships among vital few factors - Xs.

• Establish operating range for vital few Xs.

• Validate measurement plan for Xs.

Control Phase (C)

• Establish plan to control vital few Xs.

• Implement the plan - process control vital few Xs.

• Review the data per plan.

• Make changes as necessary until the process is in control and stable.

Pareto Analysis - 80/20 Principles

A Pareto chart is a bar graph that arranges information in such a way that priorities for process improvement can be established easily. It’s a tool for visualizing the Pareto principle, which states that a small set of problems, the "vital few," affecting a common outcome tend to occur more frequently than the remainder. In other words, 80% of the effects (failures) are created by 20% of the causes (assets / components). A Pareto chart can be used to determine which subset of problems should be solved first, or which problems deserve the most attention. Pareto charts are often constructed to provide a before-and-after comparison of the effect of improvement measures. Basically, it’s a resource optimization tool that helps to focus on vital few issues to maximize gains with limited resources.

The Pareto chart is used to illustrate occurrences of problems or defects in a descending order. It can be used both during the development process as well as when components or products are in use. It can depict which assets are failing more in a specific area or what type of components are having more failures than others. FIG. 9 shows an example of a Pareto chart for a compressor system with its components and failure history.

Pareto charts are a key improvement tool because they help us identify patterns and potential causes of a problem. One trick, often overlooked, is to create several Pareto charts out of the same set of data. Doing so can help you quickly scan a number of factors that might contribute to a problem and focus on those with the greatest potential payback for your efforts.

FIG. 9 A Pareto Chart for a Compressor System
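A Pareto ranking like the one in FIG. 9 can be produced from raw failure counts in a few lines of Python. The compressor component names and counts below are hypothetical, invented for illustration:

```python
# Sketch: rank hypothetical failure counts and find the "vital few"
# components that account for 80% of all recorded failures.

failures = {
    "Valves": 44, "Seals": 27, "Bearings": 13,
    "Motor": 8, "Lubrication": 5, "Other": 3,
}

total = sum(failures.values())
cumulative, vital_few = 0, []
# Walk the components in descending order of failure count, accumulating
# their share of the total until the 80% threshold is reached.
for component, count in sorted(failures.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    vital_few.append(component)
    if cumulative / total >= 0.80:
        break

print(vital_few)   # the few components driving ~80% of the failures
```

Plotting the sorted counts as bars with a cumulative-percentage line gives the familiar Pareto chart; the loop above simply finds where that cumulative line crosses 80%.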

History of Pareto - Where It Came From

The Pareto Principle, or 80/20 Rule, was first developed in 1906 by the Italian economist Vilfredo Pareto, who observed that wealth and power were concentrated in a relatively small proportion of the total population.

His discovery has since been called many names, including the Pareto Principles, Pareto Laws, the 80/20 rule or principle, the principle of least effort, and the principle of imbalance. The 80/20 principle states that there is a built-in imbalance between causes and results, inputs and outputs, and efforts and rewards. This imbalance is shown by the 80/20 relationship. In business, many applications of the 80/20 Principle have been validated.

For example, 20% of products usually account for about 80% of dollar sales value. The same is true for 20% of customers accounting for 80% of dollar sales value. In turn, 20% of products or customers usually also account for about 80% of an organization's profits.

Dr. Joseph M. Juran, the world-renowned quality expert, is credited with adapting Pareto's economic observations to business applications.

Dr. Juran observed that 20% of the defects caused 80% of the problems.

Project managers know that 20% of the work (the first 10% and the last 10%) consumes 80% of the time and resources. We can apply the 80/20 Rule almost universally, from the science of management to the physical world.

Pareto Analysis is also used in inventory management through an approach called ABC Classification, as discussed in Section 5.

Example of Pareto Principle

Generally, the Pareto Principle is an observation, not a law: most things in life are not distributed evenly. It can mean any of the following:

• 20% of the process issues result in 80% of defects

• 20% of the assets cause 80% of the failures

• 20% of the failures cost 80% of the corrective maintenance budget

• 20% of the input creates 80% of the result

• 20% of criminals account for 80% of the value of all crimes

• 20% of motorists cause 80% of accidents

• 20% of your clothes will be worn 80% of the time

• And on and on…

Recognize that the numbers don't have to be exactly 20% and 80%.

The key point is that most things in life, effort, reward, output, etc., are not evenly distributed; some contribute more than others. The number 20% could be anything from 10-30%; similarly 80% could be 60-90%.

What we need to remember: 20% is the vital few; the remaining are the others. Thus, 20% of the assets could create 60, 80, or 90% of the failures.

Pareto analysis helps us to prioritize and focus resources to gain maximum benefit.

FIG. 10 shows an example of Pareto analysis of plant assets.

The value of the Pareto Principle is that it reminds us to focus on the 20% that are important. Of the things we do during our daily routine, only 20% of our activities really matter -- they produce 80% of our benefits.

These are the activities we must identify and emphasize.

5. Lean Maintenance Tools

"Lean" is a new buzzword. Words such as lean production, lean manufacturing, lean maintenance, lean management, lean enterprise, and lean thinking have been abundantly discussed in literature in the last few years.

But what does "Lean" really mean?

As the word says, Lean means literally - LEAN. We all need to be lean to become or stay healthy. We need to get rid of the fat, the waste which we carry around with us. Similarly, in our work environment, we need to be efficient and effective (in other words, lean) to stay healthy and survive in today's competitive environment. Appropriate use of tools is necessary to be efficient and effective, and to ultimately create value for our customers. In fact, many subject matter experts and authors are repeating the same mantra -- get rid of the waste. For example, Kevin S. Smith, President of TPG Productivity, Inc., states, "Lean is a concept, a methodology, a way of working; it's any activity that reduces the waste inherent in any business process."

In their famous guide Lean Thinking, James P. Womack and Daniel T. Jones write that the critical starting point for lean thinking is value. Value can only be defined by the ultimate customer. It's only meaningful when expressed in terms of a specific product (a good or a service, and often both at once) that meets the customer's needs at a specific price at a specific time.

FIG. 10 Pareto Analysis of Plant Assets

Lean Background

Lean philosophy or thinking is not new. At the turn of the century, Henry Ford, founder of the Ford Motor Company, was implementing lean philosophy. Of course he didn't use the word lean at that time.

John Krafcik, a Massachusetts Institute of Technology (MIT) researcher in the late 1980s, coined the term Lean Manufacturing while involved in a study of best practices in automobile manufacturing. The MIT study had examined the methodology developed at Japanese auto giant Toyota under the direction of production engineer Taiichi Ohno, who later became known as the father of TPS, the Toyota Production System, a model of the lean system. At the end of World War II, with Toyota needing to improve brand image and market share, Ohno reputedly turned to Henry Ford's classic book, Today and Tomorrow for inspiration. One of Ford's guiding principles had been the elimination of waste.

Ohno is credited with developing the principles of lean production.

His philosophy, which focused on eliminating waste and empowering workers, reduced inventory and improved productivity. Instead of maintaining resources in anticipation of what might be required for future manufacturing, as Henry Ford did with his production line, the management team at Toyota built partnerships with suppliers. In effect, under the direction of Ohno, Toyota automobiles became made-to-order. By maximizing the use of multi-skilled employees, the company was able to flatten their management structure and focus resources in a flexible manner. Because of this, Toyota was able to make changes quickly; they were often able to respond more quickly to market demands than their competitors.

To illustrate lean thinking, Shigeo Shingo, another Japanese Lean and Quality expert, observed that only the last turn of a bolt actually tightens it - the rest is just movement. This ever finer clarification of waste is key to establishing distinctions between value-added activity, waste, and non-value-added work. Non-value-added work is waste that must be removed. Ohno defined three broad types of waste: Muri, Mura, and Muda. These are three key Japanese words in lean terminology.

1. Muri: Overburden

2. Mura: Unevenness

3. Muda: Waste, non-value-added work

Muri is all the unreasonable work that an organization imposes on workers and machines because of poor organizational design, such as carrying heavy weights, unnecessary moving, dangerous tasks, even working significantly faster than usual. It’s pushing a person or a machine beyond its natural limits. This may simply be asking a greater level of performance from a process than it can handle without taking shortcuts and informally modifying decision criteria. Unreasonable work is almost always a cause of multiple variations.

Mura is a traditional Japanese term for unevenness, inconsistency in physical matter or human spiritual condition. Mura is avoided through JIT systems, which are based on little or no inventory, by supplying the production process with the right part, at the right time, in the right amount, and first-in, first-out component flow. Just in Time systems create a "pull system" in which each sub-process withdraws its needs from the preceding sub-processes, and ultimately from an outside supplier. When a preceding process does not receive a request or withdrawal, it does not make more parts. This type of system is designed to maximize productivity by minimizing storage overhead.

Muda is a traditional Japanese term for activity that is wasteful and doesn't add value or is unproductive.

The original seven muda are:

1. Transportation. Moving material and parts that are not actually required for the process.

2. Inventory. No extra inventory should be in the system. All components, work-in-process, and finished product not being processed are waste.

3. Motion. People or equipment moving or walking more than required to perform the work are waste.

4. Waiting. Waiting for the next step or activity. Time not being used effectively is a waste.

5. Overproduction. Production ahead of demand or need.

6. Inappropriate Processing. Produce only what is needed and when needed with well-designed processes and assets.

7. Defects. The simplest form of waste involves components or products that don’t meet the specification. They lead to additional inspections and defects that must be fixed.

First, Muri focuses on the preparation and planning of the process, or what work can be avoided proactively by design. Next, Mura focuses on how the work design is implemented and the elimination of fluctuation at the scheduling or operations level, such as quality and volume. Muda is then discovered after the process is in place and is dealt with. It’s seen through variation in output. It’s the role of management to examine the Muda in the processes and eliminate the deeper causes by considering the connections to the Muri and Mura of the system. The Muda and Mura inconsistencies must be fed back to the Muri, or planning, stage for the next project.

Lean requires the use of a set of tools that assist in the identification and steady elimination of waste. Examples of such tools are Brainstorming, Cause and Effect Analysis, Five S, Kanban (pull systems), Poka-yoke (error-proofing), Pareto Analysis, and Value Stream Mapping.

Lean Maintenance

Much has been written and said about lean concepts in manufacturing, but what about lean maintenance? Is it merely a subset of lean manufacturing? Is it a natural spinoff from adopting lean manufacturing practices? Lean maintenance is neither a subset nor a spinoff. Instead, it’s a prerequisite for success as a lean organization. Can we imagine lean JIT concepts working without reliable assets or good maintenance practices? Of course, we want maintenance to be lean - efficient and effective - without waste. Lean maintenance has nothing to do with thinning out warm bodies, or more directly, reducing maintenance resources.

Rather, it has to do with enhancing the value-added nature of our maintenance and reliability efforts.

In maintenance, our customers are inherently internal to our organization - they are our operations / production departments. One of the primary responsibilities of maintenance is to provide plant capacity to its customers. Let's face a fundamental truth: We can't be successful with Lean manufacturing if we don't have reliable assets, reliable machines.

Lean maintenance is not performing lean (less) corrective or preventive actions. It’s not about facilitating a poor maintenance program.

Maintenance customers expect maintenance and reliability programs to be optimized - effective and efficient, and fully supporting the need to operate at designed or required capacity reliably.

The majority of maintenance activities revolve around systems and the processes that move people, material, and machine together such as preventive maintenance programs, predictive maintenance programs, planning and scheduling, computerized maintenance management systems, and store room and work order systems. We need to apply the principles of Lean to these maintenance programs and processes to drive out the non-value-added activities.

Value stream mapping should be performed for key maintenance processes to identify non-value-added activities. It’s a good practice to create current and future states of the maintenance processes in order to develop a plan to reduce and eliminate wasteful activities. In developing the current- and future-state maps of a maintenance process, we must also assess the skills and knowledge of our maintenance personnel. A poorly skilled person operating within a great system will produce poor results.

Likewise, if we have a good preventive maintenance program, yet our PMs are poorly structured and designed, our PMs will achieve poor results. Therefore we need to optimize PMs using tools such as FMEA/RCM to give new life to our efforts.

The endless pursuit of waste elimination is the essence of lean maintenance. Eliminate waste by understanding the seven wastes discussed earlier in relation to maintenance. Identify where they exist and eliminate them. For example:

1. Transportation. Plan and provide materials and tools to reduce the number of extra trips to the store room to hunt for the right parts.

2. Inventory. Eliminate or minimize extra inventory in the system. Keep only the right material / parts / tools in the store room.

3. Motion. Minimize people movement by improved planning.

4. Waiting. Minimize waiting for the next step - another skilled person or part - by improved planning and scheduling.

5. Overproduction. Develop optimized PMs, FMEA/RCM-based, perform root cause analysis to reduce failures, etc.

6. Inappropriate Processing. Use the right tools and fixtures to improve maintenance processes.

7. Defects. Eliminate rework and poor workmanship. Educate / train maintenance personnel appropriately.

Most organizations, even without proclaiming a Lean maintenance effort, might actually be engaged in the very activity that will get them there. For example, TPM and Lean share many traits. Standardization, 5S, and mistake-proofing are just a few other examples. More importantly, TPM recognizes that the operator is just as responsible for asset reliability as the maintenance person. One of the key objectives of TPM is to eliminate the six major losses: breakdown, set up and adjustments, idling and minor stoppages, operating at reduced speeds, defects, and reduced yield.

We employ a CMMS, work orders, planning and scheduling, and other system tools to mitigate these losses.

Our challenge going forward is to identify activities which don't add value to maintenance and reliability. Use tools to analyze problems (waste in the system), develop value-added solutions, and implement practices that have been discussed earlier in the guide to become a lean organization.

Value Stream Mapping (VSM)

Processes, both manufacturing and service, can be compared to a river. They flow in a natural direction and carry material (products and information) with them from one point to another. Now, visualize all that a river carries with it. In addition to value-added elements such as water and fish, it also carries other elements which may not add any value, or may instead be harming it. Similarly, our processes contain many non-value-added and wasteful activities which need to be identified and eliminated. Value Stream Mapping identifies waste and helps to streamline the process for higher productivity.

VSM is a tool commonly used in continuous improvement programs to help understand and improve the material and information flow within processes and organizations. As part of the lean methodology, VSM captures the current issues and presents a realistic picture of the whole process from end to end in a way that is easy to understand. To some, VSM is a paper and pencil tool that helps us to see and understand the flow of information and material as it makes its way through the value stream. It helps to identify steps that are not adding any value - they are waste and need to be removed from the process or improved.

VSM is a structured process and is usually carried out in eight steps.

Step 1 - Identify the problem and set expectations.

Step 2 - Select the team.

Step 3 - Select the process to be mapped.

Step 4 - Collect data and produce the current state map.

Step 5 - Critique / evaluate the current state; identify waste and non-value-added items.

Step 6 - Map the future state.

Step 7 - Create an action plan and implement.

Step 8 - Measure and evaluate results; readjust the plan, if needed.

FIG. 11 is an example of the mapping process. In creating the maps, sometimes standard symbols are used to identify wait, flow, operation, storage, etc. Also, the following color coding is used to display the value of each activity:

• Green - Value-added activities

• Yellow - Non-value-added activities that may nevertheless be required to meet regulatory or organizational requirements

• Red - Non-value-added activities (waste)

Value stream mapping is not a one-time event. Successful organizations apply value stream mapping continuously to their processes to get better results.
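Once the steps of a current-state map have been classified green, yellow, or red, a useful summary metric is the value-added ratio: the share of total lead time spent on green activities. The sketch below assumes a hypothetical maintenance process; the step names and times are invented for illustration.

```python
# Minimal sketch of a current-state map for a repair process, using the
# green/yellow/red classification described above. Data is illustrative.
steps = [
    ("Write work order",       10, "yellow"),  # required, not value-added
    ("Wait for approval",     240, "red"),
    ("Hunt for parts",         20, "red"),
    ("Perform repair",         90, "green"),
    ("Test and return asset",  30, "green"),
]

total_time = sum(minutes for _, minutes, _ in steps)
value_added = sum(minutes for _, minutes, color in steps if color == "green")

# Value-added ratio: share of total lead time spent on green activities.
ratio = value_added / total_time
print(f"Lead time: {total_time} min, value-added: {value_added} min "
      f"({ratio:.0%})")
```

Here only 120 of 390 minutes add value; the future-state map would target the red steps (waiting for approval, hunting for parts) first.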

FIG. 11 The Mapping Process

6. Other Analysis and Improvement Tools

The Theory of Constraints

The Theory of Constraints (TOC) is an improvement tool originally developed by Eliyahu M. Goldratt and introduced in his guide The Goal.

It’s based on the principle that a system, like a chain, is only as strong as its weakest link.

At any point in time, there is most often only one aspect of the system that limits its ability to achieve more of its goal. For the system to attain any significant improvement, that constraint must be identified and the whole system must be managed with it in mind.

Therefore, if we want to increase throughput, we must identify and eliminate the constraint (or bottleneck).

Process Flow Types

There are four primary process flow types in the TOC lexicon. The four types can be combined in many ways in larger facilities.

• A-Flow - The general flow of material is many-to-one, such as in a plant where many sub-assemblies converge for a final assembly. The primary problem in an A-Flow process is synchronizing the converging lines so that each supplies the final assembly point at the right time.

• I-Flow - Material flows in a sequence, such as in an assembly line. The primary work is done in a straight sequence of events (one-to-one). The constraint is the slowest operation.

• T-Flow - The general flow is that of an I-Flow (or has multiple lines), which then splits into many assemblies (many-to-many). Most manufactured parts are used in multiple assemblies and nearly all assemblies use multiple parts.

• V-Flow - The general flow of material is one-to-many, such as a process that takes one raw material and makes many final products. Classic examples are meat rendering plants and steel manufacturers. The primary problem in V-Flow is "robbing" where one operation (A) immediately after a diverging point "steals" materials meant for the other operation (B). Once the material has been processed by operation A, it cannot come back and be run through operation B without significant rework.

Once the key constraint has been identified, we need to do everything possible to maximize the rate of flow through that constraint, and not let any other portion of the system run at a faster rate. In a well-optimized system, only one asset runs at full capacity; the rest run at partial capacity.

The key steps in implementing an effective TOC approach are:

1. Communicate the goal of the process or organization. For example, "Make 100 units or more of X products per hour."

2. Identify the constraint. This is the factor within a process or system that prevents the organization from meeting that goal.

3. Exploit the constraint. Make sure that the constraint is dedicated to what it uniquely adds to the process, and not doing things that it should not do.

4. Elevate the constraint. If required, increase the capacity of the constraint by adding additional equipment or reducing the set-up time.

5. If, as a result of these steps, the constraint has moved to another part of the system, return to Step 1 and repeat the process for the new constraint.
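For a simple serial line, the steps above can be sketched in a few lines of code: the constraint is the slowest operation, and elevating it raises system throughput only until another operation becomes the new constraint. The operation names and capacities below are assumptions for illustration, not taken from any real plant.

```python
# Sketch of the TOC focusing steps on a hypothetical serial line.
# Capacities are in units/hour; the goal from Step 1 is 100 units/hour.
goal = 100
capacities = {"cutting": 140, "welding": 90, "painting": 120, "assembly": 110}

# Step 2: in a serial flow, the constraint is the slowest operation.
constraint = min(capacities, key=capacities.get)
throughput = capacities[constraint]
print(f"Constraint: {constraint} at {throughput}/hr (goal: {goal}/hr)")

# Steps 3-4: exploiting/elevating the constraint (e.g., reduced set-up
# time on welding) helps only until another operation becomes the limit.
capacities[constraint] = 125
new_constraint = min(capacities, key=capacities.get)
print(f"New constraint: {new_constraint} at {capacities[new_constraint]}/hr")
```

Note how the constraint moves from welding to assembly after the improvement, which is exactly the situation Step 5 addresses.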

Affinity Analysis or Diagram

The primary purpose of an Affinity analysis, also known as the KJ method after its author Jiro Kawakita, is to organize ideas, data, facts, opinions, and issues into naturally related groups, especially when a problem is complex.

An affinity diagram is the result of a creative process focused on finding the major themes affecting a problem by generating a number of ideas, issues, and opinions. The process identifies these ideas, groups naturally related items, and identifies the one concept that ties each grouping together. The team working on a problem reaches consensus by the cumulative effect of individual sorting decisions rather than through discussion.

In many problem-solving situations, brainstorming is a common tool used to gather issues and ideas. As a mechanism for allowing a group of individuals to get ideas and issues on the table, brainstorming is one of the best methodologies. However, too often such sessions generate large quantities of issues, which can become complex to review and difficult to interpret. It can also be challenging to highlight particular trends or themes from the issues gathered during a brainstorming session.

A variety of methods are available to utilize gathered ideas efficiently. Of these methods, Affinity diagrams represent an excellent tool both to group ideas in a logical way and to capture themes that have developed during the brainstorming. The following steps are used to develop an Affinity diagram (FIG. 12):

a. Identify the problem or issue to solve.

b. Conduct a brainstorming meeting.

c. Record ideas and issues on "post-it notes" or cards.

d. Gather the notes and cards in a single place (e.g., a desk or wall).

e. Sort the ideas into similar groups, patterns, or themes based on the team's thoughts. Continue until all notes and cards have been sorted and the team is satisfied with their groupings.

f. Label each group with a description of what it represents and place the name at the top of each group.

g. Capture and discuss the themes or groups and how they may relate.
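The sorting itself is a manual, team-driven activity, but the resulting diagram can be captured as a simple mapping from theme label to notes, which makes it easy to count and compare groups afterward. The notes and theme labels below are invented for illustration.

```python
# Sketch: capturing the team's affinity groupings after the sort.
from collections import defaultdict

# (note, theme assigned by the team during sorting) -- illustrative only.
sorted_notes = [
    ("Parts not in stock", "Materials"),
    ("Wrong part delivered", "Materials"),
    ("PM tasks too vague", "Procedures"),
    ("No torque specs in job plan", "Procedures"),
    ("New hires lack training", "Skills"),
]

diagram = defaultdict(list)
for note, theme in sorted_notes:
    diagram[theme].append(note)

# Each theme becomes a labeled group at the top of the diagram.
for theme, notes in diagram.items():
    print(f"{theme}: {len(notes)} note(s)")
```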

Although affinity diagrams are not complicated, getting the most from them takes a little practice. For example:

• Make sure that the ideas and issues captured are understood. Brainstorming sessions have a habit of oversimplifying issues, or of reaching agreement without understanding the concepts being discussed.

• Don’t place the notes in any predetermined order. Furthermore, don’t determine categories or headings in advance.

• Allow plenty of time for grouping the ideas. Maybe post the randomly-arranged notes in a public place and allow grouping to happen over a few days, if needed.

• Use an appropriate number of groups within the diagram. Too many can become confusing and unmanageable.

Affinity diagrams are great tools for assimilating and understanding large amounts of information. The next time you are confronting a large amount of information or number of ideas and feel overwhelmed at first glance, use the affinity diagram approach to discover all the hidden linkages.

FIG. 12 Preparing an Affinity Diagram

Barrier Analysis (BA)

Barrier analysis is a technique often used particularly in process industries. It’s based on tracing energy flows, with a focus on barriers to those flows, to identify how and why the barriers did not prevent the energy flows from causing damage. Assets and systems generally have barriers, defenses, or controls in place to increase their safety. Barrier analysis can be used to establish the type of barriers that should have been in place to prevent the incident, or could be installed to increase system safety.

Therefore, Barrier Analysis can be used either proactively, to help to design effective barriers and control measures, or reactively, to clarify which barriers failed and why.

Barrier analysis offers a structured way to visualize the events related to a system failure. Barriers can be physical, human action, or system-controlled. Barriers can be classified as:

• Physical barriers

• Natural barriers

• Human action barriers

• Administrative barriers

To perform Barrier Analysis, identify the issue to be analyzed. For example, consider a 1 1/2 inch, 3000 psi hydraulic fluid supply line that ruptured due to overpressure.

List all the barriers that were in place, but failed to stop this rupture.

Consider the circumstances of the incident and assess the performance of each barrier in this situation. For barriers that are assessed as having failed, consider why they failed. Evaluate the impact of each failure on the incident being analyzed. If the failure was causal, improvement efforts should be focused on those aspects of the system in terms of quality and safety. Record findings and establish the improvements needed. Occasionally a more detailed analysis has to be performed using the Fishbone, Five Whys, or other techniques before recommendations are made.
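The barrier review above lends itself to a simple tabular record: each barrier, its type, whether it held, and why it failed. The sketch below uses the hydraulic-line example; the barrier names, types, and findings are assumptions for illustration, not from an actual investigation.

```python
# Hedged sketch: recording reviewed barriers for the hydraulic-line
# rupture. All entries are hypothetical.
barriers = [
    {"name": "Relief valve",        "type": "physical",       "held": False,
     "why_failed": "set point drifted; valve not on PM schedule"},
    {"name": "Pressure alarm",      "type": "physical",       "held": False,
     "why_failed": "alarm missed during shift change"},
    {"name": "Burst shield",        "type": "physical",       "held": True,
     "why_failed": None},
    {"name": "Operating procedure", "type": "administrative", "held": False,
     "why_failed": "max-pressure step missing from procedure"},
]

# Failed barriers are the candidates for corrective action, possibly
# after deeper analysis with Five Whys or a fishbone diagram.
failed = [b for b in barriers if not b["held"]]
for b in failed:
    print(f"{b['type']} barrier '{b['name']}' failed: {b['why_failed']}")
```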

7. Summary

Organizations must continually improve processes, reduce costs, and cut waste to remain competitive. To make improvements in any process, data needs to be analyzed, utilizing tools and techniques to develop and implement corrective actions. A variety of methods, techniques, and tools are available to us, ranging from a simple checklist to sophisticated modeling software. They can be used effectively to lead us to appropriate corrective actions. We have discussed a few of these tools, such as 5 Whys, cause-and-effect diagrams (aka fishbone diagrams), Pareto analysis, root cause analysis, FMEA, Six Sigma, Lean, and Value Stream Mapping.

Applying the continuous improvement tools discussed here can optimize work processes and help any organization improve results, regardless of the size or type of business environment.

8. QUIZ

__1 Into what categories can root cause analysis be classified?

__2 What steps are needed to perform an RCA?

__3 When should we use the fishbone tool?

__4 How is a fishbone diagram constructed? Explain the steps that are needed.

__5 What is the purpose of the FMEA?

__6 What key steps / elements are needed to perform an FMEA?

__7 What does 6 Sigma mean?

__8 If a process is performing at 5 Sigma level, what percent of defective parts can be expected?

__9 What is the difference between Deming's improvement cycle and DMAIC?

__10 What is the primary objective of Pareto analysis?

9. References and Suggested Reading

FMEA, 3rd Edition. Reference Manual. AIAG.

Harry, Mikel and Richard Schroeder. Six Sigma: The Breakthrough Management Strategy. Currency Publications, 2000.

Koch, Richard. The 80/20 Principle: The Secret to Success by Achieving More with Less. Currency Books, 1998.

Latino, Robert and Kenneth Latino. Root Cause Analysis: Improving Performance for Bottom Line Results. CRC Press, 2002.

Tague, Nancy R. The Quality Tool Box. The Quality Press, 2005.
