
The 7 Basic Quality Tools: How and When to Employ Them


Utilizing the Seven Basic QC Tools consistently in your job and life can help you determine the solution to 95% of the quality-related problems that stand between you and success. To clarify, I am in no way a self-help guru, and determining the root cause can do little by itself to implement a solution in the workplace or in day-to-day life, but it will point you down the path toward long-term correction. Many solutions in the workplace are apparent, but resource constraints can easily pull a team away from a long-term solution when the short-term fix is so much more attainable. That is the way it is in life as well: when we follow the path of least resistance, we usually end up back in that same inescapable situation.

Kaoru Ishikawa (1985) is credited with this statement concerning these tools: "as much as 95 percent of all quality-related problems in the factory can be solved with seven fundamental quantitative tools." Before I go further, I want to clarify that eight tools rather than seven will be discussed because, over time, the classic stratification tool has sometimes been replaced with flowcharting in some industries. Ishikawa's statement provides three key insights into these tools:

  1. They are applicable in the problem-solving situations most commonly encountered.

  2. Excluding Flowcharts and Cause-and-Effect Diagrams, they are quantitative and rely on numerical data.

  3. They are most widely used as aids in tracking, monitoring, and analyzing data rather than for planning functions.

If you can't describe what you are doing as a process, you don't know what you're doing.

-W. Edwards Deming


Stratification

Stratification (per the ASQ definition) is the act of sorting data, people, and objects into distinct groups or layers, used in combination with other data analysis tools. I'm sure many of us reading this have dealt with the pain of sorting a massive mountain of unsorted data into the appropriate "buckets" that allow meaningful patterns to emerge. Stratification of this type is a critical step during data collection planning to determine the most effective strata for efficient analysis. My personal preference is to have the collected data noted by geographic location, line, workstation, shift, date, product/process, and operator. Other factors can be added as required, but this set usually covers the critical elements broadly. The best way to use the stratification tool is to set up the data collection process so that all data are collected in stratified form, allowing each component to be easily analyzed. Any analyst can recognize emergent patterns once the data are stratified; otherwise, too much time can be wasted sifting through data later.
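As a small illustration of collecting data already stratified, here is a Python sketch; the records, field names, and `stratify` helper are hypothetical examples rather than any particular system. Each defect record carries its strata (line, shift) at collection time, so grouping by any factor later is trivial:

```python
from collections import defaultdict

# Hypothetical defect records, tagged with their strata (line, shift)
# at the moment of collection.
records = [
    {"line": "A", "shift": 1, "defect": "scratch"},
    {"line": "A", "shift": 2, "defect": "dent"},
    {"line": "B", "shift": 1, "defect": "scratch"},
    {"line": "A", "shift": 1, "defect": "scratch"},
]

def stratify(records, key):
    """Group (stratify) the records by any factor of interest."""
    groups = defaultdict(list)
    for r in records:
        groups[r[key]].append(r)
    return dict(groups)

by_line = stratify(records, "line")
print({k: len(v) for k, v in by_line.items()})  # defect count per line
```

Because the strata travel with each record, the same log can be re-sliced by shift, operator, or date without re-collecting anything.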

Flowchart

Flowcharts create their own kind of "bucket." Each step of the process is separated and clearly defined, preferably with the next step considered the "customer" of the previous step. The graphic representation displays the elements, components, or tasks associated with a process. A flowchart can be a high-level view of a process and quite simple, or it can zero in on every step of the process (30,000 ft vs. ground level). The high-level view is employed when you are trying to determine where the problem is. After you have narrowed the problem down to a specific portion of the process, a more detailed evaluation of the flow can help you determine a root cause, but flowcharts are usually aids, and only every now and then is a root cause revealed through a flowchart alone. Flowcharting should be the first step in a problem-solving exercise; if the flowchart already exists, be sure it is up to date with the current process. In the modern data-driven world, stratification has become built into our data analysis, and as the process-driven concept evolved in an environment where stratification was taken for granted, flowcharts gained much more prominence. It is still critical to understand both concepts.

Pareto Chart

Per the ASQ definition, a Pareto chart is a bar graph with some unique properties. The lengths of the bars represent frequency or cost (time or money, for instance) and are arranged with the longest bars on the left and the shortest on the right. In this way, the chart visually depicts which situations are most significant. The term Pareto chart exists thanks to Vilfredo Pareto, who originally postulated the "80/20" rule to explain economic outcomes in terms of the "vital few" and the "trivial many." Pareto observed that approximately 20% of economic factors were the most vital to the outcome of economic situations, with the remaining 80% having only a trivial impact. Juran and Gryna (1980) adapted the principle for quality applications to help quality improvement professionals focus on the 20% of categories/factors that have the most impact on the process.

As for when to use a Pareto chart, it can be applied frequently throughout the process. At the beginning of a new quality improvement project, the team attempts to narrow down the most vital areas to address for improvement. After the project has been identified, the team can use a Pareto chart to detect the best areas to focus on, especially if there are too many possible paths to address. This is an excellent reason to have stratified data: it makes creating a Pareto chart much more manageable. If the team has a visual indication of the most likely area for a needed solution, cross-functional teams will find it easier to harmonize. Pareto charts are also a great communication tool for the team, predominantly when the cross-functional group consists of many members who may not understand which areas are of most concern.

You will find that a Pareto chart can help you in day-to-day decisions, such as choosing a new vehicle. Some choices may be quickly eliminated from consideration based upon the frequency of complaints, and when you are down to just a few, the analysis must continue.

The implementation of a Pareto chart today is straightforward. Minitab, Excel, and many other forms of software can quickly produce a Pareto chart from the raw data. If you don't have the software, the process is relatively simple:

  1. Rank the categories by frequency (or relative frequency), from highest on the left to lowest on the right.

  2. Label each axis in a way that clearly defines the categories and the unit of measure.

  3. Draw bars for each category that correspond to their respective counts. Keep the width of each bar the same.

  4. Add cumulative counts and a line depicting the rising total. The final category on the right can be marked "Other" to avoid a potentially long list of trivial counts.
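The steps above can be sketched in a few lines of Python; the defect categories and counts below are made-up illustration data:

```python
# Hypothetical defect tallies from a stratified check sheet.
counts = {"scratch": 48, "dent": 27, "misalign": 15, "stain": 6, "other": 4}

# Step 1: rank categories from highest to lowest count.
ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)

# Step 4: running total expressed as a rising cumulative percentage.
total = sum(counts.values())
cum = 0
for name, n in ranked:
    cum += n
    print(f"{name:10s} {n:3d}  {100 * cum / total:5.1f}%")
```

Reading the cumulative column immediately shows the "vital few": here the top two categories account for 75% of all defects, which is exactly the signal a Pareto chart gives visually.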

Cause-and-Effect Diagrams (Ishikawa)

Next is the Cause-and-Effect Diagram, a.k.a. the C&E diagram, fishbone (the most common term), or, by its proper name, the Ishikawa diagram. This tool was developed and popularized during the 1960s by Kaoru Ishikawa, one of the founding fathers of modern quality management. The term fishbone is derived from the diagram's resemblance to a fish skeleton. The tool was developed to visually document the analysis of factors that impact a single problem or opportunity. The causes are the factors, and the effect is the problem or opportunity.

The C&E diagram is most effectively employed in problem-solving situations where the root cause of the problem or the primary cause(s) of the opportunity are unclear, and the team members have situational awareness of the issues and potential causes. The tool is best utilized with a cross-functional team led by a facilitator who can help clear roadblocks. The C&E diagram is often employed during the analysis stage of a Six Sigma project or the root cause investigation of a corrective action.

Using a fishbone diagram is relatively simple but very useful for visually recording team discussions. First, the problem or opportunity is stated, using a short description, inside a rectangle on the right side of the diagram: the "head" of the fishbone. Next, the major contributing factors are identified and stratified based on each factor's category. Sometimes one factor can be noted in multiple categories. The common categories are usually the 6Ms: Mother Nature (environment), Manpower (people), Methods, Machinery (equipment), Materials, and Measurement. There are other variants based upon the task (service/management/sales), and the categories can be adjusted as the team desires; just be sure your net of categories is sufficient. After the major causes have been identified, the driving factors behind each major cause are added as smaller "bones" branching off the main skeleton, and the best way to inquire about each cause and sub-cause is to ask the classic "why might this have happened?". At this phase, you do not yet need concrete data (though it will help immensely), so subject matter experts should help narrow the potential causes and focus the team.
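One lightweight way to capture a fishbone during the discussion is a simple mapping from the 6M categories to candidate causes. Everything below (the effect, every cause) is a hypothetical example, not from a real investigation:

```python
# A fishbone captured as a mapping: 6M category -> candidate causes.
effect = "Paint defects on hood"
bones = {
    "Mother Nature": ["humidity swings in booth"],
    "Manpower":      ["new operator on shift 2"],
    "Methods":       ["spray pass overlap not specified"],
    "Machinery":     ["nozzle wear"],
    "Materials":     ["paint batch viscosity drift"],
    "Measurement":   ["gloss meter out of calibration"],
}

# Print a simple text outline of the skeleton.
print(f"Effect: {effect}")
for category, causes in bones.items():
    print(f"  {category}")
    for cause in causes:
        print(f"    - why might this have happened? {cause}")
```

A plain nested structure like this keeps the meeting's output in a form that feeds directly into the follow-on Pareto or XY analysis.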

At the end of the exercise, you should have a likely driving factor or factors for each relevant category of bone, allowing the investigation to proceed further. Sometimes at this point, the team can feel that there are too many things attacking the process. Still, the next step is to continue: use Pareto analysis of how each driving factor impacts common KPIs to determine which ones are the greatest priority, and look at an XY diagram for each driving factor versus the effect (see next section).

Back to using a fishbone diagram to choose a new car: if you gathered several auto-enthusiast friends and helped them hash out your fishbone diagram based upon your opportunity to purchase a new car, you might not determine which car to buy, but you would likely determine what areas of concern to watch for in your search and focus your coming investigation based upon your team's input. You might even find yourself doing some research to determine whether any of the cars, or types of cars, should be eliminated based upon the root causes of the most concerning factors you flagged.

Scatter Diagram/XY Diagram

Scatter diagrams (also known as XY diagrams) are used to graphically display quantitative indications of a relationship between two variables (usually an input factor and an output). They plot the input variable (X) against the output variable (Y), hence the term XY diagram. What is being investigated is how these two variables correlate. When two sets of data are strongly linked together, they have a high correlation. The term correlation is made of co- (meaning "together") and relation. Correlation is positive when the values increase together, negative when one value decreases as the other increases, and when there is no discernible relationship, there is no correlation.

Employing an XY diagram does not require any deep statistical knowledge. The input variable should be plotted on the X-axis, and the resulting output variable from the same operation, product, or time period should be plotted on the Y-axis. Excel, Minitab, and most other statistical software (JMP or R, for instance) are very effective at producing XY diagrams, or you can create the graph by hand on graphing paper with a ruler (adjust the scale so that most of the page is used). You do not need to know the correlation coefficient to see a strong relationship, as any strong correlation will usually be revealed visually.
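If you do want the coefficient behind the picture, Pearson's r can be computed directly from its definition; the `pearson_r` helper and the line-speed/defect data below are illustrative assumptions, not from the text:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

# Hypothetical data: line speed (X) vs. defects per hour (Y).
speed   = [10, 12, 14, 16, 18, 20]
defects = [2, 3, 3, 5, 6, 8]
r = pearson_r(speed, defects)
print(round(r, 3))  # a value close to +1 indicates strong positive correlation
```

An r near +1 or -1 confirms what a tight diagonal band of points already shows on the plot; an r near 0 matches a shapeless cloud.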

In your research concerning that new car, you might see a strong positive correlation between red cars and speeding tickets and a strong negative correlation between hatchbacks and repair costs, but neither can be directly attributed to causality. Correlation does not imply causation; it only gives you a statistical leg to stand on. Statistical significance is a different matter entirely.

Check Sheets

Check Sheet/Tally Sheet Example

Check Sheets (also known as Tally Sheets) are a tool used to summarize and visually represent a tally of event occurrences. This tool is most effective during an investigation when output events such as defects (or perhaps input events known/suspected to contribute to a defect) are being counted.

Check sheets can be filled out by hand using pen and paper but can also easily be kept on an electronic tablet. If the check sheet data are going to be moved to a computer, the data will be more reliable if they are entered electronically from the start in a format that allows transfer without error, as manual data transfer is a frequent point of failure.

Your check sheet should be uniquely designed to capture the specific data required to analyze the process, so any and all information pertaining to the process should be included. At a minimum, the elements to include are: the ID of the product and process; the time frame of the data collection period (per the data collection plan); the individual accountable for the product or process and the individual responsible for data collection; and traceability elements. In addition, the check sheet should include a clear data collection space to record event occurrences, as well as a space for comments.

The data are recorded during the time frame for all events designated to be monitored. A simple checkmark, dot, or X is recorded for each type of event when it is observed (one mark per occurrence). After the designated time frame is complete, the tally sheet itself gives a rough graphic representation of which events were most prevalent, but the data can also be fed into other tools such as Pareto charts or histograms (see next section).
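An electronic check sheet reduces to counting marks; here is a minimal sketch using only the Python standard library (the event names are hypothetical):

```python
from collections import Counter

# Each mark on the check sheet becomes one entry in the event log.
marks = ["scratch", "dent", "scratch", "stain", "scratch", "dent"]

tally = Counter(marks)
for event, count in tally.most_common():
    print(f"{event:8s} {'X' * count}")  # crude tally-sheet view, X per mark
```

Because the tally is already counted by category, it drops straight into a Pareto chart with no retyping, which is the transfer-error point the paragraph above warns about.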

While talking to all of your friends during your search for that new car, you might enter a mark into a check sheet on your phone anytime a bad report comes in about a dealership, and keep a tally of good reports as well. That will help you decide which dealership to choose based upon the customer service check sheet results.


Histogram

Histograms are normally used to visually represent the frequency of occurrences of events, with occurrences sorted into categories of defined ranges (called bins) across the horizontal axis. Histograms graphically display the distribution of event occurrences and are used to present "continuous data," that is, data that represent a measured quantity (such as miles on a car). The data are collected into specific range categories to present a histogram when it is critical to understand how a particular set of data is distributed. The data are recorded in each column or category as they occur, and the columns are not sorted by frequency.

To construct a histogram from a continuous variable, you first need to determine the amount of data to be used. If you were researching used cars, miles on the odometer would be your horizontal axis, split into bins, with the recommended number of bins equal to √n. Each bin contains the number of occurrences in the data set that fall within its range. The resulting histogram (if enough data were collected) would clearly display the bell curve of used-car odometer readings and provide a visual of what the norm is for a used car. This would help you avoid buying a car with too many miles on it and might help you spot a good deal.
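The √n binning rule can be sketched directly; the odometer readings below are a small hypothetical subset, and equal-width bins are one common choice among several:

```python
import math

# Hypothetical odometer readings (miles) from the used-car search.
miles = [9000, 11000, 12000, 20000, 30000, 45000, 55000, 65000,
         75000, 85000, 95000, 105000, 45000, 65000, 55000, 30000]

# Rule of thumb from the text: number of bins ~ sqrt(n).
n_bins = round(math.sqrt(len(miles)))  # 16 readings -> 4 bins
lo, hi = min(miles), max(miles)
width = (hi - lo) / n_bins

# Count how many readings fall into each equal-width bin.
counts = [0] * n_bins
for m in miles:
    i = min(int((m - lo) / width), n_bins - 1)  # clamp the max value into the last bin
    counts[i] += 1
print(counts)
```

The `counts` list is the histogram's column heights; plotting software (Excel, Minitab) does this same binning internally.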

When presenting the graphic, always provide a descriptive title, label each axis, provide a measurement scale for each axis, label the columns, and provide a data summary.

Histograms are usually considered large-sample tools and are not very reliable for small sample sizes. I would go with 50 data points for viability and would not go with anything under 30. Statistical software, even Excel, will help you with histogram creation.


Sample odometer data (miles): 9000, 11000, 11000, 12000, 12000, 11000, 12000, 12000, 13000, 13000, 14000, 14000, 15000, 15000, 20000, 20000, 20000, 20000, 30000, 30000, 30000, 30000, 30000, 30000, 40000, 40000, 40000, 40000, 40000, 40000, 45000, 45000, 45000, 45000, 45000, 45000, 45000, 55000, 55000, 55000, 55000, 55000, 55000, 55000, 55000, 65000, 65000, 65000, 65000, 65000, 65000, 65000, 65000, 65000, 65000, 75000, 75000, 75000, 75000, 75000, 75000, 75000, 75000, 85000, 85000, 85000, 85000, 85000, 85000, 95000, 95000, 95000, 95000, 95000, 95000, 95000, 95000, 95000, 105000, 105000, 105000, 105000

Control Chart / Run Chart

Run Chart Example (clearly, something happened between points 10 and 11)

If you add control limits to a run chart, it becomes a control chart. The control chart is also known as the Shewhart chart because it was developed by Walter A. Shewhart while working at Bell Labs. These charts are used to study how a process changes over time. Though I am a strong believer in the power of SPC (statistical process control), as I have witnessed it in action, I will address only the run chart aspect of the 7 basic tools and save control charting and general SPC for a later post.

Run Charts are very effective tools used to track and monitor a metric or parameter without regard to control limits or tolerances and are frequently used to help Quality Engineers become fully aware of how a metric/process is performing over time. This knowledge provides another signpost on the journey toward root cause analysis.

The basic method of creating a run chart is:

  1. Select a single parameter to monitor.

  2. Set a scale for the y-axis in a way that will distribute the data throughout the scale.

  3. Identify the time intervals for the graph (based upon how the data are collected).

  4. Collect and chart the data (Minitab, Excel, or any other SPC software can be used).

  5. Calculate the average and plot it on the run chart.
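The five steps reduce to very little code; here is a minimal text-only run chart sketch in Python (the daily defect counts are invented illustration data):

```python
# Hypothetical daily defect counts, in time order.
data = [4, 5, 3, 6, 4, 5, 12, 11, 13, 12]

# Step 5: compute the average, the centerline of the run chart.
mean = sum(data) / len(data)
print(f"mean = {mean:.1f}")

# Plot each point against time; '*' marks points above the centerline.
for t, v in enumerate(data, start=1):
    marker = "*" if v > mean else "o"
    print(f"t={t:2d} {'.' * v}{marker}")
```

Even this crude plot makes a sustained shift visible: a long run of points on one side of the centerline (as in the second half of this data) is the kind of signal that would send you looking for what changed.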


These tools, while helpful for most quality assurance teams, are primarily intended for quality control, or to be employed in concert as the precursor to root cause analysis when a problem has many potential causes. They help the team focus on the most likely causes, allowing it to move toward a final root cause analysis in which the team can come to a consensus and execute a full-blown corrective action. The tools are versatile and, because they are basic, can be employed in many different combinations.

Though you can never really know whether that car you purchase will break down sooner than you hope, or whether its color will trigger a ticket-writing spree by every officer who observes you, employing the tools in real life can help you focus on the more likely problem areas so you can move on to final analysis and a final decision. I would advise you not to get caught up in overanalyzing, or you will just go in circles without deciding (commonly referred to as analysis paralysis). If you are searching for a root cause in order to correct it, narrow down the potential causes to the most likely ones as indicated by data and team input. Then pursue the probable cause(s) with the most return potential to the customer first, and address the remaining issues in order of importance to the other stakeholders as determined by strategic alignment.


CSSBB Primer. (2014). West Terre Haute, IN: Quality Council of Indiana.

Kubiak, T. M., & Benbow, D. W. (2017). The Certified Six Sigma Black Belt Handbook (3rd ed.). Milwaukee: ASQ Quality Press.

Tague, N. R. (2005). The Quality Toolbox. Milwaukee: ASQ Quality Press.


