Pharma Quality Departments Seek an Improvement Vehicle

Global Pharmaceutical Companies Seek a Continuous Improvement Vehicle for the Quality Department

Quality within quality


According to best practice, it is universally agreed that quality should be built into the manufacturing process or product, rather than addressed as a separate entity. Strictly speaking, therefore, Quality Control does not add value to the final product from a level-zero customer perspective. However, once the compliance rules and legislation surrounding quality testing for drug manufacturing are considered, quality assurance becomes a fundamental stage gate from a level-one internal process perspective. This interpretation of a QC department’s role justifies the need for a modularized Quality Control department, an approach which is now the accepted norm among many pharmaceutical companies.

In an industry where product and labor costs are extremely high, firms have an inherent need to continuously challenge and raise performance levels to sustain their competitive advantage. Further challenges include the need to reduce lead times and, in turn, cut excessive inventory costs. In addition, because quality analysts need to be skilled scientists, the cost per resource is considerably higher than that of a benchmark operator, so the requirements placed on this resource must be continuously scrutinized. To complicate matters further, the high variability in testing per product and in product demand itself underscores the need for an evolving and adaptable platform that can deliver tangible and sustainable improvement initiatives.

In the 1950s, Deming established the Plan, Do, Check, Act (PDCA) process, an iterative methodology used as a platform for sustainable and continuous improvement. The principle was based on the scientific method of “hypothesis, experimentation and evaluation”, or “plan, do, check”. The additional step of “act” transformed what was a linear process into a continuous cycle. This longstanding methodology has proven successful at a micro level but offers limited scope for improvement. A fresh approach now explores the potential of PDCA at a macro scale.

In this new approach to PDCA, each cornerstone of the methodology evolves into a series of detailed phases. The ‘Plan’ phase becomes a diagnostic of S&OP, budget assessment, and resource and skills sizing, in which analytical capacity modeling becomes crucial. The ‘Do’ phase retains the function of executing business as usual, but calls for standard practices to be defined. ‘Checking’ must therefore be structured around a systematic and consistent metric system, which monitors and highlights both positive and negative deviations from standard work. Finally, the ‘Act’ phase is there not simply to reiterate, but becomes a decision-making activity, driven by the results of the check phase, in which innovative solutions for performance improvement can be implemented.

This new approach to PDCA was implemented with a particular focus on providing tools to help support each phase of the process. Key implementation areas included:

  • Meticulous planning – aligning resource requirements to anticipated demand
  • Adhering to the plan – handling the work in progress to achieve necessary throughput
  • Evaluating performance – reviewing success cases and noting harmful factors
  • Generating corrective improvement initiatives – next steps for improved performance

The execution and iteration of this process is essentially based on the four cornerstones of the longstanding PDCA continuous improvement process. Although the process steps of this methodology may appear simplistic or obvious, they in fact require specific definitions and careful administration. Implemented correctly, the PDCA process is a vehicle which can generate sustainable improvement initiatives. This article explores the details of the PDCA framework and the intricacies that ought to be considered when executing each cornerstone.

PLAN

Resource planning is the first step in ensuring that staffing costs are aligned to demand. When it comes to labor allocation, evaluating the total resources required and determining the optimum division of labor are given equal significance.

To fully understand the bottom line requirement for both staffing and equipment, two main parameters must be clarified.

1. Time standards: Measurement and documentation of the time required to complete all tasks, both core and non-core activities. The term ‘time standards’ emphasizes that the recorded timings must reflect normal operation, not slowed or idealized times. Planning to a standard practice mitigates the risk of developing an unfeasible plan and also ensures realistic delivery objectives.

2. Accurate bill of tests: Collection and updating of the bill of tests, or demand data, that the organization anticipates receiving. The Quality Control department in the pharmaceutical industry often experiences high variation in demand levels. Although foresight of demand is readily available, expedited demand is also a regular feature and changes to demand scheduling occur frequently. The key to capturing this information is the effectiveness of the S&OP process, which must rapidly comprehend and communicate significant changes to the upcoming workload.

All in all, a comprehensive collection of time standards and an accurate bill of tests provide enough information to create capacity modeling tools. These tools enable management to confidently and objectively evaluate the resources and headcount required to fulfill the anticipated demand.
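To illustrate how time standards and a bill of tests combine into a capacity model, the sketch below sizes analyst headcount for a planning period. The test names, volumes and available-hours figure are hypothetical and serve purely as an example of the calculation.

```python
# Hypothetical capacity-sizing sketch: headcount = required analyst hours
# divided by the hours one analyst can realistically offer in the period.

# Time standards: analyst hours per test, recorded under normal operation
time_standards_hours = {
    "assay": 4.0,
    "dissolution": 6.5,
    "microbial_limits": 3.0,
}

# Bill of tests: volumes the S&OP process anticipates for the coming quarter
bill_of_tests = {
    "assay": 320,
    "dissolution": 180,
    "microbial_limits": 240,
}

# Net available hours per analyst per quarter (after vacation, training, meetings)
available_hours_per_analyst = 400.0

required_hours = sum(
    time_standards_hours[test] * volume for test, volume in bill_of_tests.items()
)
required_headcount = required_hours / available_hours_per_analyst

print(f"Required analyst hours: {required_hours:.0f}")
print(f"Required headcount:     {required_headcount:.1f} FTE")
```

The same structure could, in principle, be extended to equipment sizing by substituting instrument hours for analyst hours.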

DO

Execution of the plan is essential to the conduct of the core business, and leveraging practical tools ensures that priorities within the work in progress (WIP) are addressed correctly. Optimized scheduling techniques make sure that available resources are utilized to their full potential. Given that labor is the biggest factor in quality testing costs, optimizing resource utilization is vital. If the planning phase is executed correctly, resource utilization will naturally be optimized, so the ‘Do’ phase can shift its focus onto throughput and onto maintaining the necessary levels of throughput across products.

Where throughput is too low, delivery performance requirements will not be met, and this manifests as unsatisfied customers. On the flip side, unnecessarily high throughput may also be detrimental, as it indicates surplus resources and therefore surplus cost, which could be better utilized elsewhere. In a steady-state environment, striking an adequate level of throughput is a relatively simple objective. Unfortunately, reality harbors many variables, such as vacation requests, competence shortfalls, and even human error. These inevitable obstacles complicate the task of converting anticipated demand into well-scheduled throughput. Despite these complexities, however, many of the risks to performance delivery may be mitigated if the following factors are taken into consideration:

1. A mathematical approach to short-term, adaptive scheduling: calculating the bottom-line frequency of each assay, accounting for the throughput required per product. For example, if a particular assay runs 5 samples at a time and we anticipate 20 samples arriving with a required turnaround of 14 days, then at a minimum 4 assay runs must be scheduled in that 2-week period (see the sketch after this list).

2. The competence flexibility of the analysts must also be accounted for. Adhering to principles such as keeping the most flexible analysts available introduces contingency, which may be freed up to tackle expedited work when needed.
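As referenced in point 1, the bottom-line frequency is a simple ceiling calculation. The sketch below reproduces the worked example; the batch size and demand figures are taken from that example and are otherwise illustrative.

```python
# Bottom-line assay frequency: minimum number of runs needed to cover the
# samples expected within the required turnaround window.
import math

def assays_required(expected_samples: int, samples_per_assay: int) -> int:
    """Minimum number of assay runs needed to process the expected samples."""
    return math.ceil(expected_samples / samples_per_assay)

# 20 samples expected with a 14-day turnaround, 5 samples per assay run
runs = assays_required(expected_samples=20, samples_per_assay=5)
print(f"Schedule at least {runs} assay runs in the 14-day window")  # -> 4
```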

Results have shown that, in laboratories struggling to meet on-time delivery requirements, turnaround times could be reduced by 30% without additional resources when scheduling tools, such as a rhythm wheel, are introduced.

CHECK

The check phase is arguably the most significant element, as it encourages results-driven action, which is fundamental to continuous improvement. It is therefore imperative that the methods used to check performance are easy to generate, remain consistent across the organization, and can be easily interpreted and made useful to employees at all levels of the firm.

Key performance indicators, or KPIs, which use commonly recognized measures such as lead time or productivity, provide a standard platform upon which a ‘checking’ report can be created. It is worth noting that such metrics can be used not only to assess historical performance (lagging indicators) but also to provide an indication of future performance (leading indicators); the foresight period of the metrics is then used to define how often the checking needs to occur.

As mentioned, careful design and specific definitions of the metrics are pivotal to the success of this type of reporting function; a misunderstood metric does not add value and may even divert attention away from the critical issue areas. The chosen performance measures must be aligned with the primary business objectives, so it is vital that these goals are clearly understood and universally agreed. Tefen’s approach is to replicate the hierarchy of the business objectives in the performance measures, thus assessing the effect of local optimizations at shop-floor level whilst retaining the higher-level measures which can be reported upwards.
Using this balanced scorecard system ensures that everyone is assessed using the same metrics. It also creates a ‘cause and effect’ path throughout the levels of the metrics, where effecting change at a lower level will result in a change in performance at the higher level. An added bonus is that a cultural shift is reinforced, so that the organization works towards a unified business goal.
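One way to picture this cascade is a simple weighted roll-up of shop-floor indicators into a department-level score. The metric names, weights and values below are hypothetical and are intended only to show the ‘cause and effect’ structure: improving a lower-level measure moves the higher-level score.

```python
# Illustrative metric cascade: shop-floor (leading) indicators rolled up into
# a single department-level performance index via assumed weights.

local_metrics = {          # fraction of target achieved (hypothetical values)
    "on_time_assay_starts": 0.92,
    "right_first_time":     0.88,
    "schedule_adherence":   0.95,
}

weights = {                # assumed relative contribution to the department goal
    "on_time_assay_starts": 0.40,
    "right_first_time":     0.35,
    "schedule_adherence":   0.25,
}

department_index = sum(local_metrics[k] * weights[k] for k in local_metrics)
print(f"Department-level performance index: {department_index:.2f}")
```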

Once the performance metrics are established, users must be trained to interpret the data for useful insight. No single metric can provide an entire performance synopsis, hence the need to explore the cost/benefit of achieving any performance change. Also bear in mind that individual metrics may conflict with one another. An example of this is seen in the figure below:

[Figure: Check phase metric]

ACT

Assuming that the preceding phases have been completed successfully, the necessary actions for improvement should present themselves. Quantitative metrics also allow the priority of each action to be assessed, so that urgency levels may be attached.

Once actions are generated, they should be reported visually; exposure itself becomes an effective tool for driving performance improvements. The notion of results-based actions must again be reinforced, so it is important to keep monitoring the actions regularly. Fortunately, the performance measures from the ‘checking’ phase double up for this function. A point to note is that individuals may be reluctant to accept the accountability attached to such visibility, so a culture of support must be emphasized.

As explained earlier, PDCA is an iterative process, to be used as a perpetual continuous improvement vehicle. Although the period of the PDCA loop may vary across the industry, subsequent iterations can be initiated as soon as the current loop is complete.

By initiating the process with a mathematical approach to planning, management can confidently determine the resource allocations required to fulfill the anticipated demand at a sufficient and competitive level. Executing the work within adaptive scheduling systems, accounting for expedited demand and resource flexibility, will best ensure that delivery performance targets are met. Evaluating performance to identify positive and negative cases highlights focus areas, so attention is administered where required. Finally, materializing the improvement initiatives and reporting the results of any actions will keep the organization moving in a positive and progressive direction.

The four cornerstones of the methodology may seem obvious or simplistic, and hence it is common to see insufficient attention paid to each process stage. However, if conducted properly and supported by the relevant tools, PDCA becomes a powerful catalyst for positive change.

Case Study

In 2013, a manufacturing site of a large pharmaceutical company was looking to challenge the operational performance of its Quality department and become the industry benchmark for Quality Control. Pressure to improve productivity and cycle time mounted: the site was anticipating a 50% increase in demand whilst also needing to reduce lead times by 8 days (a 30% reduction). There was a requirement to assess whether the extra demand could be absorbed by the currently available resources, given a shifted paradigm of improved performance.

The results included, by area, a 30% reduction in cycle time and a 50% improvement in productivity. Creating a holistic capacity overview provided a means to size the Quality testing operation on a quarterly basis. This allowed management to make proactive decisions on training requirements as well as organizational shifts towards the areas in which demand was highest. Finally, a significant cultural shift was achieved, fuelled by the newly implemented visual management devices, leading to a department focused on achieving a clearly specified, customer-focused goal.


By Bhavin Mistry, Consultant, Tefen UK