We've mentioned before that of the four perspectives we study when we analyze a business (product, process, system and behavior), one is significantly more complex than the others and consequently takes much more time to change. That one is behavior.
We tend to zero in on management behavior, as opposed to employee behavior, because we find that management behavior is critical to a well-run organization and, in turn, significantly influences employee behavior. Management behavior is, very simply, what managers do during the course of the day. In a broad sense, they actively manage others, train staff, do administration and some in-process work and, of course, fix problems as they come up. But until they actually spend some time observing and categorizing these activities, most managers don't have a very good sense of how their time is divvied up.
Therefore it's very helpful to have a proper analysis of how your time is spent -- and then to have a model that prescribes how that time might be spent more effectively. This takes the generally vague notion of "behavior" and gives it some analytical structure.
The tricky part about this is how behavior profiles (and models) change by industry and company, and within companies by their organizational hierarchy. How front line managers allocate their time is naturally significantly different from how corporate executives allocate theirs. But understanding the current and desired profile at each organizational level can be very helpful in making sure that your organization is aligned and optimizing its valuable management resources.
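The time analysis described above can be sketched in a few lines of code. This is a minimal, illustrative example: the activity categories, observed minutes and target percentages are all hypothetical, not a prescribed model, and any real profile would vary by industry and organizational level as noted above.

```python
from collections import defaultdict

# Hypothetical observation log for one manager's day: (activity, minutes).
# Categories and figures are illustrative only.
observations = [
    ("active management", 95),
    ("training", 30),
    ("administration", 120),
    ("in-process work", 60),
    ("problem fixing", 115),
]

# Illustrative target profile (fractions of the day) for a front line manager.
target_profile = {
    "active management": 0.40,
    "training": 0.10,
    "administration": 0.15,
    "in-process work": 0.10,
    "problem fixing": 0.25,
}

# Tally observed minutes by category.
totals = defaultdict(int)
for category, minutes in observations:
    totals[category] += minutes

# Compare the actual profile against the target profile.
day_total = sum(totals.values())
for category, minutes in totals.items():
    actual = minutes / day_total
    target = target_profile.get(category, 0.0)
    print(f"{category:18s} actual {actual:5.1%}  target {target:5.1%}  "
          f"gap {actual - target:+6.1%}")
```

Even a rough tally like this usually surprises managers: administrative time tends to crowd out active management far more than anyone's intuition suggests.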
One of the things that we find paralyzes some managers and prevents them from fixing operating problems is something one of our healthcare clients termed the "X-Factor." The X-Factor refers to problems that were initiated externally (i.e., outside the department) and were therefore difficult, if not impossible, to fix because local managers had no authority over them.
It's not surprising that external factors can, and do, routinely affect performance simply because organizations are made up of processes that run horizontally through vertically organized functions. And functions within organizations are often in conflict with one another. For example, a company’s procurement department wants to purchase supplies in large quantities so it can negotiate the best price, but the people managing inventories want it to buy in small lot sizes to keep inventory levels down. Organizations are a complex web of compromises and trade-offs.
The X-Factor is alive and well in most companies -- one department's performance routinely suffers because of another's actions. However, we find that it is rarely as significant as managers think. Often not enough effort is spent separating the myth from the reality. One of the first things we do when we encounter a problem deemed "unfixable" due to X-Factor conditions is simply to quantify the underlying causes. What we are trying to determine is how much of the problem is caused by external factors versus factors that are within the control of local management. Often we find that there is plenty of scope to incrementally improve a process quite independent of the X-Factor issues.
Then we also look more closely at the underlying external factors and further break them down into specific issues. Here we often find that there is more ability to influence external departments than local managers realize. Sometimes just educating external groups about the specific issues and quantifying the impact can influence what they do, when they do it and/or how often they do it -- whatever it is that’s actually creating the problem.
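The quantification step described above can be as simple as tagging each observed loss with its origin. This sketch uses entirely hypothetical causes and figures; the point is only to show how quickly an "unfixable" problem separates into an external share and a locally controllable share.

```python
# Hypothetical log of observed lost time: (cause, minutes, origin).
# Cause names and figures are illustrative only.
lost_time = [
    ("late material delivery", 140, "external"),
    ("missing work instructions", 60, "internal"),
    ("schedule changes from sales", 90, "external"),
    ("equipment setup delays", 110, "internal"),
    ("waiting for approvals", 45, "internal"),
]

# Split the total loss into X-Factor (external) and locally controllable parts.
external = sum(m for _, m, origin in lost_time if origin == "external")
internal = sum(m for _, m, origin in lost_time if origin == "internal")
total = external + internal

print(f"External (X-Factor): {external} min ({external / total:.0%})")
print(f"Internal (local):    {internal} min ({internal / total:.0%})")
```

In a breakdown like this, roughly half the loss sits within local control -- which is usually more than enough scope to start improving, whatever the X-Factor is doing.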
Over many years we've had a few less-than-flattering nicknames thrown our way. It's all part of the job when you are something of an intruder in an organization. The funniest was probably "Cushman bait." This was the nickname jokingly given to our consultants at a large aerospace manufacturing plant. "Cushman" was the brand name of the utility vehicle employees drove around the plant. It wasn't all that funny to our consultants at the time, but it's pretty funny as long as you don't take it literally!
This particular plant was unionized, although that’s not really relevant, as it's fairly common for workers (unionized or not) to be less than thrilled that we are spending time up close and personal, observing them do their work. What is almost always surprising is how much their opinion changes by the time we’ve finished doing our “observations.” Employees are often initially worried that our watching them work is some kind of "Big Brother" intrusion and that the outcome won't be beneficial to them. By the time we're finished, however, most employees agree that there is no better way to understand their daily issues than to spend a day in their shoes and see the world through their eyes, completely unfiltered. It's arguably the most honest way to really understand what they have to deal with on a daily basis.
To get past their initial resistance and to help ensure that the observation experience is positive, we follow a few helpful guidelines:
1. Clarify the purpose.
Take the time to properly inform employees of the purpose of watching work where and when it happens, which is to see the inherent operating problems that impede the process -- not to watch individuals. We never attach an individual's name to an observation: it's irrelevant to the purpose.
2. Be transparent.
Share what you are observing with the employee and keep them informed about what you plan to do with the information. Remind them that it is not an assessment of them personally in any way.
3. Protect your sources.
When you share observations with management, it's critical that you again stress the purpose of the observation (i.e., the process, not people). Sometimes there is a knee-jerk reaction to reprimand an employee when problems are observed. However, you can't let management do this or employees will simply shut down. Also, as we have discussed previously, most operating problems have more to do with the process and how it's managed than they do with individuals.
4. Follow up.
After completing a series of observations, you need to close the loop. It's helpful to employees if you let them know what was collectively learned -- and what resulting changes are being examined and tested.
In the previous Observation, we discussed the need to make internal performance improvement (PI) groups more accountable, and by doing so make the operating groups that use them more accountable as well. In this Observation, we are going to add a few more thoughts on some of the problems we have seen that can limit the effectiveness of PI groups.
1. Too "process-focused"
PI groups are often the offspring of some type of process-oriented improvement methodology. This is useful but can be limiting in terms of generating tangible financial results. Many process improvements require changes in the way that managers plan and control their resources and in how they interact with their staff. For example, changing a process results in changes to the time and scheduling parameters associated with that process. This, in turn, requires a change in how the process is scheduled, and how the new expectations are communicated and followed up on. To be more effective, PI groups need to spend more time understanding the management control system and the managers' actual behaviors.
2. Too "stretched"
In an effort to control what are often perceived as overhead costs, PI groups tend to be kept relatively small. This works fine if the projects they are focused on are also relatively small. However, projects are often quite large in scope. Larger-scope projects are attractive because the financial returns are more appealing but, by design, they require more resources than are often available. The net result is that the burden of implementing good ideas falls onto operating managers. Often it is the implementation of ideas, not the ideas themselves, that gets stalled in organizations. If PI groups effectively become "advisors" that generate reports, they aren't very useful to line managers. For changes to stick, they need to be owned by the people who have to make those changes and live with them. Getting people to own change takes a great deal of time, as they need first to understand why change is necessary and then to gain confidence that it will benefit them in some way.
3. Too "corporate"
Finally, as mentioned in the previous Observation, PI groups are often initially put together to enact a corporate vision or objective. Although there is nothing inherently wrong in being a "corporate" function -- and a strong argument can be made that it needs to be a corporate function -- this can cause resistance to change as strong as that typically reserved for external consultants.
These days we work with more and more companies that have their own internal performance improvement (PI) groups. Twenty years ago, these groups were more often quality or operational audit groups. Then they morphed into Six Sigma and its Lean variants. We are often asked to help either build these groups or work closely with them to help transfer some of our knowledge and methods. This may seem to create a bit of a conflict, as the more we build up internal teams the less a company needs us, but often this is the only way to make broad changes across an organization and for them to be sustainable. A large part of our business comes from referrals from satisfied clients, so helping them build internal capability is actually more self-serving than it appears.
The secret to making internal groups work is to make them accountable. Most PI groups are, somewhat paradoxically, a costly "free" service and would not survive long as a stand-alone business. Some groups believe they are accountable, but accountability is not achieved by producing reports that claim "X" amount of benefit over the next few years. Results need to be actually measured in the financials and built into operating budgets. To make PI groups truly effective, there should be a financial charge for their services. In turn, operating units need to be able to choose them or find alternatives (or do the job themselves). Although performance improvement targets can be mandated from above, the actual delivery and execution of those improvements have to be owned by operational managers. PI groups have many competitive advantages (lower cost and inside relationships, to name two) over other options. Creating a competitive environment forces them to focus on where they can be most effective in delivering services that create genuine value for operating groups.
Very few firms do much of this. PI groups rarely want this kind of real accountability because it puts their jobs at risk. Often these internal groups are initially set up to help implement a specific corporate objective (e.g., roll out Lean Six Sigma), so they are more like forced medicine for operational functions. Corporate executives are looking for a cohesive approach and do not want to fragment the execution or decision making by turning over control to operational groups. And lack of true accountability has a fairly predictable outcome. Over time, the size and cost of these groups tend to grow, and eventually a new CEO arrives and determines that the PI group is an overhead burden that can be shuttered relatively easily.
Before we work for a client we do what we call an "opportunity analysis," which, as it sounds, is designed to help us figure out if there is any opportunity to improve and where it might be. It’s usually conducted over two to three weeks. Clients are often curious how we go about doing it and afterward, in most cases, are intrigued by the amount we can learn about their organization in a very short space of time.
There are a few tricks to it that we can share, which may be useful for managers looking at their own functional areas. In any given functional area we do four basic things:
1. Figure out what drives the financial numbers.
The first thing we do is create what we call a "profit driver model," which starts with the financial numbers and determines which operating activities drive them. When you're trying to find opportunity, financial numbers on their own aren't overly helpful; you need to understand what creates them. For example, a revenue number in a retail store is the result of the number of orders multiplied by the average price per order. It's easier analytically to find potential opportunity by studying the types and patterns of orders and then, in turn, what drives those orders.
2. Look for gaps in the process.
Every process has constraints -- and that is where opportunity often resides. It's usually easiest to first determine the major product or service "streams," follow a specific order through the stream, map it out visually and then determine where breakdowns occur and which steps govern the pace of the process. If capacity is an issue, the constraints are what you need to study.
3. Find the disconnects in the management system.
To control a process, managers need to plan work, assign it, follow up on the progress, and then report on what happened. It's helpful to map this out with all the actual documents or tools that the manager uses. Look for breakdowns where one tool is not properly linked to another. Schedules are usually a good place to start. Check their timeliness, accuracy and usefulness.
4. Observe what managers actually do.
Spending a “day in the life” of a manager is often fascinating. Seeing first-hand how managers spend their time tells you a lot about the nature of an organization and its culture. It shows you how managers and employees interact -- and how well the management tools support the manager. These can all be very helpful insights into what existing behaviors help or hinder the effectiveness of the organization. Management behaviors are deeply ingrained and are often the hardest thing to change -- and the most commonly overlooked aspect of an improvement program.
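The profit driver model from step 1 lends itself to a simple worked example. The retail figures below are hypothetical, and attributing a revenue change to its drivers can be done several ways; this sketch holds one driver at the prior year's level and assigns the small joint effect to the price term.

```python
# Hypothetical retail figures: revenue = orders * average price per order.
last_year = {"orders": 12000, "avg_price": 42.50}
this_year = {"orders": 12600, "avg_price": 41.80}

def revenue(period):
    """Revenue from its two operating drivers."""
    return period["orders"] * period["avg_price"]

delta = revenue(this_year) - revenue(last_year)

# Decompose the change: volume effect at last year's price,
# price effect at this year's volume (joint effect lands in the price term).
volume_effect = (this_year["orders"] - last_year["orders"]) * last_year["avg_price"]
price_effect = (this_year["avg_price"] - last_year["avg_price"]) * this_year["orders"]

print(f"Revenue change:      {delta:+,.0f}")
print(f"  from order volume: {volume_effect:+,.0f}")
print(f"  from price/mix:    {price_effect:+,.0f}")
```

Here revenue is up overall, but the decomposition shows more orders masking a falling average price -- exactly the kind of insight a raw revenue number hides.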
The "hockey-stick forecast" is a fairly common concept for people who deal regularly with future plans of one type or another. This is the trend graph that shows a general downward trend in performance in the past few periods' actual results, and then a sudden and dramatic upturn in forecast results. It looks like a hockey stick. The business world is full of strategic plans littered with charts like these.
When we review a business to understand where it has been and where it's headed, one of the things we pay close attention to is whether there are any "hockey-stick forecasts" in its budgets. They are often buried within the numbers, so some digging is usually required.
We find "hockey-stick forecasts” where performance in an area is expected to improve dramatically year over year. The fact that performance is expected to improve is not as much the issue as trying to understand the underlying logic as to why it will improve. What you have to decipher is if the improvement is related to changes in the product or service mix, margins or productivity.
Sales forecasts are a common "hockey stick." If sales are expected to increase by any amount greater than what has been demonstrated over the past few years, you need to understand the underlying drivers of that improvement. Sales forecasts are often driven by the optimism of salespeople and their customers, but implicit in sales growth is a myriad of sub-drivers and corresponding activities. Here are just a few questions that can be helpful when trying to understand where sales growth is expected to come from:
- Will the growth come from existing or new customers? If new, what marketing or sales activities will increase or improve?
- Is the growth from existing or new products or services? Existing or new markets served? Larger order sizes?
- Is the market growing or is business being taken away from competitors? How will this impact pricing and margins?
Hockey-stick forecasts can be very useful analytical flags to help you better understand underlying operating assumptions. You can also check to see if management has the necessary tools to measure and track those assumptions. When companies struggle to hit their budgets, you'll often find a few gaps between what was planned and what was actually managed.
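The basic flag described above -- forecast growth exceeding anything demonstrated in recent actuals -- is easy to sketch. The sales history below is hypothetical, and a real review would look at the underlying drivers, not just the trend break itself.

```python
# Illustrative sales history (recent actuals, declining) and the budget.
actuals = [10.0, 9.6, 9.3, 9.1]
forecast = 11.5

# Demonstrated period-over-period growth rates from the actuals.
hist_growth = [(b - a) / a for a, b in zip(actuals, actuals[1:])]
best_demonstrated = max(hist_growth)

# Growth implied by the forecast versus the last actual period.
forecast_growth = (forecast - actuals[-1]) / actuals[-1]

# Flag the hockey stick: a forecast that breaks sharply from demonstrated trend.
if forecast_growth > best_demonstrated:
    print(f"Hockey-stick flag: forecast growth {forecast_growth:.1%} vs. "
          f"best demonstrated {best_demonstrated:.1%}")
```

A flag like this isn't a verdict that the budget is wrong; it simply marks where the underlying operating assumptions deserve the questions listed above.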
There's quite a lot of internal debate about where the catchy expression, "in the day, for the day," originated. Some claim it was a past client; others say it was one of our own project directors; still others claim it is a common expression that's been around for a while. Whatever its origin, it's becoming a very popular way to describe how front line managers should think and act.
It is a useful analytical device that we use to help find opportunity to improve. “In the day, for the day" refers to information about what is happening on the current day -- not yesterday or last week. The closer to real time that you give managers feedback and information on what’s happening, the quicker they can help influence the performance of their staff. This seems fairly obvious, but it's not common in many industries. Managers often get performance feedback sometime after an event has occurred, which naturally limits its usefulness. Information that arrives “in the day, for the day” is therefore helpful: managers can address issues while they are relevant and impacting work flow. There are many benefits, e.g., identifying quality issues before too many products or services have been delivered. The other subtle benefit of tools that provide information “in the day, for the day" is that for them to be effective, managers must engage with their employees on a regular basis to help correct off-schedule conditions. This has long-term benefits for both managers and employees and, of course, the productivity of the company.
To make an assessment in most functional areas, you simply compile all the reports that a front line manager reviews and determine how many actually provide information “in the day, for the day.” It's often surprising how rare these reports are. If you don't have them, you probably need them. If you're like us, you will start overusing the expression "in the day, for the day" to the point that it shifts from being catchy to irritating -- but it's still useful.
We spend countless hours in many different industry sectors observing and analyzing how work gets processed. We watch employees, managers and the tools they use in order to determine how much of their day is truly productive. Unless we are observing a highly automated process, there is a good chance a typical observation will reveal that somewhere between 35% and 50% of the workday is not truly "productive." This seems remarkable at face value. It means that roughly three to four hours of an average person's workday is not productive. As we've discussed before, this doesn't mean that a person isn't working; it just means that what they're doing may not be adding any real value. When we share these observations with our clients, the fact that there are some operating problems buried within most processes never surprises anyone, but the magnitude of that loss almost always does.
What’s also surprising is that this hasn't changed significantly over the last 20 years, despite all the advancements in technology and management training.
So, how can all this waste still exist? The simple answer is that the magnitude is often not obvious or measured by anyone, so it goes largely unchallenged and unmanaged. Work standards, when they exist, are often based on last year's actuals, which only serves to hide last year's problems within the standard. When attempts are made to "zero base" standards, they are often softened by too many buffers so they better match the current process.
Sometimes there is also a psychological game at play; no one likes to think they regularly operate at 60% of a standard. In truth, it really doesn't matter, as the standard is only designed to help a manager identify performance gaps, but in practice managers often find it too demoralizing and believe it demotivates their employees. We find many environments where managers prefer to better their own standards ("We were 105% of standard last month!"), but this is not overly helpful for continuous improvement.
The most effectively managed environments we see go to great lengths to measure and identify all the problems they can. They don't see this as an indictment; they see it as a baseline. And the only thing that matters is whether or not the problems are lessening over time.
Editor's Note: In the previous Observation, we discussed why it's hard to eliminate backlogs -- a task that often comes up if you are trying to improve the productivity of an area that uses some kind of backlog-management system (e.g., engineering work orders). We suggested that one reason it's tough is that backlogs are a form of security for many people. Sean Brown, president of Egan Visual (manufacturer of visual communication systems and business furniture) and former Carpedia partner, offers some additional thoughts on the topic:
A key to overcoming this objection is communicating that a large backlog is not the job security people think it is. Certainly there are operational efficiencies you might be able to leverage when there is a backlog, but there are also hidden yet very real risks and costs lurking. Rather than viewing a backlog as a "healthy amount of work ahead of us," employees should recognize that it often represents two things: customer dissatisfaction and a competitive threat.
The larger the backlog, the longer your lead times are (from order to cash). Unless you are in an environment where customers nail down their orders months before they need something, this backlog reflects an unfilled need in their organization. This means they are dissatisfied on some level. Before they order, they own the dissatisfaction. Now that the order is in your backlog, your name is on it. Enter the competitive threat: many companies have won share and dominated industries by competing on lead times: "Order from us and get your stuff right away."
Identifying and communicating the source of the backlog and its burning platform will help build understanding and momentum in backlog elimination. This will in turn help generate some (one-time) cash early in the improvement initiative, directly contributing to the project's ROI.