Jim Collins’ belief that "good is the enemy of great" is a brilliant insight, and it applies to pretty much everything. Essentially, he means that if people get in the habit of accepting things as "good enough," it inevitably leads to marginalization and mediocrity. He’s observed that a common trait of great companies is that they hold themselves to very high standards. We believe he is right -- and that a key distinction between the good and the great companies we work for is the latter’s disciplined insistence on excellence. But we have also come to recognize a practical reality: you simply can't be great at everything. There are not enough hours in the day, and companies would grind to a halt if they managed entirely by this edict. So how do you apply Collins’ mantra?
The really good companies we work for are skilled at choosing where they should focus and where they shouldn't. They consciously select what they need to be great at, and then aggressively focus on those items. That's a lot harder than it sounds. It requires an obsession with simplicity, which most companies don't have. Most companies become more complicated over time, and if they aren't very careful the complexity feeds on itself (we argue internally that we repeatedly fall into this same trap). Complexity spreads through organizations like a cancer. Each new management level adds information filters; each new manager adds reports and distribution lists and meetings; each new market or product variation adds requirements throughout the supply chain; and on it goes. Success may seem to breed success, but often it just creates an appetite for more complexity.
We gain some insight into how well a company is focused by looking at C-level reporting and having executives explain to us what is most important, what issues they commonly drill down on, and how they manage others. At the mid-level, we also look at management reporting and meetings. Meetings are a surprisingly rich source of insight on complexity. Frequency of meetings, stated purpose, who attends and what actually gets accomplished can speak volumes about the culture of an organization. Complexity is insidious. To keep it in check requires leaders to be relentless in making sure the organization stays focused on where it needs to be great.
There’s a Latin expression that will resonate with anyone who has struggled to implement change in an organization: "Cui bono?" Commonly attributed to the Roman orator Cicero, it means "To whose benefit?" In a legal context, it implies that the guilty party can usually be found among those who have something to gain from the crime. The adage speaks to understanding people's motivations, which is obviously very important when trying to implement change in an organization.
In change-management vernacular, the expression that managers are more familiar with is "What's in it for me?" (or WIIFM, as it’s usually written on management-training posters). "What's in it for me?" implies that people naturally have their own best interests in mind, so when you’re trying to sell them on the virtues of changing what they do, you need to articulate why the change will be better for them. But it's a mistake to stop there.
"What's in it for me?" is similar to, but not the same as, "To whose benefit?" -- and the distinction is important. “What's in it for me?” is asked by individuals trying to determine whether the changes will make them personally better or worse off. "To whose benefit?" stems from individuals trying to determine who the changes are really going to benefit. Many people become cynical if a change is oversold as something designed to make them personally better off. It insults their intelligence if that claim isn't at least balanced with how the change will also help the company, and perhaps leave them worse off in some ways. Many changes are not "win-win." Often, something must be given up in exchange for something else that is hopefully better. So the key is to show why the net balance is better, acknowledging the losses as well as trumpeting the gains. It's more honest and, in our experience, it's a more successful approach for most organizations.
We've mentioned before that of the four perspectives we study when we analyze a business (product, process, system and behavior), one is significantly more complex than the others and consequently takes much more time to change. That one is behavior.
We tend to zero in on management behavior, as opposed to employee behavior, because we find that management behavior is critical to a well-run organization and, in turn, significantly influences employee behavior. Management behavior is, very simply, what managers do during the course of the day. In a broad sense, they actively manage others, train staff, do administration and some in-process work and, of course, fix problems as they come up. But until they actually spend some time observing and categorizing these activities, most managers don't have a very good sense of how their time is divvied up.
Therefore it's very helpful to have a proper analysis of how your time is spent -- and then to have a model that prescribes how that time might be spent more effectively. This takes the generally vague notion of "behavior" and gives it some analytical structure.
The tricky part about this is how behavior profiles (and models) change by industry and company, and within companies by their organizational hierarchy. How front line managers allocate their time is naturally significantly different from how corporate executives allocate theirs. But understanding the current and desired profile at each organizational level can be very helpful in making sure that your organization is aligned and optimizing its valuable management resources.
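As a hypothetical illustration of giving "behavior" that analytical structure (the activity categories, tallies and desired profile below are invented, not drawn from any client), a day-in-the-life observation log can be tallied and compared against a prescribed profile:

```python
from collections import Counter

# Hypothetical day-in-the-life log for one front line manager.
# Each entry is one observed half-hour block, tagged with a broad
# activity category. Categories and the "desired" profile are
# invented for illustration only.
observed = [
    "admin", "fixing problems", "admin", "meetings",
    "fixing problems", "admin", "active supervision",
    "fixing problems", "meetings", "training",
]

current = Counter(observed)          # blocks actually spent per activity
desired = {                          # blocks a model might prescribe
    "active supervision": 4,
    "training": 2,
    "admin": 2,
    "meetings": 1,
    "fixing problems": 1,
}

# The gap between observed and desired shows where time is leaking.
for activity, target in desired.items():
    actual = current.get(activity, 0)
    print(f"{activity}: observed {actual}, desired {target}, gap {target - actual:+d}")
```

Even a rough tally like this usually surprises managers: problem-fixing and administration crowd out the active supervision and training the model calls for.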
One of the things that we find paralyzes some managers and prevents them from fixing operating problems is something one of our healthcare clients termed the “X-Factor.” The X-Factor refers to problems that were initiated externally (i.e., outside the department) and were therefore difficult, if not impossible, to fix because local managers had no authority over them.
It's not surprising that external factors can, and do, routinely affect performance simply because organizations are made up of processes that run horizontally through vertically organized functions. And functions within organizations are often in conflict with one another. For example, a company’s procurement department wants to purchase supplies in large quantities so it can negotiate the best price, but the people managing inventories want it to buy in small lot sizes to keep inventory levels down. Organizations are a complex web of compromises and trade-offs.
The X-Factor is alive and well in most companies -- and one department's actions do impact another's performance. However, we find that it is rarely as significant as managers think. Often not enough effort is spent separating the myth from the reality. One of the first things we do when we encounter a problem deemed “unfixable” due to X-Factor conditions is simply quantify the source causes. What we are trying to determine is how much of the problem is caused by external factors versus factors that are within the control of local management. Often we will find that there is plenty of scope to incrementally improve a process quite independently of the X-Factor issues.
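That first quantification step can be sketched in a few lines. The incident log and its external/internal tags below are invented for illustration; in practice the tagging itself is where the myth gets separated from the reality:

```python
from collections import Counter

# Hypothetical log of delays in one department, each tagged as
# caused "external" (outside the department) or "internal"
# (within local management's control). All entries are invented.
delays = [
    ("late material from supplier", "external"),
    ("machine not set up on time", "internal"),
    ("missing work instructions", "internal"),
    ("engineering change mid-run", "external"),
    ("operator waiting for an assignment", "internal"),
]

by_source = Counter(source for _, source in delays)
internal_share = by_source["internal"] / len(delays)
print(f"{internal_share:.0%} of delays are within local control")
```

In a tally like this, the locally controllable share is often larger than the "unfixable" label suggested.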
Then we also look more closely at the underlying external factors and further break them down into specific issues. Here we often find that there is more ability to influence external departments than local managers realize. Sometimes just educating external groups about the specific issues and quantifying the impact can influence what they do, when they do it and/or how often they do it -- whatever it is that’s actually creating the problem.
Over many years we've had a few less than flattering nicknames thrown our way. It’s all part of the job when you are something of an intruder in an organization. The funniest was probably "Cushman bait." This was the nickname jokingly given to our consultants at a large aerospace manufacturing plant. "Cushman" was the brand name of the utility vehicle employees drove around the plant. It wasn’t all that funny to our consultants at the time, but it's pretty funny if you don't take it literally!
This particular plant was unionized, although that’s not really relevant, as it's fairly common for workers (unionized or not) to be less than thrilled that we are spending time up close and personal, observing them do their work. What is almost always surprising is how much their opinion changes by the time we’ve finished doing our “observations.” Employees are often initially worried that our watching them work is some kind of "Big Brother" intrusion and that the outcome won't be beneficial to them. By the time we're finished, however, most employees agree that there is no better way to understand their daily issues than to spend a day in their shoes and see the world through their eyes, completely unfiltered. It's arguably the most honest way to really understand what they have to deal with on a daily basis.
To get past their initial resistance and to help ensure that the observation experience is positive, we follow a few helpful guidelines:
1. Clarify the purpose.
Take the time to properly inform employees of the purpose of watching work where and when it happens, which is to see the inherent operating problems that impede the process -- not to watch individuals. We never attach an individual's name to an observation: it's irrelevant to the purpose.
2. Be transparent.
Share what you are observing with the employee and keep them informed about what you plan to do with the information. Remind them that it is not an assessment of them personally in any way.
3. Protect your sources.
When you share observations with management, it's critical that you again stress the purpose of the observation (i.e., the process, not people). Sometimes there is a knee-jerk reaction to reprimand an employee when problems are observed. However, you can't let management do this or employees will simply shut down. Also, as we have discussed previously, most operating problems have more to do with the process and how it's managed than they do with individuals.
4. Follow up.
After completing a series of observations, you need to close the loop. It's helpful to employees if you let them know what was collectively learned -- and what resulting changes are being examined and tested.
In the previous Observation, we discussed the need to make internal performance improvement (PI) groups more accountable, and by doing so make the operating groups that use them more accountable as well. In this Observation, we are going to add a few more thoughts on some of the problems we have seen that can limit the effectiveness of PI groups.
1. Too "process-focused"
PI groups are often the offspring of some type of process-oriented improvement methodology. This is useful but can be limiting in terms of generating tangible financial results. Many process improvements require changes in the way that managers plan and control their resources and in how they interact with their staff. For example, changing a process results in changes to the time and scheduling parameters associated with that process. This, in turn, requires a change in how the process is scheduled, and how the new expectations are communicated and followed up on. To be more effective, PI groups need to spend more time understanding the management control system and the managers' actual behaviors.
2. Too "stretched"
In an effort to control what is often perceived as overhead cost, PI groups tend to be kept relatively small. This works fine if the projects they are focused on are also relatively small. However, projects are often quite large in scope. Larger-scope projects are attractive because the financial returns are more appealing but, by design, they require more resources than are often available. The net result is that the burden of implementing good ideas falls onto operating managers. Often it is the implementation of ideas, not the ideas themselves, that gets stalled in organizations. If PI groups effectively become "advisors" who generate reports, they aren't very useful to line managers. For changes to stick, they need to be owned by the people who have to make those changes and live with them. Getting people to own change takes a great deal of time: they first need to understand why change is necessary and then gain confidence that it will benefit them in some way.
3. Too "corporate"
Finally, as mentioned in the previous Observation, PI groups are often initially put together to enact a corporate vision or objective. Although there is nothing inherently wrong in being a "corporate" function -- and a strong argument can be made that it needs to be a corporate function -- this can cause resistance to change as strong as that typically reserved for external consultants.
These days we work with more and more companies that have their own internal performance improvement (PI) groups. Twenty years ago, these groups were more often quality or operational audit groups. Then they morphed into Six Sigma and its Lean variants. We are often asked to help either build these groups or work closely with them to help transfer some of our knowledge and methods. This may seem to create a bit of a conflict, as the more we build up internal teams the less a company needs us, but often this is the only way to make broad changes across an organization and for them to be sustainable. A large part of our business comes from referrals from satisfied clients, so helping them build internal capability is actually more self-serving than it appears.
The secret to making internal groups work is to make them accountable. Most PI groups are, somewhat paradoxically, a costly “free” service and would not survive long as a stand-alone business. Some groups believe they are accountable, but accountability is not achieved by producing reports that claim "X" amount of benefit over the next few years. Results need to be actually measured in the financials and built into operating budgets. To make PI groups truly effective, there should be a financial charge for their services -- and, because they are paying, operating units need to be able to choose them or find alternatives (or do the job themselves). Although performance improvement targets can be mandated from above, the actual delivery and execution of those improvements have to be owned by operational managers. PI groups have many competitive advantages (lower cost and inside relationships, to name two) over other options. Creating a competitive environment forces them to focus on where they can be most effective in delivering services that create genuine value for operating groups.
Very few firms do much of this. PI groups rarely want this kind of real accountability because it puts their jobs at risk. Often these internal groups are initially set up to help implement a specific corporate objective (e.g., roll out Lean Six Sigma), so they are more like forced medicine for operational functions. Corporate executives are looking for a cohesive approach and do not want to fragment the execution or decision making by turning over control to operational groups. And lack of true accountability has a fairly predictable outcome. Over time, the size and cost of these groups tend to grow, and eventually a new CEO arrives and determines that the PI group is an overhead burden that can be shuttered relatively easily.
Before we work for a client we do what we call an "opportunity analysis," which, as it sounds, is designed to help us figure out if there is any opportunity to improve and where it might be. It’s usually conducted over two to three weeks. Clients are often curious how we go about doing it and afterward, in most cases, are intrigued by the amount we can learn about their organization in a very short space of time.
There are a few tricks to it that we can share, which may be useful for managers looking at their own functional areas. In any given functional area we do four basic things:
1. Figure out what drives the financial numbers.
The first thing we do is create what we call a "profit driver model," which starts with a financial number and determines what operating activities drive it. When you're trying to find opportunity, financial numbers on their own aren't overly helpful; you need to understand what creates them. For example, a revenue number in a retail store is the result of the number of orders multiplied by the average price per order. It's easier analytically to find potential opportunity by studying the types and patterns of orders and then, in turn, what drives those orders.
2. Look for gaps in the process.
Every process has constraints -- and that is where opportunity often resides. It's usually easiest to first determine the major product or service "streams," follow a specific order through the stream, map it out visually and then determine where breakdowns occur and which steps govern the pace of the process. If capacity is an issue, the constraints are what you need to study.
3. Find the disconnects in the management system.
To control a process, managers need to plan work, assign it, follow up on the progress, and then report on what happened. It's helpful to map this out with all the actual documents or tools that the manager uses. Look for breakdowns where one tool is not properly linked to another. Schedules are usually a good place to start. Check their timeliness, accuracy and usefulness.
4. Observe what managers actually do.
Spending a “day in the life” of a manager is often fascinating. Seeing first-hand how managers spend their time tells you a lot about the nature of an organization and its culture. It shows you how managers and employees interact -- and how well the management tools support the manager. These can all be very helpful insights into what existing behaviors help or hinder the effectiveness of the organization. Management behaviors are deeply ingrained and are often the hardest thing to change -- and the most commonly overlooked aspect of an improvement program.
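The profit driver model in step 1 can be sketched in a few lines. The retail figures below are invented for illustration; the point is that decomposing the revenue number into its drivers shows where the change actually came from:

```python
# Sketch of a simple "profit driver model" for the retail example in
# step 1: revenue = number of orders x average price per order.
# All figures are invented for illustration.

def revenue(orders: int, avg_price: float) -> float:
    return orders * avg_price

last_year = revenue(orders=10_000, avg_price=42.50)   # 425000.0
this_year = revenue(orders=11_200, avg_price=41.00)   # 459200.0

# Decompose the year-over-year change into its drivers:
# more orders helped, a lower average price per order hurt.
order_effect = (11_200 - 10_000) * 42.50              # +51000.0
price_effect = 11_200 * (41.00 - 42.50)               # -16800.0
print(this_year - last_year)                          # 34200.0
print(order_effect + price_effect)                    # 34200.0
```

The two prints match because the decomposition accounts for the full change; the next analytical step would be asking what drove the extra orders and the lower average price.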
The "hockey-stick forecast" is a fairly common concept for people who deal regularly with future plans of one type or another. This is the trend graph that shows a general downward trend in performance in the past few periods' actual results, and then a sudden and dramatic upturn in forecast results. It looks like a hockey stick. The business world is full of strategic plans littered with charts like these.
When we review a business to understand where it has been and where it is headed, one of the things we pay close attention to is whether there are any "hockey-stick forecasts" in its budgets. They are often buried within the numbers, so we often need to do some digging.
We find "hockey-stick forecasts” where performance in an area is expected to improve dramatically year over year. The fact that performance is expected to improve is not as much the issue as trying to understand the underlying logic as to why it will improve. What you have to decipher is if the improvement is related to changes in the product or service mix, margins or productivity.
Sales forecasts are a common "hockey-stick". If sales are expected to increase by any amount greater than what has been demonstrated over the past few years, you need to understand the underlying drivers of that improvement. Sales forecasts are often driven by the optimism of salespeople and their customers, but implicit in sales growth is a myriad of sub-drivers and corresponding activities. Here are just a few questions that can be helpful when trying to understand where sales growth is expected to come from:
- Will the growth come from existing or new customers? If new, what marketing or sales activities will increase or improve?
- Is the growth from existing or new products or services? Existing or new markets served? Larger order sizes?
- Is the market growing or is business being taken away from competitors? How will this impact pricing and margins?
Hockey-stick forecasts can be very useful analytical flags to help you better understand underlying operating assumptions. You can also check to see if management has the necessary tools to measure and track those assumptions. When companies struggle to hit their budgets, you'll often find a few gaps between what was planned and what was actually managed.
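One simple screen for such forecasts (a sketch with invented figures; real budgets need the line-by-line digging described above) is to compare the forecast growth rate against the growth actually demonstrated in recent periods:

```python
# Sketch of a "hockey-stick forecast" flag: a budget line is flagged
# when its forecast growth rate exceeds the best growth rate shown
# in recent actuals. All figures are invented for illustration.

def growth_rates(series):
    """Period-over-period growth rates for a series of actuals."""
    return [(b - a) / a for a, b in zip(series, series[1:])]

def is_hockey_stick(actuals, forecast):
    """True if forecast growth beats anything demonstrated historically."""
    forecast_growth = (forecast - actuals[-1]) / actuals[-1]
    return forecast_growth > max(growth_rates(actuals))

sales_actuals = [100.0, 98.0, 97.0, 99.0]   # drifting down, slight recovery
sales_forecast = 115.0                       # sudden, dramatic upturn
print(is_hockey_stick(sales_actuals, sales_forecast))  # True
```

A flag like this doesn't say the forecast is wrong -- only that the underlying drivers of the jump need to be explained, measured and managed.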
There's quite a lot of internal debate about where the catchy expression, "in the day, for the day," originated. Some claim it was a past client; others say it was one of our own project directors; still others claim it is a common expression that’s been around for a while. Whatever its origin, it’s becoming a very popular way to describe how front line managers should think and act.
It is a useful analytical device that we use to help find opportunities to improve. “In the day, for the day" refers to information about what is happening on the current day -- not yesterday or last week. The closer to real time the feedback and information managers get on what’s happening, the quicker they can influence the performance of their staff. This seems fairly obvious, but it's not common in many industries. Managers often get performance feedback some time after an event has occurred, which naturally limits its usefulness. Information that arrives “in the day, for the day” is therefore helpful: managers can address issues while they are still relevant and affecting workflow. There are many benefits -- identifying quality issues before too many products or services have been delivered, for example. The other, subtler benefit of tools that provide information “in the day, for the day" is that, for them to be effective, managers must engage with their employees on a regular basis to help correct off-schedule conditions. This has long-term benefits for both managers and employees and, of course, for the productivity of the company.
To make an assessment in most functional areas, you simply compile all the reports that a front line manager reviews and determine how many actually provide information “in the day, for the day.” It's often surprising how rare these reports are. If you don't have them, you probably need them. If you're like us, you will start overusing the expression "in the day, for the day" to the point that it shifts from being catchy to irritating -- but it's still useful.
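The assessment above amounts to a simple inventory check. As a sketch (the report names and latencies below are invented for illustration), tag each report a front line manager reviews with its information latency and count the same-day ones:

```python
# Hypothetical inventory of the reports one front line manager
# reviews, each with its information latency in days (0 = same-day).
# All names and latencies are invented for illustration.
reports = [
    ("hourly line status board", 0),
    ("daily output vs. schedule", 0),
    ("weekly overtime summary", 7),
    ("monthly scrap report", 30),
]

in_the_day = [name for name, latency in reports if latency == 0]
print(f"{len(in_the_day)} of {len(reports)} reports are 'in the day, for the day'")
```

In most functional areas we assess, the same-day count is surprisingly small relative to the total.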