Mark Spearman and James Choo explain the science behind optimal buffering strategy, revealing why OEE metrics can be counterproductive and demonstrating that capacity buffer and variability reduction are mathematically equivalent paths to faster, more predictable project delivery.
As project delivery faces unprecedented pressure to go faster while absorbing constantly evolving requirements, understanding the science that governs how work flows through production systems becomes critical. This session addresses a fundamental question practitioners encounter: how is it possible for productivity metrics to rise while projects fall further behind? The answer lies in understanding the interplay between capacity, inventory, and variability buffers.
For data centers where technology evolves constantly and inventory risks obsolescence, relying on inventory buffers for high productivity is the wrong answer. Creating appropriate capacity buffers requires sufficient upfront planning to understand what capacity is needed and what’s available, making outcomes predictable rather than uncertain.
[00:00:00] Gary Fischer, PE: Now we’re gonna switch gears a little bit. Operations science, like any other science, isn’t static. So from here forward this afternoon, we’re really geared toward updating you on leading-edge research, the advancement of the science, and where PPM is going. To kick off that conversation, we have two folks, Mark Spearman and James Choo.
[00:00:22] Gary Fischer, PE: So if you guys turn on your cameras and get ready to share. There’s James. Okay. So Mark is the founder and author of Factory Physics. He also serves as Technical Director of the Project Production Institute and Director of Technical Solutions for Strategic Project Solutions. Before joining SPS, Mark was an associate professor of Industrial Engineering and Management Sciences at Northwestern University, a professor of Industrial
[00:00:49] Gary Fischer, PE: and Systems Engineering at Georgia Tech, and Department Head of Industrial and Systems Engineering at Texas A&M. So obviously well qualified to talk about the science. Along with that, we have James Choo, who is the Chief Technical Officer for Strategic Project Solutions, and another guy that you’ve seen many times who needs little introduction.
[00:01:10] Gary Fischer, PE: He serves on the PPI Technical Committee and has extensive global experience in the modeling, optimization, and control of project production systems. So I think, James, you’re gonna kick us off? That’s correct.
[00:01:21] H.J. James Choo, PhD: Let me kick us off and then I’ll hand it over to Dr. Spearman
[00:01:25] Gary Fischer, PE: to
[00:01:26] H.J. James Choo, PhD: talk about some of the latest developments.
[00:01:28] Gary Fischer, PE: Alright. All right.
[00:01:34] Gary Fischer, PE: Alright. We can see your screen.
[00:01:41] H.J. James Choo, PhD: The discussion may actually get a little bit technical, but we want to make sure it’s understood how important this discussion is, because right now everybody, including Todd, Roberto, and the panel, has been talking about how to go faster while absorbing ever-changing requirements.
[00:01:59] H.J. James Choo, PhD: Especially in the sectors that we’re talking about, the energy sector as well as the digital sector, the design is constantly evolving. We’re even working on projects where the foundations are being completed while they’re still trying to determine what’s going to be built on top of that foundation, right?
[00:02:17] H.J. James Choo, PhD: So that discussion involves not only what you’re making, the product design, and how you’re making it, the process design, which Ed did a brilliant job of explaining. Because we also have the very limited capacity that everyone’s been talking about, to deliver this we need to understand how we’re gonna get these products
[00:02:41] H.J. James Choo, PhD: through these processes with a limited amount of capacity and inventory while absorbing variability. The science that governs how that mixture of the last three, capacity, inventory, and variability, behaves is operations science. And this is something that we hear quite often in the industry when that relationship is not correctly determined:
[00:03:10] H.J. James Choo, PhD: you get questions like, how is it possible for productivity metrics to rise while projects are falling further behind? Okay. And one of the things that we see, and Ed talked about it, is that the root cause is insufficient planning. Now, everybody on the panel and everybody before talked about not wanting to rely on such a large administrative process.
[00:03:35] H.J. James Choo, PhD: So what are the things that we need to preplan, or do sufficient planning of? That, we believe, is the design of the production system that everyone has already talked about. So what we’re gonna talk about is the optimal buffering strategy. And one other thing that we keep seeing in the industry, as we talk to more and more organizations, is something called OEE.
[00:04:01] H.J. James Choo, PhD: There seems to be a reliance on a metric called OEE; in particular, the audience involved in manufacturing or fabrication may have seen this metric. So what we’re gonna do is dive a little bit into what you have to be careful about when you start focusing on this metric.
[00:04:23] H.J. James Choo, PhD: Okay? So this is something that everyone has seen in the past, if you’ve been along the journey: if there’s variability, it will be buffered. There’s no way around it. It will be buffered by some combination of capacity, inventory, and time, and by inventory we mean the stock of completed things.
[00:04:44] H.J. James Choo, PhD: Okay? So if you try to squeeze one, let’s say you don’t have enough capacity buffer and you don’t have enough inventory buffer, all that’s going to do is automatically increase the time buffer, which means the project will get delayed. So one way to think about this is: how do we reduce the variability,
[00:05:07] H.J. James Choo, PhD: the bad sources of variability, while being able to absorb the beneficial variability, which comes from the business value? Okay? By doing so, you’re gonna be able to reduce the buffers. However, as you may also have heard, we say a production system is considered lean if it uses minimal buffering cost.
[00:05:32] H.J. James Choo, PhD: And what we’re going to do is dive a little bit further into that, with Mark’s help, looking at how we figure out the minimal buffering cost. Okay? So that’s one topic. The second topic is overall equipment effectiveness, OEE. But again, what we’re hearing when we visit these projects and locations is, “Our management is enamored with OEE, but improving it seems to make the situation worse.
[00:06:00] H.J. James Choo, PhD: What are we missing?” Okay. To answer those two questions, we invite Mark. Mark, over to you.
[00:06:08] Mark L. Spearman, PhD: Thank you. Do I have control of this now? All right, so we’re gonna start with a real simple production system, about as simple as you can have: you have demand, and you have an operation. And anytime there’s demand and an operation, there’s gonna be a stock in between, because an operation doesn’t make things exactly when the demand occurs. Even in a service
[00:06:36] Mark L. Spearman, PhD: system there’s a stock; it’s always backordered, but you can still think about it as a stock. And then the operation, of course, can’t always do things exactly when it wants to, so there’s a queue. So the queue plus the operation plus the stock equals a base stock level. And the simplest production inventory control system is a base stock system, which says whenever your base stock falls
[00:07:02] Mark L. Spearman, PhD: below the target, even by one unit, you start another unit. So here we’re going to talk about one that has a base stock level of 20, which just means the queue, plus what’s in operation, and there’s only one machine, plus the stock is always equal to 20. Okay, the next slide shows what happens when the capacity is 12
[00:07:24] Mark L. Spearman, PhD: and what happens when the capacity is 10 and a half. What we see here is that the 12 is almost never late, and the 10 and a half is late basically more than half the time. These are simulation graphs: we let it run for 500 days and then plotted another hundred days. And both of these two different systems have variability,
[00:07:50] Mark L. Spearman, PhD: with squared coefficients of variation of both the production and the demand equal to one, so it’s moderate variability. So that’s what we see. Now let’s see what happens when we look at the average inventory and the average back orders versus this base stock level. Right now we’re at 20.
[00:08:16] Mark L. Spearman, PhD: So at a capacity of 10 and a half and a base stock of 20, both your average back orders and your average inventory are around, it looks like, about seven. But when you add capacity, it reduces both of them. So this is an obvious example of where you increase the capacity buffer and you decrease both the time buffer and the inventory buffer,
[00:08:42] Mark L. Spearman, PhD: ’cause you can think of back orders as a time buffer. Okay, so what happens when we reduce variability instead? Does inventory go up? Does it go down? Does it stay the same? What happens to back orders? In the next graph the demand is 10 for both cases. The capacity is 12 on the left with the moderate variability that we saw before;
[00:09:09] Mark L. Spearman, PhD: the capacity is 10 and a half on the right, but now the variability’s been reduced to 0.262. You’ll see why we picked that number in a minute. These two graphs look a little different because there’s stochastic variation, but they’re very similar, and as you see in the next graph where we look at the averages, the two average graphs are essentially the same.
[00:09:34] Mark L. Spearman, PhD: So one has extra capacity, one has lower variability, and that tells us something very important: extra capacity equals reduced variability. And if you’ll hit it one more time: that’s the big takeaway. You don’t have to reduce variability, but if you don’t, then you’re either gonna have everything late, lots of inventory, or you’ll need a lot of extra capacity to keep up with everything.
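As a rough sketch of the kind of experiment behind those graphs, and not the actual model used for the slides, the following simulates a single-machine base-stock system day by day: a base stock of 20, a mean demand of 10 per day, and gamma-distributed daily demand and capacity so the squared coefficient of variation (SCV) can be set. Comparing capacity 12 at SCV 1 against capacity 10.5 at SCV 0.262 shows the trade Mark describes; the gamma assumption, the daily buckets, and the run length are all ours.

```python
import random

def simulate_base_stock(base_stock=20, mean_demand=10.0, capacity=12.0,
                        scv=1.0, days=600, warmup=500, seed=1):
    """Daily-bucket simulation of a single-machine make-to-stock system run
    under a base-stock policy: every unit of demand immediately releases a
    replenishment order, so stock + WIP - backorders stays at base_stock.
    Gamma draws give the requested squared coefficient of variation (SCV)
    for both daily demand and daily production capacity."""
    rng = random.Random(seed)
    shape = 1.0 / scv                       # a gamma with shape k has SCV = 1/k

    def draw(mean):
        return rng.gammavariate(shape, mean / shape)

    wip = 0.0                               # replenishment orders not yet produced
    net = float(base_stock)                 # on-hand stock minus backorders
    inv_sum = bo_sum = 0.0
    for day in range(days):
        demand = draw(mean_demand)
        wip += demand                       # base-stock policy: reorder what was consumed
        produced = min(wip, draw(capacity)) # can't build more than has been ordered
        wip -= produced
        net += produced - demand
        if day >= warmup:
            inv_sum += max(net, 0.0)        # on-hand inventory
            bo_sum += max(-net, 0.0)        # backorders
    n = days - warmup
    return inv_sum / n, bo_sum / n

for label, cap, scv in [("capacity 12.0, SCV 1.000", 12.0, 1.0),
                        ("capacity 10.5, SCV 1.000", 10.5, 1.0),
                        ("capacity 10.5, SCV 0.262", 10.5, 0.262)]:
    inv, bo = simulate_base_stock(capacity=cap, scv=scv)
    print(f"{label}: avg inventory ~{inv:5.1f}, avg backorders ~{bo:5.1f}")
```

With a longer run than the hundred plotted days the averages smooth out, but the ranking is the same: adding capacity or removing variability both pull inventory and backorders down together.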
[00:10:06] Mark L. Spearman, PhD: We used to talk about three buffers: the time buffer, the inventory buffer, and the capacity buffer. But we’ve realized now that the time and inventory buffers are just mirror images of each other. You get lots of inventory if you set your inventory target high, and that reduces the time buffer;
[00:10:27] Mark L. Spearman, PhD: it reduces back orders. Or if you have no inventory, everything’s gonna be time. But they’re both regulated by the standard deviation of lead time demand. What that is: if you think of lead time demand as a random variable, there’s a random time to replenish the stock, and there’s random demand during that time.
[00:10:56] Mark L. Spearman, PhD: So you have two sources of randomness, and when you compute the standard deviation of that, the time-inventory buffer is always governed by that particular number. And it turns out that if you know the cost of carrying inventory and the cost of being late, you can show that total cost is a linear function of the standard deviation of lead time demand, and that’s a new result.
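For reference, the standard way to write that quantity, treating the replenishment time L and the per-period demand D as independent random variables, is the textbook random-sum result below; the notation is ours, not from the slides.

```latex
% Lead time demand = total demand over a random replenishment time L
\sigma_{LTD} \;=\; \sqrt{\,\mathbb{E}[L]\,\sigma_D^{2} \;+\; \mathbb{E}[D]^{2}\,\sigma_L^{2}\,}
```

Per the talk, the optimal time-inventory buffering cost then scales linearly with this number.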
[00:11:21] Mark L. Spearman, PhD: The capacity buffer is just how much installed capacity you have versus what the average demand is. And I was listening to the earlier talks, and especially the discussion just before now, and I was really pleased everybody’s talking about how you’ve got to understand how you’re gonna produce the project before you start trying to schedule it.
[00:11:52] Mark L. Spearman, PhD: And what I like to tell people is: think rates, not dates. If you’re thinking in terms of rates, now you’re talking about capacity. Capacity is a rate. Demand is a rate. And if you make sure you have enough capacity to meet that demand without having too much inventory, you’re gonna be able to deliver.
[00:12:14] Mark L. Spearman, PhD: But when you start looking at milestones and things like this, you miss all that, so I was pleased to hear the discussion that indicated that. So how do these interact? We saw that you need extra capacity if you have more variability to get the same time and inventory. We can look at it like that, or we can look at a
[00:12:46] Mark L. Spearman, PhD: basic equation: the capacity buffer times the time-inventory buffer. The capacity buffer, remember, is your installed capacity minus your demand. The time-inventory buffer is the standard deviation of lead time demand. Multiply those two together and you get something very close to the average variability, and it gets closer and closer;
[00:13:09] Mark L. Spearman, PhD: this becomes more and more an equation as you get higher and higher utilization. So low utilization’s not a problem, right? Because if you have low utilization, that means you have a big capacity buffer, so you don’t have to worry too much about the time buffer. But you don’t wanna run with extremely low utilization because you run outta money:
[00:13:30] Mark L. Spearman, PhD: you’re paying for a lot of capacity you don’t use. But as you get tighter on this, this relationship becomes very close, and it’s not intuitive. A lot of people have never seen this, but it does have some big implications, as we see in the next slide. So here’s just an example. We have the demand and the capacity, and we compute the standard deviation of
[00:13:56] Mark L. Spearman, PhD: lead time demand as 5.4. The extra capacity buffer is two because the capacity is 12 and the demand is 10; two times 5.4 is about 11, and the average variability is roughly 11. On the other side, the standard deviation of lead time demand is also 5.4, but the capacity buffer is only 0.5, and the corresponding average variability is 0.27, which is very close to that 0.262 from before.
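In symbols, with B_C for the capacity buffer (installed capacity minus demand rate), B_TI for the time-inventory buffer (the standard deviation of lead time demand), and V for the average variability, the relationship Mark describes, which tightens as utilization rises, is roughly the following; the symbols are ours, and the numeric check uses the first example from the talk.

```latex
B_C \, B_{TI} \;\approx\; V
\qquad\text{e.g.}\quad (12 - 10)\times 5.4 \;=\; 10.8 \;\approx\; 11
```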
[00:14:31] Mark L. Spearman, PhD: So that shows how this goes together. The next slide is basically a graphical way to think about it. One of the earlier slides showed that the time-inventory buffer and the capacity buffer are gonna be some area. But if you take the time-inventory buffer and multiply it by its buffer unit cost, and the capacity buffer and multiply it by its buffer unit cost, then the total cost to run this is gonna be the perimeter of this rectangle divided by two.
[00:15:10] Mark L. Spearman, PhD: Now, you wanna minimize the total cost, and you realize that the area’s gonna stay the same because the variability’s not going away. You’re just trying to decide how much capacity you should have given a certain amount of variability. And the shape that minimizes the perimeter subject to a constant area is always gonna be a square,
[00:15:38] Mark L. Spearman, PhD: at least for a rectangle. If it could be any shape, it would be a circle, but for a rectangle it’s gonna be a square, and that’s gonna minimize the total buffering cost. And when you have a square, that means the time-inventory cost equals the extra capacity cost.
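That geometric argument can be written as a small constrained minimization. With c_TI and c_C the unit costs of the two buffers and their product pinned at the variability V, total buffering cost is lowest when the two cost terms are equal; the symbols are ours, chosen to match the sketch above.

```latex
\min_{B_{TI},\,B_C}\; c_{TI}\,B_{TI} + c_C\,B_C
\quad \text{s.t.} \quad B_{TI}\,B_C = V
\;\;\Longrightarrow\;\;
c_{TI}\,B_{TI}^{*} \;=\; c_C\,B_C^{*} \;=\; \sqrt{c_{TI}\,c_C\,V}
```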
[00:15:58] Mark L. Spearman, PhD: Now this tells us something about management: how close do we wanna run to 100% utilization, which is one of the goals with OEE? So let’s look at the next slide. Here we have 500 items in inventory and we work 14 hours. We have 96.4% utilization, and we can optimize the inventory carrying costs and then optimize the back order costs.
[00:16:31] Mark L. Spearman, PhD: So we get a total time-inventory cost of $315,000 per year, and the extra capacity cost is fairly low, only $9,000 a year, so the total cost is $324,000. But in strategy two, we have more hours of work. So we’re paying, and this is what really gets people, you’re paying people not to work. But you’re not really: you’re paying people to be available to work when you have variation, and that’s difficult for some people to get their heads around.
[00:17:02] Mark L. Spearman, PhD: But now the utilization’s gone from 96 to basically 82 percent, and the total cost has dropped from $324,000 down to $113,000. Notice that the time-inventory cost and the capacity cost are nearly equal. So that’s something to pay attention to when you’re looking at the delivery system of the project.
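To make the shape of that trade-off concrete, here is an illustrative calculation, not the model behind the slide’s dollar figures, that prices the time-inventory buffer off a simple M/M/1-style queue (average queue of roughly u/(1-u)) and the capacity buffer linearly. The demand rate, both unit costs, and the queueing approximation are all assumptions chosen only for illustration.

```python
def buffering_cost(capacity, demand=10.0,
                   inventory_cost_per_unit_year=2_000.0,
                   capacity_cost_per_unit_year=5_000.0):
    """Illustrative total buffering cost for one resource.

    The time-inventory buffer is proxied by the average queue of an
    M/M/1 station, u / (1 - u); the capacity buffer is whatever installed
    capacity exceeds demand.  Both are priced linearly per unit per year."""
    u = demand / capacity
    if u >= 1.0:
        return float("inf")                 # no capacity buffer: the queue blows up
    time_inventory_cost = (u / (1.0 - u)) * inventory_cost_per_unit_year
    capacity_buffer_cost = (capacity - demand) * capacity_cost_per_unit_year
    return time_inventory_cost + capacity_buffer_cost

for cap in (10.4, 10.8, 11.5, 12.0, 13.0, 14.0):
    print(f"capacity {cap:4.1f}  utilization {10.0 / cap:6.1%}  "
          f"total buffering cost ${buffering_cost(cap):,.0f}/yr")
```

With these made-up numbers the minimum lands near 83% utilization, where the inventory side and the capacity side of the cost come out roughly equal, the same “square” condition as above, while pushing utilization back toward 96% more than doubles the total.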
[00:17:25] Mark L. Spearman, PhD: Okay, next, let’s look at what OEE suggests. OEE is basically availability times performance times quality. Availability is basically the time the system’s available, that’s why they call it availability, but it is basically the mean time to failure divided by the mean time to failure plus the mean time to repair.
[00:17:53] Mark L. Spearman, PhD: So it’s looking at the unscheduled downtime, but it also looks at things like loading time and the time you’re scheduled down. Performance is running the resource at the speed it was designed to run, so if you wanna get high performance, you wanna run at a high utilization.
[00:18:17] Mark L. Spearman, PhD: And then quality is the ratio of good products you produce divided by the total, so that’s basically the yield. So we have availability, utilization, and yield. And if we look at that, increasing availability and yield is always gonna make things better, ’cause you don’t wanna have machines that are down for maintenance and you don’t wanna be producing scrap.
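Written out, the definition Mark is walking through looks like the following; the availability term is expressed the way he describes it, in terms of unscheduled downtime only.

```latex
\mathrm{OEE}
\;=\;
\underbrace{\frac{\mathrm{MTTF}}{\mathrm{MTTF}+\mathrm{MTTR}}}_{\text{availability}}
\;\times\;
\underbrace{\frac{\text{actual run rate}}{\text{design run rate}}}_{\text{performance}}
\;\times\;
\underbrace{\frac{\text{good units}}{\text{total units}}}_{\text{quality (yield)}}
```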
[00:18:44] Mark L. Spearman, PhD: But increasing utilization is really ridiculous. If you’re running faster than demand, you’re only increasing inventory. And we’ve looked at projects where, in the delivery process, the rate at which stuff can be delivered to the site is greater than the rate at which it can be installed,
[00:19:09] Mark L. Spearman, PhD: and all it does is create a huge amount of inventory on the site, which then has to be moved around, gets broken, becomes obsolete, all these terrible things. So you don’t necessarily want a high utilization. If OEE were just availability times yield, and then a check of whether you’re meeting demand,
[00:19:36] Mark L. Spearman, PhD: then that would be fine. But when they add this utilization goal, it’s going to be counterproductive. There is an optimal utilization that minimizes all costs, and it’s not a hundred percent. A better way to look at it is to look at utilization itself, in the next slide. So here’s two machines.
[00:19:58] Mark L. Spearman, PhD: We’re gonna look at the OEE, so the actual run rate, the theoretical run rate, the availability, the yield, the setup time, all that. The OEE for machine one is 73%; the OEE for machine two is 47%. So if you had these two OEEs, which machine would you work on?
[00:20:23] Mark L. Spearman, PhD: Anybody? Machine two, right? ’Cause it’s got a low OEE and we want to have a high OEE. Now let’s look at utilization. Machine one, you’re about outta capacity: you’re at basically 99% utilization. Machine two, you’re at 86% utilization. So if you’re gonna worry about a machine, you should worry about machine one,
[00:20:47] Mark L. Spearman, PhD: because if you’re at 99% utilization, you’re gonna have big queues, long lead times, lots of WIP, and things are probably gonna be late. So you’ve got all kinds of problems, and OEE is telling you to do just the opposite. So I hope that is a good takeaway. And if you want more of this, we’ve got a prize for you.
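The underlying rates on the slide aren’t in the transcript, so the inputs below are hypothetical numbers chosen only to reproduce the OEE and utilization figures Mark quotes; the point is simply that the two metrics can rank the machines in opposite order.

```python
from dataclasses import dataclass

@dataclass
class Machine:
    name: str
    design_rate: float      # units/hour the machine was designed to run
    actual_rate: float      # units/hour it actually runs
    availability: float     # uptime fraction, MTTF / (MTTF + MTTR)
    yield_: float           # fraction of output that is good
    demand: float           # good units/hour the system actually needs

    @property
    def oee(self):
        performance = self.actual_rate / self.design_rate
        return self.availability * performance * self.yield_

    @property
    def utilization(self):
        effective_capacity = self.actual_rate * self.availability * self.yield_
        return self.demand / effective_capacity

machines = [
    Machine("machine one", design_rate=10, actual_rate=8,
            availability=0.95, yield_=0.96, demand=7.2),
    Machine("machine two", design_rate=20, actual_rate=12,
            availability=0.85, yield_=0.92, demand=8.1),
]
for m in machines:
    print(f"{m.name}: OEE {m.oee:.0%}, utilization {m.utilization:.0%}")
```

Machine one looks healthier on OEE but is effectively out of capacity, which is exactly the situation Mark warns about.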
[00:21:11] Mark L. Spearman, PhD: We’ve got Operations Science Applied, a new book coming out by me and my two co-authors from Factory Physics for Managers. The book has an introduction to what operations science is, and then there are some technical chapters which go through how to optimize the capacity, how to optimize the inventory, how to optimize
[00:21:38] Mark L. Spearman, PhD: various things, very technical, lots of math. But if you don’t like the math, you can just skip it and go to the stories. We’ve got stories from notable authors such as Todd Zabelle and James Choo talking about the project that we did for Heathrow Terminal 5, and we’ve got a story about how Delta Air Lines came back from COVID:
[00:22:04] Mark L. Spearman, PhD: they completely shut down the fleet, and how do you bring it back up reasonably. And we’ve got some great stories from Intel, a bunch of different companies, and they’re represented by these different people on the graphic, except for Einstein. He didn’t have a company, but he is our great motivation.
[00:22:25] Mark L. Spearman, PhD: So thank you very much. I’m gonna turn it back over to James. Thank you, Mark.
[00:22:31] H.J. James Choo, PhD: I hope that provided some more clarity in terms of the trade-off between the three buffers that we talked about. Now, one of the things we have to bring back is: what does this mean for data centers, right?
[00:22:47] H.J. James Choo, PhD: As we look at data centers, and you’ve seen this many times, on the top you can look at the civil foundation system, then the structural system and the mechanical system going through that, being engineered, fabricated, delivered, and then assembled on site. What type of buffer you have at which location
[00:23:17] H.J. James Choo, PhD: is going to be crucial in determining whether you are going to be able to deliver your project as fast as possible. Okay? Now, one of the things that we do know, and it’s been said by many speakers so far, is that having too much inventory buffer is going to be detrimental, not only to your cycle time
[00:23:43] H.J. James Choo, PhD: but also in terms of obsolescence, as we talked about before. For those involved in data centers, the cooling technology is constantly evolving, and so are the chips that are going to go into the data centers. We were at a data cloud conference where someone stood up and said, what are we going to do with all the Nvidia chips that we bought already
[00:24:08] H.J. James Choo, PhD: that are getting outdated? So the interesting dichotomy we have is that we can’t just rely on inventory buffers, which we used to do to make sure we get the highest productivity out of each of the operations, rather than focusing on how we get things through the system as quickly as possible.
[00:24:28] H.J. James Choo, PhD: The other element that we really want to emphasize is what we mean whenever we use the word variability, and Mark gave a very good example of how to think about it. A lot of industry practitioners think about uncertainty, but a lot of the variability that we design in comes from the decisions we make about the product, about the process, and about how big a batch we use to pass the work from one operation to the next.
[00:25:01] H.J. James Choo, PhD: Again, don’t think about just uncertainty; think about the variability that we have actually designed into the project, which is going to make the project take longer. Okay? So number one, due to technological advancements, the design of data centers keeps evolving. Therefore, using inventory buffering in pursuit of higher productivity cannot be the answer.
[00:25:22] H.J. James Choo, PhD: Having an appropriate amount of capacity buffer will enable variability to be absorbed without extending the schedule, which is the time buffer. However, creating a capacity buffer requires sufficient upfront planning, which again Ed talked about, of how the work will be done, which Todd talked about, to understand what capacity is needed and what capacity is available.
[00:25:46] H.J. James Choo, PhD: If you understand what capacity is needed and what capacity is available, then the things that happen are not unforeseeable; they are predictable. Okay, so to learn more about how these concepts can be applied to your project, please contact info@projectproduction.org, and we’d be happy to understand your situation and see how this might fit into your project environment.
[00:26:15] H.J. James Choo, PhD: Thank you.
[00:26:19] Gary Fischer, PE: Alright, very good. Good research and a practical application of what it means for the types of challenges people have on their projects today. Not seeing any questions roll in. Maybe our audience all took lunch, I don’t know. Wake up, guys. Any questions before we move on to our next guest?
[00:26:42] Gary Fischer, PE: Give it a minute. There seems to be some latency in the system between when you ask and when I get ’em.
[00:26:54] Gary Fischer, PE: James, I wanna throw a question at you that came up earlier about the ethics of suppliers, and how you might deal with getting cut out of line when you were in the queue to get something you needed for your data center, and all of a sudden you find out somebody trumped you with a bigger wad of cash or something.
[00:27:16] Gary Fischer, PE: What advice would you give on that?
[00:27:20] H.J. James Choo, PhD: I think there are two sides to look at. One, and I think Roberto already alluded to this, is to buy capacity rather than a product. That’s one way to protect your lead time and the cycle time associated with the delivery of the product.
[00:27:40] H.J. James Choo, PhD: The other one, which is also in the Factory Physics book, is, if you are going to look at this as a production system, having some kind of virtual queue with a control system that allows you to control the actual cycle time so that it becomes predictable.
[00:28:06] H.J. James Choo, PhD: Also, reserve some capacity so that you can slide in any of the priority customers that the fabricators would like to slide in. So that is from a supplier’s perspective: working with that kind of supplier to see whether they are able to, or interested in, having that kind of system implemented, so that they can prioritize their preferred customers
[00:28:34] H.J. James Choo, PhD: without increasing the lead times for other customers. Yeah. Mark, do you wanna add anything there?
[00:28:48] Mark L. Spearman, PhD: No, I think that’s a pretty good answer.
[00:28:52] Gary Fischer, PE: Okay, thanks. Robert, who asked the question, said this is a big deal between healthcare and data centers: who’s more important to get their items first? I’m sure it’s alive and well. Yeah.
[00:29:03] Mark L. Spearman, PhD: One thing I will say is that this gets to dispatching, what people do in manufacturing: who goes first, and what are you trying to keep straight?
[00:29:17] Mark L. Spearman, PhD: And what you find is, if you don’t do basically either first-in-first-out or first-in-system-first-out, then you’re basically going to increase variability, because what you’re doing is reorganizing the queue. And as we know from Little’s Law, the cycle time is always going to be the amount of work in process, which we hope you’re controlling, divided by the
[00:29:46] Mark L. Spearman, PhD: demand. And that doesn’t change when you do the reprioritization. What does change is not the average but the variance. And so if you’re trying to guarantee completion, say a 90% guarantee, then by increasing the variance you’re going to decrease the probability of finishing on time.
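The Little’s Law relationship Mark is invoking, with CT for cycle time, WIP for work in process, and TH for throughput (here, the demand rate), is:

```latex
\mathrm{CT} \;=\; \frac{\mathrm{WIP}}{\mathrm{TH}}
```

Reprioritizing a queue leaves this average unchanged, but it widens the spread of individual cycle times, so a completion date quoted at, say, the 90th percentile has to move out.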
[00:30:12] Mark L. Spearman, PhD: So what we found is that resorting queues, reprioritizing, is a bad thing to do. Once something gets started, get it through, and control the queue of stuff that’s not yet started, the virtual queue. But don’t try to reprioritize queues that are already in play,
[00:30:36] Gary Fischer, PE: the physical queue. Okay.
[00:30:38] Gary Fischer, PE: Alright, very good, gentlemen. Thank you for sharing your thoughts.
PPI works to increase the value Engineering and Construction provides to the economy and society. PPI researches and disseminates knowledge related to the application of Project Production Management (PPM) and technology for the optimization of complex and critical energy, industrial and civil infrastructure projects.
The Project Production Institute (PPI) exists to enhance the value Engineering and Construction provides to the economy and society. We are working to:
1) Make PPM the dominant paradigm for the delivery of capital projects,
2) Have project professionals use PPM principles, methods and tools in their everyday work,
3) Create a thriving market for PPM services and tools,
4) Fund and advance global PPM research, development and education (higher and trade), and
5) Ensure PPM is acknowledged, required and specified as a standard by government and regulatory agencies.
To that end, the Institute partners with leading universities to conduct research and educate students and professionals, produces an annual Journal to disseminate knowledge, and hosts events and webinars around the world to discuss pertinent and timely topics related to PPM. In order to advance PPM through access and insight, the Institute’s Industry Council consists of experts and leaders from companies such as Chevron, Google, Microsoft and Merck.
Join us in eliminating chronic poor project delivery performance. Become a member today.