Posts Tagged ‘Scenario management’

Coping with catastrophe. Can your supply chain recover?

Published February 25th, 2011 by John Westerveld 6 Comments

I spotted this article from the Wall Street Journal a few weeks ago and a couple of points caught my eye. First, 2010 was a record-breaking year for natural and man-made disasters. We had a virtual smorgasbord of catastrophes, from flooding to fires to snowstorms. We had volcanoes grounding air travel and earthquakes devastating an entire country. And of course, a major oil spill, the impact of which we will be feeling for years to come. According to the article, the economic losses from these disasters reached $222 billion US (three times that of 2009), a figure outdone only by the human cost: 260,000 lives.

The other interesting point is that today’s interconnected world makes us far more aware of, and susceptible to, the impact of these natural disasters. Instantaneous news through Twitter, Facebook, YouTube and blogs puts a human face on these events in a way that a 15-minute segment on the evening news simply couldn’t. The economic impact of a catastrophe on today’s global business is also far greater. Almost every large company has a distributed manufacturing base, and the suppliers that feed that manufacturing base are themselves spread around the world. This means that a disaster in some faraway place can have a devastating impact on a company’s ability to ship its product.

The article outlines a few suggestions on how to protect yourself from disasters at home and around the world:

1)      Create a business continuity plan – A business continuity plan identifies the activities and people that are critical to running the business, and ensures that these are protected. As Wikipedia puts it, a business continuity plan works out how to stay in business in the face of a disaster.

2)      Supply Chain Risk Management – Specific to the supply chain (and typically part of the Business Continuity Plan), you need to assess your supply chain to determine which suppliers are critical and, for those suppliers, identify workarounds should they become unavailable. Of course, this applies to both external and internal suppliers. A plan, while critical, is not enough. My position has always been that responding to unplanned events is a key component of supply chain risk management. You can assess all aspects of your supply chain, create mitigation strategies for every event you can think of, and still be surprised by some unexpected and unplanned situation. Your ability to respond will mean the difference between a financial catastrophe and a minor blip (for more reading on this, download my whitepaper here).

3)      Insurance / Catastrophe Bonds – The Wall Street Journal article noted that companies can take out disaster insurance or catastrophe bonds. Of course, insurance should never be your only disaster recovery plan. This white paper aptly describes disaster insurance as “the disaster recovery plan of last resort.” Like home insurance, you take many steps to ensure that a disaster doesn’t impact your home (removing dangerous trees, installing smoke and fire detectors, buying fire extinguishers), but if something DOES happen, you are glad to have the insurance to fall back on.

While I hope 2011 will be less “exciting” from a disaster perspective, I wouldn’t count on it. Already we’ve had a few winter storms across the US that people have classed as “snowmageddon” events and that basically stopped all movement across the central and eastern US for several days. Further, the trend toward globalization will continue, and with it our sensitivity to regional disasters will continue to increase. So with that in mind, do you have a disaster recovery plan? Does it include supply chain risk? Comment back and keep the discussion going!


Posted in Best practices, Response Management, Supply chain risk management


The supply chain disruptions you’ll never plan for

Published December 22nd, 2010 by Carol McIntosh 1 Comment

The eruption of Iceland’s Eyjafjallajokull volcano last spring was fascinating for a number of reasons. For example, photos of lightning inside the plume of volcanic ash, such as these seen at National Geographic’s website, are mesmerizing.

More importantly, the ash cloud itself presented significant business ramifications for companies around the world. I believe we will study the volcano’s eruption—and, consequently, the disruptions to supply chains around the world—for years to come because the impact was both so widespread and pronounced.

A recent BusinessWeek article described how automotive manufacturer Nissan Motor was forced to shut down three auto assembly lines in Japan because the factories ran out of tire-pressure sensors when a plane carrying a shipment from a supplier in Ireland was grounded.

I’ll wager that you expect disruptions from hurricanes and possibly tropical storms, and maybe even a blizzard across the Great Plains. Since those events are likely, it’s smart to create contingency plans that account for alternate transportation routes or even modes. Furthermore, you may even have contracts in place with suppliers for alternate parts, and perhaps even contracts with alternate suppliers for necessary parts or components. But sometimes an unexpected event—like a volcano eruption—disrupts the supply chain. The question then becomes: How will your company respond to this unanticipated event?

Obviously, the first challenge is to realize that an event of some type has occurred or is about to occur. But even more significant is the response. How quickly your company responds, and just what that response is, may have a substantial impact on the company’s performance and, possibly, its bottom line.

That’s why it’s critical to have tools and processes in place to respond quickly to unanticipated events that aren’t covered by a mitigation strategy.

  • These tools must deliver visibility across the supply chain and provide alerts when an event is imminent. They must also include analytics so users can understand the importance of the event and the impact it will have.
  • Second, the tools must allow users to collaboratively simulate possible solutions, such as splitting orders, expediting orders and finding alternate sources.
  • The next capability may well be the most important. Once simulations are created, they must be compared and contrasted to determine which one best meets corporate goals and objectives. Using a multi-scenario scorecard allows users to compare the possible solutions and measure the impact of each potential resolution on key corporate metrics.

Consider these two possible solutions to a critical parts shortage….

The first solution is to use existing inventory and split apart orders. Customers will not receive their full order, but they will at least receive part of it. A second possible solution is to expedite shipment of parts from an alternate supplier to your facilities, fill the orders, and then expedite shipment to your customers.

The first possible solution results in a decrease in on-time delivery and, potentially, a decrease in revenue for the quarter. The second, however, will result in an increase in cost of goods sold and a corresponding decrease in margin. How do you know which route to take? These results, compared against a given target and appropriately weighted, provide an overall score for each solution. An analyst can then use those scores to determine which scenario best suits corporate objectives. How you make those quick decisions, and how well they align with corporate objectives, can make or break a company’s bottom line.
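To make that concrete, here is a minimal sketch of how a weighted, multi-scenario scorecard calculation might work. The metrics, weights and targets below are purely hypothetical, and a real scorecard would cover far more metrics, but the idea is the same: score each scenario by how close it comes to each target, weighted by how much the business cares about that metric.

```python
# Minimal sketch of a weighted multi-scenario scorecard (illustrative values only).
# Each scenario is scored by how close each metric comes to its target, weighted
# by the importance of that metric to the business.

scenarios = {
    "Split orders from inventory": {"on_time_delivery": 0.82, "revenue_m": 9.1, "margin_pct": 34.0},
    "Expedite from alternate supplier": {"on_time_delivery": 0.97, "revenue_m": 10.0, "margin_pct": 29.5},
}

targets = {"on_time_delivery": 0.95, "revenue_m": 10.0, "margin_pct": 35.0}
weights = {"on_time_delivery": 0.4, "revenue_m": 0.35, "margin_pct": 0.25}

def score(metrics):
    """Weighted attainment against target, capped at 100% per metric."""
    return sum(weights[m] * min(metrics[m] / targets[m], 1.0) for m in targets)

for name, metrics in scenarios.items():
    print(f"{name}: score = {score(metrics):.3f}")
```

In practice the weighting and target-setting would come out of the business itself; the sketch only shows why a weighted comparison makes the trade-off between the two responses explicit.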

Let me know what you think; either about responding to unanticipated supply chain events or photos of lightning in clouds of volcanic ash.

Posted in Response Management, Supply chain risk management


S&OP Scenario Planning: Go Deep or Go Home

Published September 17th, 2010 by Trevor Miles @milesahead 0 Comments

I, along with Bob Ferrari and Lora Cecere and about 300 other people, attended the IE Group conference on S&OP and FP&A in Boston last week. Bob has commented on the conference here, to which Lora replied. I’ll come back to their comments later.

What struck me was the debate about the level of detail that is required in the model used to generate a plan during an S&OP cycle. Bob Stahl was adamant that all that is required for executive S&OP is the so-called volume level, with very aggregated numbers for demand, items, constraints and constraint consumption. That was followed by a presentation by Kris Lutz, Director of Sales and Operations Planning for Staples. Kris stated that Staples needs to know the mix because it has a big effect on their space requirements and cost. He contrasted the space requirements – in-store and warehouse – for filing cabinets and memory sticks. After all, they are both storage, right? So you can make the case that they belong in the same product group. But if Staples only planned ‘Storage Products’ at the volume level, it would give them an insufficiently accurate estimate of shelf and warehousing requirements – too little for filing cabinets and too much for memory sticks. The counter argument could be that in this case they shouldn’t be in the same product family. Well, how is that different from saying that volume and mix planning is required to get a sufficiently accurate understanding of constraint utilization? They have to run several scenarios at the mix level to determine the boundaries of their shelf and warehouse storage requirements.

I like the ‘Storage Products’ example because it illustrates the complexity of a number of other decisions made during an S&OP cycle that require diving into the mix level. The most obvious are new product introduction and product transitioning. Some may argue that at the product volume level these product portfolio issues are irrelevant. Try telling that to Apple. Not that I have a deep understanding of Apple’s S&OP, but from what I have read and heard, their mid-term plans are very granular. They most certainly don’t plan at the ‘Mobility’ and ‘Computing’ product levels, if they even have such a categorization. Perhaps somewhere Steve Jobs has a 5-year plan at this level, but I bet his plan for the next 4 quarters is a lot more granular than that.

The reality is that aggregate planning just doesn’t cut it when there is a constraint. The manner in which the risk associated with the constraint can be mitigated depends very much on the mix. Currently, we are hearing from customers and prospects about electronic component shortages. I agree that probably more than 75% of components are common across a product family, and that many of these are likely to be in short supply. But if there is a big range in margin across the product family, then without analyzing revenue, margin, and supply at a level deeper than the product family, the results you arrive at will contain too much error on which to base a decision. It will certainly make the comparison between scenarios very difficult, maybe even meaningless. Since what-if analysis or scenario planning is so central to S&OP, you must be able to generate plans and results at a level of detail that allows for meaningful comparison.
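To illustrate why family-level numbers can mislead, here is a small, hypothetical sketch (the items, volumes and margins are invented). At the family level, 1,000 units of demand against roughly 1,000 shared components looks feasible, but the margin you actually achieve depends entirely on how the scarce component is allocated across the mix:

```python
# Hypothetical illustration: at the family (volume) level, 1,000 units of demand against
# 1,000 available shared components looks unconstrained. At the mix level, the scarce
# component has to be allocated across items, and margin depends on that allocation.

items = [  # (item, demand_units, margin_per_unit, shared_components_per_unit)
    ("high_end_model", 400, 120.0, 2),
    ("low_end_model", 600, 30.0, 1),
]
component_supply = 1000

# Volume-level view: only total units are visible, which hides the component mix.
total_demand = sum(demand for _, demand, _, _ in items)
print("Family-level demand:", total_demand, "units")

# Mix-level view: allocate the constrained component to the highest margin per component used.
remaining = component_supply
margin = 0.0
for name, demand, unit_margin, comps in sorted(items, key=lambda x: x[2] / x[3], reverse=True):
    buildable = min(demand, remaining // comps)
    margin += buildable * unit_margin
    remaining -= buildable * comps
    print(f"{name}: build {buildable} of {demand}")
print("Margin with mix-aware allocation:", margin)
```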

Some may argue that these decisions should not be made in S&OP but rather at the operational planning level. I can’t agree. We all know that as components become scarce, not only does the price go up, but the lead time extends too, hurting both margin and customer service. We are hearing of 6-month lead times on committed purchases of some electronic components, which is well within the S&OP horizon. If you are going to do any level of demand shaping to try to claw back market share and margin, you will need to get to the item level of planning at least 6-9 months out.

Even in more ‘normal’ times I have often run into the fallacy of running S&OP at the volume level only. It is only when Operations creates the mix plans that many constraints are surfaced – when, for example, it is realized that there is insufficient capacity to launch that new product because the required equipment isn’t planned to be in place until next quarter. That won’t come out in a volume plan. I’m not suggesting that the executives view the S&OP plan at the detailed level, or that the plan gets explained to the executives at the detailed level, but I cannot agree that generation of the plan can be performed at the volume level only.

Bob Ferrari comments that there are four critical challenges that need to be overcome in an S&OP process:

  1. Time sensitivity and information complexity – … the need for constant updating and data refreshes can exceed the required S&OP process cycle time, which dilutes the credibility of the process.
  2. Realities of the clock-speed of business – … Periods of maximum profitability are short, and supply chain participants change constantly.
  3. Virtual organization – … The constant two-way transfer of planning information across cross-business barriers is best accomplished through tailored IT applications supporting the S&OP process.
  4. The broader end-goal – … If gaps in revenue, manufacturing or cash objective plans get discovered at the end of the (S&OP) process, it’s too late. He further noted that speed of the process means the process must execute faster than business circumstances change.

While Bob is commenting more on the need for technology to support the S&OP process than on the level at which S&OP should be performed, his comments on the clock-speed of business and the broader end-goal emphasize that the S&OP process must be run at both the volume and the mix level, incorporating both the long term and the medium term. How is it possible in a high-tech/electronics organization to plan the very frequent new product introductions and end-of-life transitions without a granular representation of the items, one that captures component requirements and time-phased average selling price changes by item? After all, most high-tech/electronics companies bring new products to market specifically to combat price erosion of older models.

Commenting on Bob’s blog, Lora Cecere states that:

I think that a company can only successfully move forward without an S&OP solution, using the de facto solution of Excel, if the following conditions exist:

  • Regional player with no constraints and limited demand and supply variability
  • Revenue less than 250 M and organizational size less than 100 people
  • Very little new product launch activity
  • Little to no supply chain complexity
  • Single division or supply chain with little to no organizational complexity

My takeaway from Lora’s comment, while it is still focused on the issue of the need for technology in S&OP, is that the complexity and variability in today’s supply chains require that we perform scenario planning or what-if analysis at the mix, or granular, level. All the complexity she describes cannot be accommodated at the volume level. While I do not question that a volume plan is a good method of eliminating some scenarios, I contend that a mix-level S&OP plan must be created in order to surface constraints and to get a sufficiently accurate result so that comparison of alternative scenarios is meaningful.

What do you think?  Is the ‘direction setting’ level of S&OP carried out at the volume level sufficient?  Would you report future performance projections to your management and perhaps to the market based upon an analysis carried out at the volume level only?

All I can say is that our customers, some of which are on the AMR Top 25 Supply Chains list, are telling us that with scenario planning, it’s go deep or go home.

Posted in Milesahead, Sales and operations planning (S&OP)


Really unusually uncertain

Published August 24th, 2010 by Trevor Miles @milesahead 0 Comments

For me, one of the pleasures of being on vacation, as I was last week, is reading different newspapers and learning a bit about the local economy and politics. While not quite as “local” as I would have liked, I happened upon the Caribbean version of the Miami Herald and was fortunate enough to run into an op-ed piece by long-time New York Times columnist Thomas L. Friedman titled “Really unusually uncertain”. Many of you will have heard of Tom Friedman in the context of his book “The World is Flat”. I stumbled across Tom Friedman in the late 1980s – I think – and have been reading him avidly ever since. Clearly I have completely plagiarized the title of Tom’s article, which in turn refers to US Federal Reserve chairman Ben Bernanke’s use of the term “unusually uncertain” to describe the outlook for the US economy.

Of course this uncertainty is not restricted to the US economy, which is the point Tom Friedman makes by focusing on the German economy and how it relates to economic recovery in Europe.  In fact he points to three influences that will need to be reversed if the US and EU economies are to recover soon:

The first big structural problem is America’s. We’ve just ended more than a decade of debt-fueled growth during which we borrowed money from China to give ourselves a tax cut and more entitlements but did nothing to curtail spending or make long-term investments in new growth engines.

Second, America’s solvency inflection point is coinciding with a technological one. Thanks to Internet diffusion, the rise of cloud computing, social networking and the shift from laptops and desktops to hand-held iPads and iPhones, technology is destroying older, less skilled jobs that paid a decent wage at a faster pace than ever while spinning off more new skilled jobs that pay a decent wage but require more education than ever.

But the global economy needs a healthy Europe as well, and the third structural challenge we face is that the European Union, a huge market, is facing what the former U.S. ambassador to Germany, John Kornblum, calls its first “existential crisis.” For the first time, he noted, the E.U. “saw the possibility of collapse.” Germany has made clear that if the eurozone is to continue, it will be on the German work ethic not the Greek one. Will its euro-partners be able to raise their games? Uncertain.

Commenting on Bernanke’s statements, Jeannine Aversa of the Associated Press writes that

Consumers have cut spending. Businesses, uncertain about the strength of their own sales or the economic recovery, are sitting on cash, reluctant to beef up hiring and expand operations. A stalled housing market, near double-digit unemployment and an edgy Wall Street shaken by Europe’s debt crisis are other factors playing into the economic slowdown.

OK, OK, so there is lots of economic uncertainty. What do we do about it? During my time as a management consultant I learned a fundamental truth: Analyzing a situation is fairly easy, defining a future state is a lot harder, but the really hard part is defining the path to achieve that future state. Not being an economist, I can comment little on the efficacy of Tom Friedman’s suggestions for recovery, nor on Ben Bernanke’s for that matter. My guess is that most of the readers of this blog fall into this category too. Clearly we all want the same future state of a revived world economy, and we are all too aware of the current state of the economy. Of course we all have our opinions on the path to recovery, which we can express in elections, but for the most part actually pulling the levers of the economy is not something that is in our control.

Which leaves us all feeling “really unusually uncertain”.

While we may not be able to effect change in the national or global economy, we do have some level of control over the economic performance of the companies for which we work. As I commented in a previous blog post titled “Why S&OP? Why now?”, this is where I see sales and operations planning (S&OP) playing a big role. But for S&OP to be effective, it must provide ways for people to evaluate and understand uncertainty. There are four fundamental capabilities required to achieve this:

  1. Capture of assumptions made about the future state for knowledge sharing and control
  2. Facilitated collaboration across functional boundaries to get buy-in and inputs from multiple parties
  3. Super-fast “what-if” analytics that allow organizations to evaluate and compare multiple scenarios in order to maximize performance and to mitigate any identified risks
  4. Continuous plan performance management so that deviations are detected early and course corrections can be made quickly

The last point about performance management is often overlooked.  The more uncertain the future, the less likely it is that your plans will be achieved.  It doesn’t help much if at the end of the month you determine that the plan wasn’t achieved.  In a more stable economy this might have been sufficient.  In today’s volatile economy (which is the root cause of our uncertainty) it is really important to monitor performance continuously and to course correct as quickly as possible when significant deviations are detected.
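As a rough illustration of what continuous plan performance management might look like, the sketch below compares month-to-date actuals against the phased plan and flags anything drifting beyond a tolerance. The metrics, numbers and thresholds are all hypothetical; the point is simply that the deviation is detected mid-month, when there is still time to course correct.

```python
# Hypothetical sketch: compare month-to-date actuals against the phased plan and flag
# any metric drifting beyond its tolerance, so a course correction can start now
# rather than at month-end.

plan_month_to_date = {"shipments_units": 12000, "revenue_m": 4.8, "backlog_units": 1500}
actual_month_to_date = {"shipments_units": 10300, "revenue_m": 4.1, "backlog_units": 2600}
tolerance_pct = {"shipments_units": 0.05, "revenue_m": 0.05, "backlog_units": 0.20}

def deviations(plan, actual, tolerance):
    """Return the metrics whose month-to-date drift exceeds their tolerance."""
    flagged = []
    for metric, planned in plan.items():
        drift = (actual[metric] - planned) / planned
        if abs(drift) > tolerance[metric]:
            flagged.append((metric, round(drift * 100, 1)))
    return flagged

for metric, pct in deviations(plan_month_to_date, actual_month_to_date, tolerance_pct):
    print(f"ALERT: {metric} is {pct:+}% off plan month-to-date")
```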

However, what makes this all possible is super-fast “what-if” analytics. Uncertainty is risk. Without a mechanism to evaluate many alternative scenarios, your ability to understand and manage that risk is greatly reduced. Do you think Excel is up to this? Do you think this can be achieved without any technology?

Posted in Milesahead, Response Management, Sales and operations planning (S&OP)


What-if you had What-if for S&OP?

Published July 6th, 2010 by Trevor Miles @milesahead 0 Comments

I had the pleasure of participating in a webinar featuring Lora Cecere, a preeminent speaker on all matters regarding S&OP. Lora has worked at a number of CPG companies and at Manugistics (a software vendor), was the most-read analyst at AMR for a number of years, and is now at the Altimeter Group. If anyone can claim to have “been there, done that,” it is Lora. The title of the webinar was “What S&OP capabilities matter most?” (presentation available on-demand here). There is no question in Lora’s mind that the answer is fast and effective what-if capabilities.

Lora has arrived at the conclusion that what-if capabilities are key to S&OP based upon a study she is conducting.  As Lora stated,

“2009 was all about Demand.  2010 is all about supply.”

Of course, in 2009 the issue was demand coming to a screeching halt, while 2010 is all about people scrambling to get supply to satisfy what demand there is, given the tepid upturn in the economy. A number of polls conducted as part of the webinar not only confirmed the growing importance of what-if capabilities, but also confirmed demand volatility as a key driver. Over 70% of attendees indicated that there has been a big increase in the importance of what-if capabilities in S&OP. (It should be noted that the registrants and attendees covered a wide variety of industries, from Retail to Pharma, and a broad range of company sizes too.)

Perhaps more surprising is the dominance of demand volatility as a driver of the need for what-if analysis. Perhaps we would have received a different result if we had used the term “Supply volatility” or “Supply availability” rather than “Supply reliability”. Most customers and prospects are telling us that they have real problems with supply shortages of both key and commodity items. Many of our existing customers are running what-if analysis on what could be built, or which demand should best be satisfied, based upon the available supply of key components when there is insufficient supply to meet all demand.

What was not surprising is that only 12% of respondents state that they have sufficient what-if capabilities, whereas over 40% state that their company is only just starting to use what-if analysis in the S&OP cycle, and 40% state that there is a large gap between what-if requirements and capabilities. While not surprised, I must admit to having been confused by these results, because what-if analysis and compromise across functional and organizational boundaries are key tenets of S&OP. How are companies running an effective S&OP process without these capabilities?

As Lora stated during the webinar,

“It’s not enough to connect numbers. You need to know which market drivers have greatest influence and to perform what-if’s on supply/demand using these market drivers.”

Let’s accept for now that demand volatility is the primary driver of the need for what-if capabilities, but dig a bit deeper into what this means. Much of forecasting is focused on creating a statistical forecast based upon past shipments. This works fine in a fairly stable market, when demand is predictable and everyone can afford finished goods inventories to buffer against demand fluctuations. What has caught everyone by surprise over the past 2-3 years is the rate of change. Demand volatility is no longer represented by historic demand patterns. That is a “rear view mirror” perspective of the market, and S&OP cannot be about the rear view mirror. It is all about looking far into the future, when market drivers are very uncertain, let alone demand. Little is known about competitor activities. At least governments have to publish plans for recovery, even if the efficacy of those plans can always be questioned.

I can think of no better way of evaluating the effect demand uncertainty has on the supply chain than a robust what-if capability, starting with range forecasting. Instead of a single-number forecast, arrive at a best estimate but also test upside and downside scenarios to evaluate and mitigate the risks to which your company will be exposed. Which is worse, to be left with excess and obsolete components or to lose market share because demand is not being satisfied? What if we sourced from a more expensive supplier who could provide shorter lead times and more flexibility on volumes? Would this give us a lower overall inventory liability?
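Here is a hypothetical sketch of what a range forecast can tell you: size the supply plan to the base case, then test the downside (which leaves you with excess and obsolete component liability) and the upside (which leaves revenue on the table). All of the prices, costs and volumes are invented.

```python
# Hypothetical range-forecast sketch: size supply to the base case, then test the
# downside (excess/obsolete component liability) and upside (lost sales) scenarios.

unit_price = 250.0
component_cost = 90.0
supply_plan_units = 100_000          # sized to the base forecast

demand_scenarios = {"downside": 70_000, "base": 100_000, "upside": 130_000}

for name, demand in demand_scenarios.items():
    sold = min(demand, supply_plan_units)
    excess_units = max(supply_plan_units - demand, 0)
    lost_units = max(demand - supply_plan_units, 0)
    print(f"{name:>8}: revenue ${sold * unit_price / 1e6:.1f}M, "
          f"excess liability ${excess_units * component_cost / 1e6:.1f}M, "
          f"lost revenue ${lost_units * unit_price / 1e6:.1f}M")
```

Testing the more expensive but more flexible supplier is then simply a matter of rerunning the same calculation with a different supply plan and component cost, and comparing the exposure on each side.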

These are the types of decisions you should be making during an S&OP cycle.  I don’t know how they can be performed without what-if capabilities.

Again, check out the recorded webcast presentation to hear Lora’s take on the issue.

Posted in Milesahead, Sales and operations planning (S&OP)


Responding…versus planning…versus expediting

Published June 30th, 2010 by Max Jeffrey 0 Comments

This is a follow-up to my post from a few weeks ago: Expediting versus Planning. I received many comments and recommendations on the question of whether much of the expediting that occurs is in fact related to planning deficiencies. After reading and reflecting on the comments I received, it seems to me that the premise that effective planning by itself will reduce the need to expedite is not necessarily true. Obviously, effective planning is critical to reducing expediting: without a good plan, what do we execute? However, no matter how good the plan is, it will always change. Forecasts by definition are not accurate. As we all know, changes and disruptions can occur in an almost infinite number of ways throughout the supply chain. The best plan will be out of date almost immediately after it is published. (Just to be clear, when I say plan, I am referring to the MRP plan.)

Given the assumption that a plan is crucial, together with the realization that the plan will not be accurate, we are led to the conclusion that we need a plan that is stable, but that can be adjusted as and when needed. We should adjust the plan only when factors significant enough to warrant a change arise, and with enough lead time and stakeholder buy-in to execute properly. To restate, I believe that the following are important:

  1. Plan Accuracy and Stability – The MRP plan needs to be stable enough to enable effective execution but we need to be able to detect exceptions that are significant enough to warrant a change
  2. Responding to Change – The capability to effectively respond to required changes needs to be in place

How do we effectively accomplish the above?

Plan Accuracy and Stability

  • First, the plan needs to start with an effective Sales and Operations Planning (S&OP) process. The more robust the S&OP process, the better the high-level plan will be.
  • We need to be able to detect or sense the need for changes and, once detected, discern which are significant enough to warrant a change to the plan.
  • We also need to be able to prioritize these, since there may be more than we can deal with.

The key is that potential problems, such as material shortages and late customer orders, need to be detected before they occur. Obviously, once late orders or shortages have occurred, they are easy to detect (perhaps by way of angry calls from customers, or buyers getting urgent messages from production about shortages).

In a recent blog post, “Driving performance improvement through exception management”, Kerry Zuber states that in some organizations there can be as many as 30,000 action messages generated by a single MRP regeneration. This exemplifies the complexity of the MRP plan and the sheer volume of exceptions in many organizations. The organization cannot work all of these recommended actions, so which ones are the right ones to work? Which ones signal that something in the higher-level plan needs to be adjusted? In this type of environment, a second-level, automated process needs to be in place to prioritize actions and alert the responsible parties.

We need the capability to detect which future demand will be late due to misaligned supply schedules, capacity issues in the supply chain, and other problems. If the future state and its impact cannot be detected, then adjustments or contingencies cannot be put in place to avoid or mitigate them. And as mentioned, we also need to be able to determine which of the detected changes require action, and by whom.
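To make the idea of a second-level, automated prioritization pass a little more concrete, here is a hypothetical sketch. Each exception is scored by business impact and urgency, and only the top items are routed to the responsible planner or buyer; the fields, scoring rule and cut-off are invented for illustration.

```python
# Hypothetical second-level pass over raw MRP action messages: score each exception by
# business impact and urgency, then route only the top items to their responsible owners.

exceptions = [
    {"id": "EX-101", "type": "late_supply", "revenue_at_risk": 450_000, "days_until_impact": 10, "owner": "buyer_A"},
    {"id": "EX-102", "type": "reschedule_in", "revenue_at_risk": 12_000, "days_until_impact": 45, "owner": "buyer_B"},
    {"id": "EX-103", "type": "shortage", "revenue_at_risk": 900_000, "days_until_impact": 5, "owner": "planner_C"},
    # ... in practice this list can run to tens of thousands of messages
]

def priority(ex):
    # Higher revenue at risk and nearer-term impact both raise the priority.
    return ex["revenue_at_risk"] / max(ex["days_until_impact"], 1)

worklist = sorted(exceptions, key=priority, reverse=True)[:2]  # keep only the top items
for ex in worklist:
    print(f"Notify {ex['owner']}: {ex['id']} ({ex['type']}), "
          f"${ex['revenue_at_risk']:,} at risk in {ex['days_until_impact']} days")
```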

Responding To Change

Once changes are detected, we need a process to effectively implement these changes. This involves two key process and system capabilities:  simulation and collaboration.

We need to be able to simulate what-if scenarios to determine how best to deal with the change. For example, in many environments it is difficult to calculate the impact of a supplier changing commitments on a PO schedule without being able to simulate what that change does to the overall plan. In developing a response to the change, we need to be able to simulate multiple action alternatives and assess both how well they solve the problem and whether they are achievable.
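As a simple, hypothetical illustration of that kind of simulation, the sketch below takes a proposed change to a supplier’s commit date and works out which customer orders would go late as a result, before anyone agrees to the change. The data and lead-time assumptions are invented.

```python
# Hypothetical sketch: simulate a supplier pushing out a PO commit and see which
# customer orders would miss their need dates as a result.

purchase_orders = {"PO-778": {"part": "sensor", "qty": 500, "commit_day": 25}}
customer_orders = [
    {"id": "CO-1", "part": "sensor", "qty": 200, "need_day": 35},
    {"id": "CO-2", "part": "sensor", "qty": 300, "need_day": 50},
]
assembly_days = 7  # assumed time from component receipt to shippable product

def late_orders(new_commit_day):
    """Return the customer orders that would miss their need date under the new commit."""
    available_day = new_commit_day + assembly_days
    part = purchase_orders["PO-778"]["part"]
    return [co["id"] for co in customer_orders
            if co["part"] == part and co["need_day"] < available_day]

print("Original commit (day 25):", late_orders(25))  # expect no late orders
print("Commit slips to day 40:  ", late_orders(40))  # CO-1 would go late
print("Commit slips to day 45:  ", late_orders(45))  # CO-1 and CO-2 would go late
```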

These simulations cannot be done in a silo.  Any significant change needs to be collaborated on with the extended supply chain.  Collaboration is certainly required with other internal organizations and potentially with affected external suppliers.

I realize that the above is very high level and probably oversimplified, but I believe these general concepts are necessary in a complex manufacturing environment to optimize planning. Without an optimized plan, execution cannot be accomplished effectively and efficiently, and we have to resort to a lot of brute-force exercises, including expediting. Even the best of plans needs to be monitored for required adjustments, and we need effective processes and systems in place for responding to those changes.

Has your organization implemented a process for responding to change?

Posted in Response Management, Sales and operations planning (S&OP), Supply chain collaboration, Supply chain management, Supply chain risk management


The real value is in the response (In this case, responses to my blog post on forecast accuracy)

Published June 21st, 2010 by Bill DuBois 1 Comment

Many times the responses to a blog post are more valuable than the original post itself, especially when the original post poses a question. That was certainly true for “How accurate does the forecast need to be?” The following are some “nuggets” from the responses to the original post that are worth sharing.

Stephen Mills (who responded to the LinkedIn version of the post) talked about using what we know about past relationships, and other key variables that may come into play in the future, to determine what sales, demand and production should be. It’s the “shocks” to the forecast that can’t be built into the model, and how you respond to those shocks will determine the impact of an inaccurate forecast. Running scenarios to determine the impact of “future shocks” on your replenishment times, inventory policies, customer relationships and so on all factors into how accurate the forecast needs to be. Stephen also supplied the best quote related to forecast accuracy:

“Forecasts are either wrong or lucky.”

Stephen points out that a robust end-to-end supply chain will ensure that an inaccurate forecast doesn’t mean bad luck for the business. It’s only one piece of the puzzle.

Another respondent also pointed out that forecast accuracy is only one piece of the equation, and talked about forecast communication: making sure that everything from market trends to process improvements to “shocks” is discussed between functional partners in a timely manner, so adjustments can be made in time to improve the business. One relevant example was the case where demand for a product is unexpectedly soft, so marketing shifts promotions to help on the sales and supply chain side. Finance would also be in the loop so they could adjust their balance sheets.

Overall, I believe respondents agreed that how accurate the forecast needs to be is a function of factors such as the cumulative lead time, safety stock policies and flex capacity. Continuous improvement activities around lead times and quality will take some of the burden off those responsible for developing the forecast. Operational excellence and the ability to respond to “shocks” are a competitive advantage when unexpected demand opportunities present themselves. One response pointed out that this introduces an element of “time” to the issue of forecast accuracy: how good the forecast needs to be depends on the range of the forecast and on service level policies, especially for critical lead-time items.
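One way to see the lead-time connection is through the standard textbook safety-stock approximation (safety stock ≈ z × demand standard deviation × square root of lead time). This isn’t something any respondent prescribed, and the numbers below are hypothetical, but it shows how cutting the cumulative lead time shrinks the buffer needed to absorb a given level of forecast error:

```python
# Illustrative only: the common safety-stock approximation, with hypothetical values,
# to show how shorter lead times reduce the buffer needed for the same demand variability.
from math import sqrt

z = 1.65                   # service factor for roughly a 95% cycle service level
sigma_weekly_demand = 400  # standard deviation of weekly demand, in units (hypothetical)

for lead_time_weeks in (8, 4, 2):
    safety_stock = z * sigma_weekly_demand * sqrt(lead_time_weeks)
    print(f"{lead_time_weeks}-week lead time -> safety stock of roughly {safety_stock:,.0f} units")
```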

As pointed out, the forecast should not be left on its own but should be accompanied by all the background and risk information, so demand plans can be set, supply rationalized, and plans easily re-evaluated if the “shocks” hit. This was only a small sample of the great insights from the blog responses. Thanks to all those who participate in these discussions. It really is worth it!

Posted in Supply chain management


In-memory…In-style

Published June 2nd, 2010 by John Sicard 0 Comments

There was no avoiding the buzz surrounding SAP’s recent announcements and demonstration of its newly available in-memory database technology at the SAPPHIRE 2010 conference. There’s a good description of Hasso Plattner’s vision here, and some deep and relevant commentary from Dennis Moore here.  Having read previously that SAP acquired Sybase in part to gain access to its in-memory database technology, I was somewhat surprised to see a demonstration of what many are now calling “HassoDB,” which I can only assume is distinctly not Sybase…makes me wonder whether there will be an internal competition between these two platforms. Hasso has been working on the HassoDB for a number of years and even had a keynote on in-memory technology at SAPPHIRE in 2009. I can’t see him giving this up without a fight. With multiple applications already in production leveraging HassoDB (BWA, etc.), where would Sybase fit? Oh yes, I remember now…mobile computing…or was it buying revenue and improving SAP’s earnings per share? I’m just musing…forgive me.

As most who closely follow SAP know, SAP has been talking about in-memory databases for several years. To those who don’t follow the exciting world of database technology, you might even think SAP invented in-memory databases! The truth is that in-memory databases have been around far longer than most realize. Check out this Wikipedia page and you’ll get a sense of the dozens of innovators that led the way well before SAP stepped onto the field. While you won’t see Kinaxis technology mentioned in the mix, we’ve been at it for longer than most—perhaps even the longest. Despite our laser-like focus in this area, our senior architects continue to admit there’s still room for improvement and are in tireless pursuit of it. However, to borrow from the barber’s famous line in one of my favorite Leone spaghetti westerns (“My Name is Nobody”), they would also state: Faster than ‘us’, Nobody!

While I wasn’t present to witness it, our development of in-memory technology began 25 years ago with a handful of brilliant engineers in a basement who founded their own company, Cadence Computer. Their goal was simple: to invent something meaningful and technologically amazing. One of our Chief Architects, Jim Crozman, had an idea to run ERP in-memory—motivated by improving upon a then 30-plus hour run. As you might imagine, finding a machine to run the software that was in his head at that time proved to be impossible, so Jim, along with a small group of talented engineers, did the only thing they could think to do: They invented and constructed a specialized computer (the size of two refrigerators), which would become a dedicated in-memory database appliance—likely a world first. They would call it SP1018. We all know how technologists love a good acronym with some numbers attached to it! At that time, 4MB of RAM took an 8×10 circuit card—and wasn’t cheap! The cards were packaged into modules with a custom bit-slice MRP processing engine capable of 10M instructions per second that could process data in memory at its peak speed. Program and temporary working memory were in their own storage blocks, so the main memory space and bandwidth were reserved for the database. Up to 16 of those processing/memory modules were clustered with a high-speed backbone to form a single MIMD processing system that could do an MRP “what-if” simulation for a large dataset in minutes. We would go on to sell this computer to GE, at that time an IBM 3090 showcase center. The IBM 3090 had a whopping 192MB of RAM, and sitting next to it was our appliance with 384MB of RAM. IBM’s ERP analytics ran in over three hours, while our appliance replicated the same analytics in approximately three minutes.

Computer architecture and speed have evolved greatly since those trailblazing days. Inexpensive multi-core systems with big on-chip caches are capable of tens of billions of instructions per second. No need for custom hardware today! Speaking of on-chip caches, understanding and leveraging this resource has become the key to maximizing speed and throughput. Memory architecture remains 10 times slower than processor speed, so understanding how machines retrieve data, and how that data is treated within the core, is fundamental to in-memory database design. It takes the same amount of time to retrieve 1 byte of data as it does 1 block of data. This makes locality of reference a very important system design criterion: minimize the memory access cycles needed to get the data you need for processing. Organizing data compactly (e.g., eliminating duplication) and with optimal direct relationships and clustering makes for optimal processing speed.

At this year’s SAPPHIRE Conference, SAP explained how it has chosen a hybrid row/column orientation as the construct to store in-memory relational data. Indeed, columnar orientation helps with data locality and compaction of a column of data (obvious), and is most effective in circumstances where the use cases are driven by querying and reporting against a database that does not change or grow rapidly or often. Dennis Moore says it best in his recent blog:

“There are many limitations to a columnar main-memory database when used in update-intensive applications. Many SAP applications are update-intensive. There are techniques that can be used to make a hybrid database combining columnar approach for reading with a row-oriented approach for updates, using a synchronization method to move data from row to column, but that introduces latency between writing and reading, plus it requires a lot of CPU and memory to support the hybrid approach and all the processing between them.”

The challenges associated with columnar orientation will be felt most when attempting to drive performance of complex in-memory analytics. By analytics, I don’t mean complicated SQL statements, compound or otherwise. Rather, I refer to compute-intensive specialized functions, like ATP/CTP, netting, etc. That is, calculating the consequences of input events based upon a model of the business, particularly the supply chain. Columnar organization solves issues for a small subset of problems but makes most usages of the data much worse. Processing usually involves a significant subset of the fields on a small related set of records at a time. Since a columnar organization spreads a single record’s data across different areas of memory, it causes a bottleneck between memory, cache and processor: a single processor cache line ends up with a single piece of useful information, and multiple cache lines are then needed to get just one record’s data. For example, ATP for a single order needs a subset of demand, supply, order policies, constraints, BOMs, allocations, inventory, etc. Perhaps this is the main reason why the PhD students at the Hasso Plattner Institute of Design at Stanford reported only achieving a 10x improvement for their ATP analytic prototype using HassoDB, significantly slower than their raw query performance ratios.
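A conceptual sketch of that access-pattern difference is below. It is illustrative only: Python objects say nothing about real cache behavior, and the layouts and data are invented. The point is that an ATP-style calculation touches most of the fields on a few related records (which favors keeping a record’s fields together), while a column-scan report touches one field across a huge number of records (which is where columnar layouts shine).

```python
# Conceptual illustration only (a real in-memory engine works on raw memory layouts,
# not Python objects): contrast the access pattern of an ATP-style calculation with
# that of a column-scan report. All names and numbers are hypothetical.

NUM_ORDERS = 100_000

# Row-style layout: each record keeps all of its fields together.
orders_rows = [
    {"id": i, "qty": 10, "due": 20110301 + i, "part": "P1", "priority": i % 3}
    for i in range(NUM_ORDERS)
]

# Column-style layout: each field is stored as its own array.
orders_cols = {
    field: [record[field] for record in orders_rows]
    for field in ("id", "qty", "due", "part", "priority")
}

# ATP-style access: a few related records, but most of the fields on each record.
# A row layout finds them together; a column layout touches a separate array
# (and, in a real engine, separate cache lines) for every field.
order = orders_rows[42]
atp_inputs_row = (order["qty"], order["due"], order["part"], order["priority"])
atp_inputs_col = tuple(orders_cols[f][42] for f in ("qty", "due", "part", "priority"))

# Report-style access: one field across every record, where a columnar layout shines.
total_qty = sum(orders_cols["qty"])

print(atp_inputs_row == atp_inputs_col, total_qty)
```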

Millisecond query results are at most half of the equation—and definitely the easiest half. Don’t get me wrong, faster BI reports are great. If you’ve been waiting a few minutes for a report, and you can now get it in seconds, that’s real value. The trick is to go beyond “what is” and “what was” analysis, and add “what will be if” analysis. If done correctly, in-memory analytics can achieve astounding speeds as well. For example, the Kinaxis in-memory engine processes analytics (e.g. ATP) from a standing start (worst-case scenario) with datasets consisting of 1 million part records, generating 2 million planned order recommendations following the creation and processing of 27 million dependent demand records in 37 seconds, while handicapping the processor to a single core. Further, eight different users can simultaneously request the same complete calculations on eight what-if scenarios of the data and still get their independent answers in less than 60 seconds. No need for “version copy commands.” My personal favorite performance test done in our labs involves proving that the more users logged into the system, the less time it takes for them to receive their results (i.e. the average time per request goes down). As impressive as these benchmarking numbers are, these tests do not represent typical user interaction (i.e. they batch full-spectrum analytic runs). If done correctly, massive in-memory databases with intensely complex analytics can scale to thousands of users on a single instance (think TCO here), each capable of running their own simulations—change anything, anytime, and simultaneously compare the results of multiple scenarios in seconds.

RapidResponse simultaneously measuring eight scenarios for a user using weighted scorecard

All this speed and scale becomes valuable when businesses can bring about new and improved processes capable of delivering breakthrough performance improvements. With collaboration gaining traction as the new supply chain optimizer, companies are driving innovation toward this area and testing in-memory databases in new ways. For example, not only is it important to monitor changes in the supply chain and the potential risk/opportunity they create, companies now want to know “who” is impacted, “who” needs to know, and “who” needs to collaborate. While this seems like an obvious value proposition, the science involved in delivering this on a real-time basis is staggering.

I’m happy to see SAP draw such attention to the merits of in-memory databases. It serves to validate 25 years of our heritage, our focused research and development, and surely validates the investments made by some of SAP’s largest customers (Honeywell, Jabil, Raytheon, Lockheed Martin, RIM, Nikon, Flextronics, Deere, and many more) to leverage RapidResponse. Whether related to Sales and Operations Planning, Demand Management, Constrained Supply Allocation, Multi-Enterprise Supply Chain Modeling, Clear-to-Build, Inventory Liability Reduction, What-if Simulation, Engineering Change Management, etc., these great companies are experiencing and benefiting from the speed of in-memory technology today.

Why wait?

Posted in Products