Posts Tagged ‘information management’

Data is the new oil: know sooner, act faster

Published June 4th, 2012 by Trevor Miles @milesahead

Rami Karjian of Flextronics casually threw out the comment that “data is the new oil” during a session at the recent Gartner Supply Chain Conference in Palm Desert. The session, titled “New Business Rules – The Flextronics Next-Generation Supply Chain Strategy”, was hosted by GT Nexus. Given Flextronics’ role as a contract manufacturer in the high-tech/electronics supply chain space, with aspirations of providing supply chain managed services, it is immediately obvious how they can make very effective use of data, especially extended or “true” end customer demand data and “true” component availability data. Of course, this is true for any number of participants in the supply chain, and there has been a lot written about the use of social platforms to capture ‘big data’ on consumer behavior. But anyone who has worked with the likes of Flextronics will recognize that the ‘speed’ of their business (driven by low margins, high volumes, and rapidly changing products) is like no other. Knowing sooner and acting faster are core capabilities and competencies they must have in order to survive. It is easy to see where data fits into this equation.

But I hope Rami will not be upset with me when I state that the term seemed too catchy to have come from a supply chain guy (I am writing as a supply chain guy, so I mean no disrespect!). The catchiness of the phrase just reeks of Analyst or Strategy Consultant. So I went Googling for the term. I was correct. The term seems to have been used originally by Andreas Weigend of Stanford in an interview with Forbes dating from September 2011. (One caution: the interview is sponsored by SAP.) In the interview Weigend says, with my emphasis, that

“Most people try to play the social media game faster, but it is not your game to play anymore. To be successful, you have to understand what the game is made of . . . it is data and identity. This forces the change from a transactional economy to a relationship economy. The companies who get this, will win.”

There is no doubt that Weigend is referring to the customer or demand side of the supply chain, really to ‘big data’ epitomized by social media platforms such as Facebook and Twitter. But here is the problem for social media: customers aren’t compelled to engage in any manner with the OEM; suppliers are. So while social data can undoubtedly be mined for information, it cannot be relied on as the sole source of information to drive the business. Instead, social data can be used to augment, to enhance, the structured data that is being used to drive the business.

I was struck by what Ray Wang said in this context in the same Forbes interview, namely that

“The market has moved beyond just marketing, service, and support use cases,” says Wang. “We see 43 use cases that span across key enterprise business processes that impact eight key functional areas, from external facing to internal facing including PR/marketing, sales, service and support, projects, product life cycles, supply chain, human capital management, and finance.”

Companies will get real value on the internal and supply sides from platforms that are social in nature, whereby social concepts will finally make collaboration a reality by adding context and nuance to ‘dumb’ EDI exchanges of data between computers. Decision making depends on cross-functional and cross-organizational exploration and discovery through what-if analysis, followed by negotiation. These capabilities are not supported by EDI and ERP platforms, which instead focus on executional or transactional processes. This point was made very strongly at the Gartner Supply Chain Conference by Jim Cafone of Pfizer, who said that

“ERPs are great if you want to talk to yourself, but who can afford to do that in today’s value chains?”

Jim was referring to the fact that much of today’s value/supply chain exists outside the four walls of the organization because so much of it has been outsourced. Staying with ‘data is the new oil’, the conclusion is that ERP is like crude oil: it needs to go through refinement and be augmented by additives to be useful. It needs to be refined and augmented because, in order to operate a value/supply chain effectively today, it is necessary to reach across functional and organizational boundaries even for so-called structured data, making even structured data ‘big data’. As Jim Cafone stated, without reaching across the organizational boundaries, you are only ‘talking to yourself’. We cannot operate today’s supply chains effectively with this lack of visibility. (And let’s face it, many ERP deployments resemble an oil spill: the mess you’re in isn’t what you expected, and it’s going to cost an awful lot to fix.)

To be of any value, the data provided by visibility needs to be ‘refined’ by being broken down into ‘specific useful parts’, turning visibility into actionable insight. I don’t mean to diminish the value of visibility because without it there can be no actionable insight. The true value comes from the supply chain orchestration made possible by actionable insight.

So let us decompose ‘actionable insight’ quickly.

Turning Data into Visibility
Existing social network platforms, such as Facebook, are principally about sharing or “pushing” information, but most business processes require an interaction between at least two people, each of whom is responsible for an aspect of the decision, and who need to reach a consensus and compromise in order to take action. Often these interactions require input from as many as 5-10 people. Identifying who needs to know can often be an insurmountable barrier to reaching a timely decision. I want to draw the distinction here between the people who need to know, which implies responsibility to take action, and the people who want to know, which implies an interest but not a responsibility. Existing social technologies address the ‘want to know’, but not the ‘need to know’ aspect. On the other hand, existing machine-to-machine exchanges of structured data between organizations are desperately in need of the context and nuance provided by social platforms. Having both turns data into visibility.

Turning Visibility into Insight
Knowing the state of something – inventory, capacity, etc. – in the supply chain, which is what visibility provides, is useful, but not valuable. Value is derived from knowing what that state means to your financial and operational metrics, and, perhaps even more importantly, to your projected operational and financial metrics. To achieve this insight you must be able to compare the current state to a desired state and to evaluate whether the difference is important, which you can only determine by having a complete representation of your supply chain – BOMs, routings, lead times, etc. – so that you can link cause to effect. You cannot do this by ‘talking to yourself’. Only if you have an end-to-end representation of your supply chain can you link a tsunami in Japan to your revenue projections for the next two quarters. Of course many people can do this given enough time. Doing this quickly – knowing sooner and acting faster – is what brings value. While events as large as the Japanese tsunami make the linkages between the supply chain nodes obvious, the daily operation of a supply chain is subjected to thousands of little ‘tsunamis’, the compounded effect of which, in terms of reduced revenue and increased costs, can often be the difference between making or missing a quarter.
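The cause-to-effect linkage described above can be sketched as a toy example (Python, with invented part names and revenue figures, purely illustrative): given a bill of materials, a disruption to one component is traced to every finished product that depends on it, and to the projected revenue those products carry.

```python
# Hypothetical bill of materials: finished product -> components it requires.
bom = {
    "phone":  ["chip_A", "battery", "screen"],
    "tablet": ["chip_A", "screen"],
    "router": ["chip_B"],
}

# Hypothetical projected quarterly revenue per product, in $M.
quarterly_revenue = {"phone": 500.0, "tablet": 300.0, "router": 200.0}

def revenue_at_risk(disrupted_component):
    """Trace a component disruption to affected products and their revenue."""
    affected = [p for p, parts in bom.items() if disrupted_component in parts]
    return affected, sum(quarterly_revenue[p] for p in affected)

affected, at_risk = revenue_at_risk("chip_A")
assert affected == ["phone", "tablet"]
assert at_risk == 800.0  # $800M of projected revenue depends on chip_A
```

A real end-to-end model would carry routings, lead times, and alternate sources as well, but even this skeleton shows why the linkage is impossible when each node sees only its own data.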

Turning Insight into Actionable Insight
Knowing that something is not quite right – visibility – and knowing the financial and operational impact or consequence of this mismatch – insight – is only valuable if you can act on this insight quickly – actionable insight. To do this you must be able to link the impact not only to the cause, but to the person responsible for acting on it. People take action. Of course, under certain circumstances decisions can be automated, but only mundane decisions, decisions that make little difference, can be automated. ‘Big’ decisions should always be left to human judgment. And to do that we need to link the cause and effect to the people who need to take action. Hence actionable insight.

Turning Actionable Insight into Orchestration
Supply chain is a ‘team sport’. Each function may have its own span of control and metrics, but any decision made by one function will almost always have an impact on at least one other function, and more likely multiple impacts on multiple functions. Yet currently each node in the supply chain, in fact each function in the supply chain, usually operates in isolation: Engineering makes design changes with little thought for how they affect Manufacturing or Procurement; Marketing plans promotions with little consideration of available capacity or material availability; Sales accepts order quantities and delivery dates with little consideration of the cost required to achieve both. Orchestration is the coordination and synchronization of these separate functions into a unified response to real demand. To achieve orchestration requires a platform on which a team of people across functional and even organizational boundaries can explore and understand the financial and operational consequences of possible actions quickly and collectively.

Turning Orchestration into Competitive Advantage
Too often when I write about these topics I forget to include the word ‘projected’. Without a doubt a great deal of value can be derived from risk recovery, but risk avoidance and risk mitigation are even more valuable. Being able to determine that revenue or margin targets will (future) not be achieved by the end of the quarter if we continue to operate in the current manner is a lot more valuable than being able to determine why we have (past) missed the quarter. Knowing sooner that something will happen if we do not change course allows us time to investigate ways to avoid the risk or to seize the opportunity. But knowledge without action brings little benefit. I have written in the past about George Stalk’s concept of Competing Against Time and the related OODA loop – Observe, Orient, Decide, Act – from the US military strategist Colonel John Boyd, both of which explain very crisply the competitive advantage of knowing sooner and acting faster.

Posted in Best practices, Control tower, Milesahead, Supply chain collaboration, Supply chain management

Turning Supply Chain Risk into Opportunity at Jabil

Published February 24th, 2012 by Lori Smith

The third in our SupplyChainBrain video interview series features Jabil. These videos are jam-packed with great content, and I suggest you check them out (free registration to view, but well worth it!)

The cost of managing risk can’t be allowed to outweigh the value of the supply chain, says Joe McBeth, vice president, supply chain, at Jabil. But there are unknowns that have to be planned for.

As one of the largest manufacturing service providers in the world, Jabil is involved in everything from full product design to logistics, assembly, and supply chain services for some of the biggest businesses around. That guarantees complexity to contend with.

“The challenges are consistent with some of the things we’ve seen in the past, but they are more dramatic than they were,” says McBeth. “The complexity of globalization, the number of nodes, the number of suppliers, the number of customers in the industries we serve – they all add a large amount of complexity to the equation.”

Added to that are the unknowns like the tsunami and earthquake combo that disrupted so many supply chains last year in Japan.

With a title like vice president of supply chain, McBeth is understandably a bit biased in assessing the importance of supply chain management. It’s simply the “most important competitive advantage” that Jabil has in its space, in his view. He acknowledges that that arena is filled with good players. Everyone is working from small margins, and it’s difficult to stand out or to be unique. McBeth feels that Jabil is just that because, quite aside from its product offering, its supply chain excellence “does create some separation” from the competition.

To have that kind of world-class supply chain, it’s imperative to have a data system and tool set that allow one to manage the complexity that happens daily. “The guy with the best information is always going to win,” McBeth says.

That’s no mere academic concern for Jabil. The manufacturer has 12,000 active suppliers and more than 250 major accounts. One needs a dependable data tool to plan and understand risk in order to keep ahead of the curve in that complex environment, he says. Sometimes you develop your own supply chain, sometimes you inherit one. Nevertheless, in all cases one needs to comprehend the risks around the master schedule and how to commit to customers’ demand signals. That necessity drove Jabil to invest in the RapidResponse solution from Kinaxis, McBeth says.

“We needed a tool set that was fast, that was easy to use and could do multiple scenarios so that we come out with better answers.”

As the company expands around the world, the supply chain needs to be continually reconfigured to support the additional manufacturing sites. “Having a control tower that feeds the best information, that does the modeling – that will allow us to be ahead of the game, to lower our risk, lower our costs and produce greater value for the end customer,” McBeth says.

It’s important to have a tool that indicates the course corrections that must be made. But McBeth says he envisions a tool one day that won’t just respond to changes but will anticipate them and take proactive measures.

To view the video in its entirety, click here

Posted in Control tower, Supply chain management, Supply chain risk management

How are spreadsheets like cockroaches?

Published August 11th, 2010 by John Westerveld
Cockroach closeup (image via Wikipedia)

“…They’ve been in existence for decades, they can spread like wildfire, and no one has quite figured out how to stop their proliferation – even if they really, really want to.”

I came across this entertaining rant on the Sourcing Innovation blog. The idea is that despite many attempts by IT to get key corporate data out of spreadsheets and into dedicated systems, spreadsheets keep coming back.

The doctor’s post was prompted by an article indicating that BI vendors have resigned themselves to the fact that spreadsheets are here to stay and have devised means to incorporate spreadsheet data into their systems, despite studies showing that 80-90% of spreadsheets contain serious errors. You know it’s true… How many times have we been in the situation where user A’s spreadsheet showed different results than user B’s spreadsheet for the same thing? OK, so there are a few errors… not a big deal, right? Wrong! To see the impact spreadsheet errors can have, check out this article from a few years ago that outlines eight of the worst spreadsheet blunders to date. Yikes!

So why do people rely so much on spreadsheets? There are many reasons I’m sure, but the top culprits are as follows:

  • The business system (BI, CRM, ERP, etc) can’t present the information the way they need to see it
  • The business system doesn’t support what-if modelling
  • The business system is too darn hard to use
  • The business system logic is fixed and can’t be changed (within a human lifetime)

Until we get enterprise tools that can change these factors, spreadsheets will continue to proliferate regardless of how much bug spray we throw around.

Why do you continue to use spreadsheets?  Do you have any great spreadsheet blunders to share?  (We won’t tell… honest!)


Posted in Products, Supply chain management


Published June 2nd, 2010 by John Sicard

There was no avoiding the buzz surrounding SAP’s recent announcements and demonstration of its newly available in-memory database technology at the SAPPHIRE 2010 conference. There’s a good description of Hasso Plattner’s vision here, and some deep and relevant commentary from Dennis Moore here.  Having read previously that SAP acquired Sybase in part to gain access to its in-memory database technology, I was somewhat surprised to see a demonstration of what many are now calling “HassoDB,” which I can only assume is distinctly not Sybase…makes me wonder whether there will be an internal competition between these two platforms. Hasso has been working on the HassoDB for a number of years and even had a keynote on in-memory technology at SAPPHIRE in 2009. I can’t see him giving this up without a fight. With multiple applications already in production leveraging HassoDB (BWA, etc.), where would Sybase fit? Oh yes, I remember now…mobile computing…or was it buying revenue and improving SAP’s earnings per share? I’m just musing…forgive me.

As most who closely follow SAP know, SAP has been talking about in-memory databases for several years. To those who don’t follow the exciting world of database technology, you might even think SAP invented in-memory databases! The truth is that in-memory databases have been around for longer than most realize. Check out this Wikipedia page and you’ll get a sense of the dozens of innovators that led the way well before SAP stepped onto the field. While you won’t see Kinaxis technology mentioned in the mix, we’ve been at it for longer than most—perhaps even the longest. Despite our laser-like focus in this area, our senior architects continue to admit there’s still room for improvement and are in tireless pursuit of it. However, to borrow from the barber’s famous line from one of my favorite Leone spaghetti westerns (“My Name is Nobody”) – they would also state – Faster than ‘us’, Nobody!

While I wasn’t present to witness it, our development of in-memory technology began 25 years ago with a handful of brilliant engineers in a basement who founded their own company, Cadence Computer. Their goal was simple: to invent something meaningful and technologically amazing. One of our Chief Architects, Jim Crozman, had an idea to run ERP in-memory—motivated by improving upon a then 30-plus hour run. As you might imagine, finding a machine to run the software that was in his head at that time proved to be impossible, so Jim, along with a small group of talented engineers, did the only thing they could think to do: They invented and constructed a specialized computer (the size of two refrigerators), which would become a dedicated in-memory database appliance—likely a world first. They would call it SP1018. We all know how technologists love a good acronym with some numbers attached to it! At that time, 4MB of RAM took an 8×10 circuit card—and wasn’t cheap! They were packaged into modules with a custom bit slice MRP processing engine capable of 10M instructions per second that could process data in memory at its peak speed. Program and temporary working memory were in their own storage blocks, so the main memory space and bandwidth were reserved for the database. Up to 16 of those processing/memory modules were clustered with a high speed backbone to form a single MIMD processing system that could do an MRP “what-if” simulation for a large dataset in minutes. We would go on to sell this computer to GE, at that time an IBM 3090 showcase center. The IBM 3090 had a whopping 192MB of RAM, and sitting next to it, our appliance with 384MB of RAM. IBM’s ERP analytics ran in over three hours, while our appliance replicated the same analytics in approximately three minutes.

Computer architecture and speed have evolved greatly since those trailblazing days. Inexpensive multi-core systems with big on-chip caches are capable of tens of billions of instructions per second. No need for custom hardware today! Speaking of on-chip caches, understanding and leveraging this resource has become the key to maximizing speed and throughput. Memory remains roughly 10 times slower than the processor, so understanding how machines retrieve data, and how that data is treated within the core, is fundamental to in-memory database design. It takes about the same amount of time to retrieve 1 byte of data as it does a full block, which makes locality of reference a very important system design criterion: minimize the memory access cycles needed to get the data you need for processing. Keeping data organized and compact (e.g. eliminating duplication), with optimal direct relationships and clustering, makes for optimal processing speed.
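The locality-of-reference point can be illustrated with a toy example (Python, purely illustrative; this is not Kinaxis code): both traversals below compute the same total, but the row-major walk visits neighbouring elements in order, while the column-major walk strides across rows. In a low-level language operating on a contiguous row-major array, the first pattern fills each fetched cache line with useful values and the second wastes most of each line.

```python
def make_matrix(rows, cols):
    """Build a rows x cols matrix of small integers."""
    return [[(r * cols + c) % 7 for c in range(cols)] for r in range(rows)]

def sum_row_major(m):
    """Walk a row at a time: successive accesses are memory neighbours."""
    total = 0
    for row in m:
        for value in row:
            total += value
    return total

def sum_col_major(m):
    """Walk a column at a time: each access jumps a whole row ahead."""
    total = 0
    for c in range(len(m[0])):
        for r in range(len(m)):
            total += m[r][c]
    return total

matrix = make_matrix(200, 300)
# Same answer either way; only the memory access pattern differs.
assert sum_row_major(matrix) == sum_col_major(matrix)
```

Python's nested lists hide the raw memory layout, but the access-pattern contrast is the same one that drives in-memory database design: get all the data you need from each memory fetch.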

At this year’s SAPPHIRE Conference, SAP explained how it has chosen a hybrid row/column orientation as the construct to store in-memory relational data. Indeed, columnar orientation helps with data locality and compaction of a column of data (obvious), and is most effective in circumstances where the use cases are driven by querying and reporting against a database that does not change or grow rapidly or often. Dennis Moore says it best in his recent blog:

“There are many limitations to a columnar main-memory database when used in update-intensive applications. Many SAP applications are update-intensive. There are techniques that can be used to make a hybrid database combining columnar approach for reading with a row-oriented approach for updates, using a synchronization method to move data from row to column, but that introduces latency between writing and reading, plus it requires a lot of CPU and memory to support the hybrid approach and all the processing between them.”

The challenges associated with columnar orientation will be felt most when attempting to drive performance of complex in-memory analytics. By analytics, I don’t mean complicated SQL statements, compound or otherwise. Rather, I refer to compute-intensive specialized functions, like ATP/CTP, netting, etc. – that is, calculating the consequences of input events based upon a model of the business, particularly the supply chain. Columnar organization solves issues for a small subset of problems but makes most usages of the data much worse. Processing usually involves a significant subset of the fields on a small related set of records at a time. Since a single record’s data is spread across different areas of memory by a columnar organization, it causes a bottleneck between memory->cache->processor: a single processor cache line ends up with a single piece of useful information, and multiple cache lines are then needed to get just one record’s data. For example, ATP for a single order needs a subset of demand, supply, order policies, constraints, BOMs, allocations, inventory, etc. Perhaps this is the main reason why the PhD students at the Hasso Plattner Institute reported achieving only a 10x improvement for their ATP analytic prototype using HassoDB, significantly slower than their raw query performance ratios.
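The row-versus-column trade-off can be made concrete with a minimal sketch (Python, with invented field names): the same order data is stored both ways. A one-field scan is natural in the columnar layout, but reassembling a whole record – the ATP-style access pattern described above – must probe every column, each living in a separate region of memory.

```python
# Row-oriented: one record is one contiguous unit.
rows = [
    {"order": 1, "qty": 10, "due": "2010-06-01", "priority": 2},
    {"order": 2, "qty": 25, "due": "2010-06-03", "priority": 1},
    {"order": 3, "qty": 5,  "due": "2010-06-07", "priority": 3},
]

# Columnar: one field is one contiguous array.
cols = {
    "order":    [1, 2, 3],
    "qty":      [10, 25, 5],
    "due":      ["2010-06-01", "2010-06-03", "2010-06-07"],
    "priority": [2, 1, 3],
}

# Reporting/BI-style column scan: columnar shines, one contiguous array.
total_qty = sum(cols["qty"])

# ATP-style whole-record read: columnar must touch every column for index i.
def record_from_columns(cols, i):
    """Reassemble record i by probing each column array in turn."""
    return {field: values[i] for field, values in cols.items()}

assert record_from_columns(cols, 1) == rows[1]
assert total_qty == sum(r["qty"] for r in rows)
```

In the row layout, `rows[1]` is a single lookup; in the columnar layout, the same record costs one probe per field, which is the memory-bottleneck effect the paragraph describes.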

Millisecond query results are at most half of the equation—and definitely the easiest half. Don’t get me wrong, faster BI reports are great. If you’ve been waiting a few minutes for a report, and you can now get it in seconds, that’s real value. The trick is to go beyond “what is” and “what was” analysis, and add “what will be if” analysis. If done correctly, in-memory analytics can achieve astounding speeds as well. For example, the Kinaxis in-memory engine processes analytics (e.g. ATP), from a standing start (worst-case scenario) with datasets consisting of 1 million part records, generating 2 million planned order recommendations following the creation and processing of 27 million dependent demand records in 37 seconds, while handicapping the processor to a single core. Further, eight different users can simultaneously request the same complete calculations on eight what-if scenarios of the data and still get their independent answers in less than 60 seconds. No need for “version copy commands.” My personal favorite performance test done in our labs involves proving that the more users logged into the system, the less time it takes for them to receive their results (i.e. average time per request goes down). As impressive as these benchmarking numbers are, these tests do not represent typical user interaction (i.e. batching full spectrum analytic runs). If done correctly, massive in-memory databases with intensely complex analytics can scale to thousands of users on a single instance (think TCO here), each capable of running their own simulations—change anything, anytime, and simultaneously compare the results of multiple scenarios in seconds.

RapidResponse simultaneously measuring eight scenarios for a user using weighted scorecard

All this speed and scale becomes valuable when businesses can bring about new and improved processes capable of delivering breakthrough performance improvements. With collaboration gaining traction as the new supply chain optimizer, companies are driving innovation toward this area and testing in-memory databases in new ways. For example, not only is it important to monitor changes in the supply chain and the potential risk/opportunity they create, companies now want to know “who” is impacted, “who” needs to know, and “who” needs to collaborate. While this seems like an obvious value proposition, the science involved in delivering this on a real-time basis is staggering.

I’m happy to see SAP draw such attention to the merits of in-memory databases. It serves to validate 25 years of our heritage, our focused research and development, and surely validates the investments made by some of SAP’s largest customers (Honeywell, Jabil, Raytheon, Lockheed Martin, RIM, Nikon, Flextronics, Deere, and many more) to leverage RapidResponse. Whether related to Sales and Operations Planning, Demand Management, Constrained Supply Allocation, Multi-Enterprise Supply Chain Modeling, Clear-to-Build, Inventory Liability Reduction, What-if Simulation, Engineering Change Management, etc., these great companies are experiencing and benefiting from the speed of in-memory technology today.

Why wait?

Posted in Products

Human intelligence and machine stupidity: Supply chains are about effectiveness, not only efficiency

Published May 13th, 2010 by Trevor Miles @milesahead

Before I start on the body of my blog posting, let me state unequivocally that I believe, no, that I know, that computers and software have a huge role to play in decision making and execution in a wide range of business functions.  After all, I have worked in the software industry for the past 25 years.  I am also not one of those wacky people who think that machines are going to take over the world.  However, I am one of those people who believe that humans have unique skills that no machine is able to match currently, particularly the ability to evaluate nuance, uncertainty, and risk.  Computers and programs, on the other hand, are capable of processing huge amounts of data far more quickly than humans, but they always assume that the data they are fed and the algorithms/heuristics they are using to analyse the data are absolutely correct.  In other words, computers are hopeless at evaluating nuance, uncertainty, and risk.

All too often we don’t put processes in place which couple the human ability to evaluate nuance “intelligently” with the machine ability to evaluate vast amounts of data “dumbly”.  All too often we confuse efficiency with effectiveness, and pursue efficiency over effectiveness, exemplified by the use of the term “machine intelligence”.

Nothing brings this out more clearly than the recent stock market behaviour. All the “quants” were quick to identify “human error” initially. Not only did they say it was human error, but that it was female error. I’m surprised they didn’t suggest she was a blonde too. After all, we know how they confuse their B’s with their M’s. How ridiculous! Now that calmer analysis has taken place, it would seem that nothing of the sort happened, and no “female error” either. There is a very interesting article – I am sure there must be many more out there – in the Wall Street Journal (WSJ) by Aaron Lucchetti titled “Exchanges Point Fingers Over Human Hands” that analyzes what really went on last Thursday. Lucchetti makes no bones about the fact that this is a man-vs-machine tussle:

“In the man-vs.-machine argument for financial markets, proponents of technology say machines do it faster and cheaper. Those in support of human involvement say people can use their experience and pull the emergency brake when the computers, or their programmers, make mistakes.

But when that happened Thursday, it appeared that some humans couldn’t react quickly enough, while those using computers just kept pushing the market lower.”

I would argue that human involvement should have been used to prevent the situation from occurring, not just as an “emergency brake”.

Let’s start by understanding the role of the “quants” in financial organizations. A “quant” is short for a quantitative analyst. These are math and physics whizzes who have been brought into financial institutions to create mathematical models to evaluate market behaviour, particularly algorithmic trading. Algorithmic trading is a trading system that utilizes very advanced mathematical models for making transaction decisions in the financial markets. The strict rules built into the model attempt to determine the optimal time for an order to be placed that will cause the least amount of impact on a stock’s price. Large blocks of shares are usually purchased by dividing the large share block into smaller lots and allowing the complex algorithms to decide when the smaller blocks are to be purchased. Algorithmic trading is most commonly used by large institutional investors due to the large number of shares they purchase every day. Complex algorithms allow these investors to obtain the best possible price without significantly affecting the stock’s price and increasing purchasing costs.
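The order-slicing idea can be sketched in a few lines (a toy illustration only, not any real trading algorithm; the cap parameter is invented): a large parent order is divided into child orders no bigger than a chosen cap, so that no single trade is large enough to move the price much.

```python
def slice_order(total_shares, max_child_size):
    """Split a large parent order into child orders of bounded size."""
    children = []
    remaining = total_shares
    while remaining > 0:
        child = min(max_child_size, remaining)  # never exceed the cap
        children.append(child)
        remaining -= child
    return children

children = slice_order(100_000, 15_000)
assert sum(children) == 100_000   # nothing lost or duplicated
assert max(children) <= 15_000    # no child big enough to move the market
```

A real algorithm would also decide *when* to release each child order based on volume and price signals; the point here is only the decomposition of one big block into many small ones.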

Let me come clean: I am an engineer, so I am a “quant” by nature and by training. But I had the good fortune to study “decision under uncertainty” at the PhD level. During this time I also came across “fuzzy logic”. Forget the math and theory. Fundamentally what it comes down to is that some people (quants) believe that any and all systems can be modelled exactly – given enough time and insight – and that the models can then be used to predict behaviour under any other circumstances. I think this is a load of hogwash. No mathematical model is ever complete and data is never 100% accurate. However, when computers are used by humans to identify “directionally correct” decisions, they are of huge benefit. In other words, the model of the supply chain may indicate a 5.21-point improvement in gross margin, from 23.42% to 28.63%, if supplier A is used rather than supplier B. I would interpret the result to mean that it is highly likely that we could increase gross margin by more than 2.5% by using supplier A. It would probably have taken a human months to gather, collate, and analyse the data by hand, and probably with a great deal of “human error”. The same analysis could be achieved in a few hours using a computer, provided some of the primary data was already available.
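The arithmetic behind that “directionally correct” reading is easy to make explicit (the figures are from the example above; the halving is simply my discount for model and data error, and the 2.5% threshold is illustrative):

```python
margin_b = 23.42   # gross margin with supplier B, in percent
margin_a = 28.63   # modelled gross margin with supplier A, in percent

# The model's claim: a 5.21-point improvement.
modelled_gain = round(margin_a - margin_b, 2)

# A directionally-correct interpretation: trust only about half of it.
conservative_gain = modelled_gain / 2

assert modelled_gain == 5.21
assert conservative_gain > 2.5  # still worth switching to supplier A
```

The precision of the model output (two decimal places) is exactly what should *not* be taken literally; the decision rests on the conservative bound, not the point estimate.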

There is an interesting little snippet in the Wikipedia description of quants which I think is of particular relevance.

“Because of their backgrounds, quants draw from three forms of mathematics: statistics and probability, calculus centered around partial differential equations (PDE’s), and econometrics. The majority of quants have received little formal education in mainstream economics, and often apply a mindset drawn from the physical sciences. Physicists tend to have significantly less experience of statistical techniques, and thus lean to approaches based upon PDEs, and solutions to these based upon numerical analysis.”

Statistical techniques are built on uncertainty, or randomness.  Physicists, mathematicians, and engineers, on the other hand, hate uncertainty, and spend enormous amounts of time looking deeper and deeper into atoms trying to prove that everything is predictable, if only we had the knowledge and wisdom to understand the observations.  They bring this perspective to the analysis of financial market behaviour, as the Wikipedia quote points out.  Einstein once stated that “God doesn’t play dice with the universe” – a statement he came to regret, incidentally.  He was questioning the notion of randomness as opposed to determinism: determinism holds that every event in nature has a particular cause, while randomness describes aspects of nature that have only a probability, as in quantum uncertainty.  My engineering training was replete with the deterministic attitude that informed Einstein’s statement, as was the training of my fellow engineers and scientists.  So the quants are in constant pursuit of the ultimate model that will describe all situations, so that they can predict the movement of the market under any and all conditions.  This attitude is very common in supply chain management too.  I think it is flawed from the start.

In a separate WSJ article titled “Did a Big Bet Help Trigger ‘Black Swan’ Stock Swoon?”, it is clear that what happened last week Thursday was not “human error” but rather “model error”, in the sense that an over-reliance on computer models drove market behaviour.

The non-quants have been fighting back ever since the market crash in 2008 and the whole CDO mess.  A good example is Scott Patterson’s book “The Quants: How a New Breed of Math Whizzes Conquered Wall Street and Nearly Destroyed It”.  It is a fascinating read, and very instructive, though fairly predictable in the blame game.  What I found most interesting was a comment by a reader of a Globe and Mail review of the book.  Tellingly, the review is titled “Quants accept no blame for financial crisis”.  Can’t be more explicit than that.  The reader wrote that

“In finance, you have a lot of people in high positions who are surprisingly innumerate (MBAs and the like) – they didn’t really understand what the quants were doing but didn’t mind as long as they were making money. Let’s not forget who hired the quants in the first place! When you combine this lack of technical oversight with poor regulation, you have a toxic mix.”

I believe we have a very similar situation in manufacturing operations, particularly supply chain management.  Senior management doesn’t really understand the complexities of operations and relies too heavily on the quants.  As long as they see inventories go down and stock prices go up, all is well.

To go back to Lucchetti’s article in the WSJ, the first act in the blame game for the market behaviour last week Thursday was to focus on “human error”.  Clearly a first salvo from the quants.  Later in the article, Lucchetti quotes Jamie Selway of White Cap Trading as stating that

“Markets are a mix of technology and human judgment. Thursday, we saw far too much technology and not enough (human) judgment.”

I could not agree more.  I think I am going to print out that statement in 94 pt font and put it in a frame on my wall.  I would like to see everyone in supply chain management follow my example.

All too often I see this same behaviour in supply chain management, where optimization engines are thrown at a problem.  I do not have much of an issue with the use of optimization engines themselves.  What I struggle with is the slavish belief that the results are accurate to the nth decimal.  There is no understanding of the likelihood of achieving the optimum, nor of the degree to which the model is inaccurate, nor of the degree to which the result is affected by inaccurate data.  What happened in the stock markets is a classic example of relying too much on machines in the pursuit of efficiency.  The parallel in the supply chain space is that we rely too much on optimization, be that Lean or mathematical optimization.  Do you know the first sign that the quants have taken over your supply chain?  It’s when you hear that your data isn’t clean enough after you have already spent millions implementing an ERP system and countless hours “cleaning” data.
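A simple way to test whether an “optimal” answer deserves its nth decimal is to jitter the input data and see whether the decision itself survives. The sketch below compares two hypothetical sourcing plans under a made-up objective (unit cost plus a lead-time penalty), then repeats the comparison with a few percent of input noise. The cost function, penalty rate, and data are all assumptions for illustration only.

```python
# Hedged sketch: sensitivity of an "optimal" sourcing choice to dirty data.
# If the winner flips often under small perturbations, the fourth decimal
# of the objective value is noise, not insight.
import random

random.seed(7)

def plan_cost(unit_cost, lead_time_days, expedite_rate=50.0):
    # Illustrative objective: unit cost plus a penalty for lead times over 14 days.
    return unit_cost + expedite_rate * max(0, lead_time_days - 14) / 14

# Nominal data says plan B wins by a modest margin.
nominal_a = plan_cost(102.0, 15)   # unit cost 102, 15-day lead time
nominal_b = plan_cost(104.0, 14)   # unit cost 104, 14-day lead time

flips = 0
trials = 1_000
for _ in range(trials):
    # Assume costs are known to +/-2% and lead times to +/-10%.
    a = plan_cost(102.0 * random.uniform(0.98, 1.02), 15 * random.uniform(0.9, 1.1))
    b = plan_cost(104.0 * random.uniform(0.98, 1.02), 14 * random.uniform(0.9, 1.1))
    if (a < b) != (nominal_a < nominal_b):
        flips += 1

print(f"decision flipped in {flips / trials:.0%} of perturbed runs")
```

If the decision flips in a large fraction of runs, the honest conclusion is “the plans are roughly equivalent; choose on other grounds”, not “plan B is optimal to four decimal places”.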

I am not suggesting that we unplug the ERP and APS systems we have deployed over the past 20 years.  I think there is a huge amount of value that has been received from the use of these tools.  But they are tools.  Let us treat them in that manner.

As always, I look forward to a robust debate, perhaps including some of my erstwhile colleagues.

Posted in Milesahead, Miscellanea, Supply chain management

Excel doesn’t excel in all cases…

Published April 28th, 2010 by Monique Rupert 4 Comments

I recently read a blog post titled “Beware Supply Chain Excel Users—YOU are DOOMED!!!!” by Khudsiya Quadri of TEC.  I completely agree with the author that SCM professionals who rely too heavily on Excel are taking a big risk.  The article lists reasons such as lack of collaboration, visibility, and control, and no ability to perform “what-if” scenarios.  I would like to add some additional thoughts to this discussion.

A big limitation of Excel, in my view, is that it cannot mimic the analytics of the company’s source ERP system.  Why is this important?  If someone is using Excel to make business decisions without all the capabilities of the ERP source system, they may not be making the right decisions.  How can you make planning decisions if your spreadsheet doesn’t take into account functionality like sourcing rules, constraints, and order priorities?
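Here is a small sketch of the kind of logic a flat spreadsheet formula misses: allocating constrained supply to orders by priority. A naive SUM-based sheet would report that total supply covers total demand; the constraint-aware view shows which low-priority orders actually go short. The order data and field names are illustrative, not from any particular ERP system.

```python
# Hedged sketch of priority-based allocation under a supply constraint.

def allocate(orders, available):
    """Allocate 'available' units to orders in priority order (1 = highest)."""
    result = []
    for order in sorted(orders, key=lambda o: o["priority"]):
        filled = min(order["qty"], available)
        available -= filled
        result.append({**order, "filled": filled, "short": order["qty"] - filled})
    return result

orders = [
    {"id": "SO-1001", "qty": 60, "priority": 2},
    {"id": "SO-1002", "qty": 50, "priority": 1},
    {"id": "SO-1003", "qty": 40, "priority": 3},
]
# Total demand is 150 units against 120 available: a spreadsheet SUM says
# "80% coverage"; the allocation shows the shortage lands on SO-1003.
for line in allocate(orders, available=120):
    print(line["id"], "filled:", line["filled"], "short:", line["short"])
```

Real ERP allocation also layers in sourcing rules, lot sizes, and calendars, but even this toy version cannot be expressed as a simple cell formula, which is precisely the point.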

A company’s supply chain map is very complex: typically there are internal manufacturing data sources, external manufacturing data sources, inventory site data, and so on.  It is possible to get data from multiple sources into Excel, but the big challenge is that the data is not the same in every source system, so many organizations end up with multiple spreadsheets performing the same function or analysis.  Can any of those spreadsheets be truly accurate if none of them shows a true picture of the whole supply chain?

It is almost impossible to control the integrity of spreadsheet data or access to the spreadsheet.  With multiple people accessing the spreadsheet and no security, how can anyone have confidence in the data?  In addition, most spreadsheets need to be reviewed by many people, which typically means pushing the file around.  Without system-standard security, data integrity suffers and there is no way to audit who made which changes.  How can there be a high level of confidence in the data, or in the business decisions made from it?

I have known many supply chain companies that make critical business decisions based on spreadsheets.  For example, one company used spreadsheets to analyze large drop-in orders: when a big order dropped in, they would use their spreadsheets to determine the effect on the business and when they could commit to deliver the order.  This typically required multiple spreadsheets pulling data from multiple sources, endless manipulation to tie the data together, and many different users across the organization each looking at their own piece.  The process would take several days, by which time the data had changed, and the end user would have only a 50% confidence level in the answer given back to the customer.  This can be crippling when your products are very expensive, as in the aerospace industry, where products cost multiple millions of dollars and the customer is a government that may impose penalties if orders aren’t delivered as promised.

You need to:

  • get all the supply chain data in one place for visibility (with frequent data refreshes),
  • mimic the source system analytics,
  • apply system-standard security functionality throughout, and
  • output data in a familiar “Excel-like” format.

True nirvana is one source of the truth, multiple users with simultaneous access, data integrity, and “what-if” capability, all with the power and flexibility of “Excel-like” outputs.

Posted in Products, Supply chain management

How can you evolve from business intelligence to business value?

Published April 7th, 2010 by Trevor Miles @milesahead 2 Comments

Not too long ago, companies suffered from having too little data with which to manage their operations. The ERP age has brought a different problem: too much data, but too little information. This is not surprising, because transaction systems such as ERP are designed to capture data and record transactions, principally for accounting purposes. They were not designed to provide insight gained from analyzing many similar transactions.

Financial services and telecommunications companies pioneered the use of business intelligence (BI) solutions to analyze the massive amounts of data they had accumulated over the years. Considerable insight was gained from data mining and data analysis, and in the ’80s and ’90s the need for BI capabilities grew in other industries as well. Yet despite being a topic explored and written about extensively, BI has seen only moderate uptake and mediocre results. Why?

Pure BI tools suffer from two major drawbacks that prevent them from providing greater value and therefore achieving greater adoption: they cannot identify causality and, as a consequence, they cannot predict future performance.

In the past five years, the interest in, and indeed the need for, real-time access to operational data has increased dramatically. The promise of real-time operational BI that goes beyond capturing static data snapshots and enables users to identify and analyze risks and events is of major interest to supply chain management (SCM) managers. Driven to improve operations performance, supply chain managers know that better information about their operations and processes leads to better decisions and better supply chain performance.

What do you say when the CEO asks whether the company will hit its revenue targets for the current reporting period? Can you tell the CEO instantly which customers may be facing late delivery, and which orders may not ship and why? Can you tell the CEO what is causing the late deliveries and how the company could get back on track? You should, because it is in these answers that the business value lies.

We just posted a paper that highlights what’s at the heart of evolving business intelligence into business value. Download it today.

Posted in Milesahead, Response Management, Supply chain management

Old-school organizational power structures thwart business performance: The old dogs need to learn new tricks

Published March 11th, 2010 by Trevor Miles @milesahead 0 Comments

John Westerveld, a colleague of mine, wrote a great 2-part blog post titled “Top ten reasons YOU should be doing S&OP” in which he gives a great practical example of when S&OP can be of great benefit to an organization.  The first reason John selects is alignment across different functions in an organization.  This set me thinking on what are the fundamental reasons for a lack of alignment across functions.  Of course, in today’s multi-tier outsourced supply chains, alignment is also an issue between organizations.  Hau Lee at Stanford has written a lot about this in his concept of “Agility, Adaptability, and Alignment” which is driven by “extreme information exchange”, according to Lee.

While trying to formulate my ideas about the causes of lack of alignment, I came across a set of postings by Dustin Mattison on his Logipi blog and in one of the LinkedIn discussions, which postulate that the problems at Toyota boil down to organizational structure and culture.  These problems have manifested as power “fiefdoms”, lack of transparency, and therefore lack of alignment between different functions.

There is a great section in “The Big Switch” in which Nicholas Carr traces the origins of organizational structures and their impact on performance.  (I wish I had a more formal source, and I am sure some of our readers can point me to one.)  Our organizational structures have been inherited from the military and really date back as far as Roman times, when there was no ability to communicate in real time.  Imagine the time it took to get a message from Rome to Cairo.  As a consequence, hierarchical structures were developed to ensure a process of central command and control.  Loyalty was prized above all else, and disloyalty was dealt with very harshly.  The 20th-century phenomenon of the corporation used the same organizational structures and the same command-and-control attitudes, largely because the means of communication had not progressed since Roman times, though the penalties for disloyalty (or poor performance) are considerably less harsh.

The business process reengineering (BPR) efforts led by Michael Hammer, Tom Peters, and Peter Drucker in the 1990s were the first attempt to correct this by “de-layering” management. But think about it: they were doing this before the widespread adoption of the internet, when faxes were still considered state of the art.  While the enthusiasm for BPR has waned because in practice it focused too much on efficiency (read: headcount reduction), the fundamental idea that business processes can be more effective – not just more efficient – has been carried forward by Lean and Six Sigma concepts.  And the internet specifically, but technology more generally, is the enabler.  This is what can provide the transparency Richard Wilding of Cranfield University talks about in an interview with Dustin Mattison, which is so crucial in breaking down the power barriers to more effective sharing of information across functional and organizational boundaries.

And yet we still have senior management (and professors in business schools) to whom IT in general, and the internet specifically, is a learned phenomenon.  Before anyone thinks “Yeah, yeah”, let me point out that I am one of the people who have “learned” how to use the internet, and I am still not comfortable with “tweeting” and “blogging”.  In short, I am not comfortable with that level of personal “transparency”.  At the same time, I am staggered at how many mid-tier managers, let alone senior managers, still receive paper-based reports, scribble all over them, and then send the scribbled notes back to an underling who is supposed to act on them.  This is all about power and has little to do with effectiveness.  They could just as easily have changed values in a system and annotated the changes with comments.  That information would be available immediately to anyone who had to take action or make further decisions based on the senior manager’s input.

Exacerbating the fact that much of senior management does not come from the “internet” generation is the difficulty of using existing IT applications and systems.  The fundamental drawback of existing supply chain systems specifically, and operations systems in general, which prevents their wide adoption by senior management, is that they lack the ability for people (read: senior management) to perform quick and effective what-if analysis.  It takes too long, and in truth it is also too complex, for them to create and analyze scenarios themselves, so they devolve this to more junior people who don’t really understand what the senior manager wanted to investigate in the first place.  More precisely, the senior manager is forced to take a structured approach to investigating and solving an issue, whereas in reality problem solving is a very unstructured process governed strongly by exploration and discovery.  Even when senior managers have monster spreadsheets available to them, there is:

  • little to no connection to the current situation
  • insufficient level of detail to get a realistic evaluation of the future consequences of their decisions on financial and operational metrics, and
  • very limited ability to explore multiple scenarios.

They have to wait until the month end or quarter end to get a report on what has happened, and by that time it is almost impossible to deconstruct the cause and effect.

While I realize the limitation of my thinking (fundamentally I am an operations person), and recognize the impact – both short term and long term – that Finance and HR, for example, can have on the performance of a company, in companies that sell, design, and/or manufacture a physical product, Operations is the core business process that determines the current and future success of the organization.

All of this gets me to a brief discussion of Sales and Operations Planning (S&OP).  There are many definitions of S&OP out there, and also a lot of discussion of S&OP “maturity” models.  At its heart, and in its most simplistic form, S&OP is all about demand/supply balancing – in other words, alignment between the demand and supply sides of the organization.  In a multi-tiered outsourced environment this is not a simple exercise, so my use of “simplistic” is not meant to denigrate this level of S&OP adoption.

The greatest long-term benefit of S&OP, even if it is difficult to quantify, is increased transparency and alignment, as noted by John Westerveld and discussed by Richard Wilding. AMR Research calls this “East-West” alignment.  And yet there are so many more benefits achievable by linking Operations to the Executive – by linking financial measures and objectives such as revenue, margin, and cash flow to operational metrics such as orders delivered on time and in full, inventory turns, and capacity utilization.  AMR Research calls this “North-South” alignment.  A number of analyst firms, such as Ventana Research, Aberdeen, Gartner, and AMR Research (now part of Gartner), have referred to this North-South alignment as Integrated Business Planning.  Tom Wallace and Oliver Wight have referred to it as Executive S&OP, and now Accenture is calling it “Profit, Sales, and Operations Planning”.  Whatever we call it, there are lots of benefits.

The principal barrier to tapping into these phenomenal benefits is the organizational power structures we have inherited from a previous era.  These will not be easy to break down.  But an S&OP process – however sophisticated or rudimentary – will start the move toward greater transparency and alignment.  I’ve been participating in two discussions on LinkedIn (“Has Sales & Operations Planning (S&OP) improved your forecast accuracy?” and “What is your biggest S&OP pet peeve?”, both of which require membership), and in both there is consensus that the greatest contributor to the successful adoption of an S&OP process is executive support, because it is required to get everyone to “play nicely” with each other.  Clearly this is simply a symptom of the organizational power structures: S&OP challenges them, which leads to resistance.  There is plenty of technology out there to assist in this process, but ultimately you will need both executive support and technology for a truly successful S&OP process that contributes massively to your company’s future success.  But there is no need to wait until you have organizational buy-in.  As with all organizational change, showing people how they will benefit from adopting new practices is the best way of getting their buy-in.  So start small: give people information that is useful to them, and over time you will be able to ask for information that is useful to you.  If this is too slow for you, make a pitch to your executive team to ensure they back you up and speed adoption.  Either way, you should not wait.  The benefit to your company is too great to ignore.  Help us create the organizational structures of the future.

Posted in Milesahead, Miscellanea, Sales and operations planning (S&OP), Supply chain management