Making use of data in an industrial context – Factory Simulation


Just to add some fuel to my recent post, let me pick a topic to elaborate on (hoping I’ll get even more responses on your expectations for the upcoming webinar series).

Today let me focus on “Factory Simulation” (which naturally relates to other topics like “Digital Twin” or “Machine Learning”).

The term “Factory Simulation” is widely used, with lots of room for interpretation; and most of those interpretations probably have a right to exist.

So, a comment I made quite often over the years still stands true: Identify your use case, your business scenario, to put the endeavor into a business context – anything else is useless unless you want to apply technology just for the sake of applying technology.

Most production line managers I talked to really know what they’d like to get out of a factory simulation (vs. a lot of pure IT managers still tend to focus more on the fancy IT stuff), so let’s lean on the production line managers’ interpretations here:

  • I need to maintain my level of quality throughout all potential impacts
  • I need to maintain or improve my productivity rate
  • I need to increase OEE
  • I need to deliver on all contracts even if important ad hoc orders come in

Of course the list above is just a small starter, but I guess you get the picture – it’s (as in basically all Industrial IoT scenarios) coming down to making the optimal decision in real time.
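To make the OEE item above concrete: OEE is conventionally the product of availability, performance and quality. A minimal sketch (all numbers are illustrative, not from a real line):

```python
# Minimal sketch of the standard OEE calculation; input values are
# illustrative assumptions, not measurements from a real production line.

def oee(availability: float, performance: float, quality: float) -> float:
    """OEE is the product of its three standard factors (each between 0 and 1)."""
    return availability * performance * quality

# Example: 90% uptime, 95% of ideal cycle speed, 98% good parts
print(round(oee(0.90, 0.95, 0.98), 3))  # 0.838
```

Even this trivial formula shows why the lever matters: a few percentage points on any one factor move the overall figure noticeably.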

Ideally these decisions would be fully automated, but a) a fully automated system will take some time to implement, and b) we, as humans, will need some time to gain trust in automated decisions. So the natural step before full automation is to implement a solution that simulates scenarios, taking into account all impacting parameters (equipment, material, supply chain, order management… potentially external parameters like weather information). An experienced line manager can then see the results of different possible decisions, using real-world and real-time information, before making the final decision that is entered back into MES/ERP/SFM etc. for execution.
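As a purely illustrative sketch of that “simulate before deciding” loop: score a handful of candidate decisions against a toy model so a line manager can compare outcomes before committing one for execution. The model, parameter names and numbers below are all assumptions, not a real MES/ERP integration:

```python
# Hypothetical sketch: evaluate candidate decisions against a toy
# simulation model and pick the best feasible one. All figures assumed.

def simulate(decision: dict) -> dict:
    """Toy model: predicted throughput drops with each extra changeover."""
    base_throughput = 100  # units/hour, assumed baseline
    throughput = base_throughput - 5 * decision["changeovers"]
    on_time = throughput * decision["hours"] >= decision["ordered_units"]
    return {"name": decision["name"], "throughput": throughput, "on_time": on_time}

candidates = [
    {"name": "keep current schedule", "changeovers": 0, "hours": 8, "ordered_units": 850},
    {"name": "insert ad hoc order",   "changeovers": 2, "hours": 8, "ordered_units": 700},
]

results = [simulate(c) for c in candidates]
feasible = [r for r in results if r["on_time"]]
best = max(feasible, key=lambda r: r["throughput"])
print(best["name"])  # insert ad hoc order
```

A real solution would replace the toy model with a calibrated simulation fed by live plant data; the comparison-and-select structure, however, stays the same.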

The number and nature of impacting parameters can range from few and simple to many and ultra-complex. In a recent discussion, a customer told me they’d only need to consider order management, current capacity and throughput rates over pre-production and assembly lines to cover ad hoc orders – not super-easy, but also not too complex. A shop floor worker at another customer told me a few months ago that (after operating the equipment for 20+ years) he knows exactly how to tune the equipment in his responsibility to maintain quality when thunderstorms are coming – this example is not so easy to replicate in a simulation, as a lot of additional information, and evaluation of that information, is required to achieve reasonable results.
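The first customer’s case can be reduced to a bottleneck check: does an ad hoc order fit into the remaining capacity, given throughput rates over pre-production and assembly? A hedged sketch, where the stage names and rates are illustrative assumptions:

```python
# Illustrative feasibility check for an ad hoc order: the slowest stage
# (bottleneck) limits how many units still fit into the time remaining.
# Stage names and throughput rates are assumptions, not customer data.

def can_accept(order_units: int, stages: dict, hours_left: float) -> bool:
    """True if the order fits into the bottleneck's remaining capacity."""
    bottleneck_rate = min(stages.values())        # units per hour
    free_capacity = bottleneck_rate * hours_left  # units producible in time left
    return order_units <= free_capacity

stages = {"pre_production": 120, "assembly": 90}  # units/hour, assumed
print(can_accept(order_units=400, stages=stages, hours_left=5))  # True (90*5 = 450 >= 400)
```

The thunderstorm example, by contrast, has no such closed form – it would need weather feeds, equipment models and learned tuning rules before a simulation could match the operator’s intuition.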

I hope this leads to additional thinking and requests – let me have your thoughts, please.



Making use of data in an industrial context – what’s on your wish list?



Again it’s been way too long since I posted here… in the meantime I’ve had a really large number of great customer conversations on how to make use of data in an industrial context. Conversations range from easy things like creating transparency on the shop floor, to leveraging existing, sophisticated solutions for e.g. automated rescheduling based on advanced analytics, and very often reach more disruptive, challenging topics like Digital Twin or Factory Simulation.

In that regard I am working with great colleagues including @JerryAOverton and @KaiUHess to set up a series of webinars on such topics. 

A few suggestions right here – let me know what you’d like to see covered in September/October 2016, out of these or others. I will use your comments to get the right experts into those discussions.

  • Digital Twin
  • Factory Simulation
  • Machine Learning in industrial context
  • Smart Analytics to automate manufacturing processes end2end
  • Predictive Supply Chain
  • Integrating IT/OT Islands via MachineLearning based REST APIs
  • CyberSecurity in IoX context
  • Leveraging hybrid cloud in manufacturing
I’d really appreciate your comments and wishes here – we definitely want to tell the right stories, including how we solved such situations for customers and/or what we see as emerging technologies fit to provide value in the (near) future.
With your feedback we’ll select & schedule the topics and will publish dates & times here and on



Making use of data in Manufacturing

I guess by now most people have fully understood the value of data-driven decisions and are looking for the right entry point, i.e. the right initial use cases to dip their toes into the data lake before really getting their feet wet. One of the questions coming up quite often in customer discussions is how to start. Maybe the following can help some of you find those initial use cases.
Please keep in mind: IoT / Smart Manufacturing / Industry 4.0 etc. is about making use of information for fast and accurate decisions on incidents, risks, needs or opportunities, using modern technology to retrieve and visualise the information, simulate decisions and feed the decisions back into a (hopefully highly automated) execution system.
So let me walk you through a few steps (these steps apply to existing plants  – a greenfield approach would have substantial differences):
Step 1 needs to be: Find the use case – for an incident, a risk, a need or an opportunity – or a combination of those. An opportunity for a manufacturer could obviously be an increase in productivity – which in turn could be based on the need to satisfy customers (which of course also bears the risk of not satisfying them, so a wee bit more than a need – but that might lead a bit too deep here). So the first thing to do: understand why productivity is not sufficient, understand whether there are unknown drivers and make them known, map out the known drivers, and select the ones that would make the largest impact with the least effort => that’s the first toe one wants to dip into the data lake! Let’s assume one of the known drivers is a lack of transparency in the supply chain (which can be external or on campus / in plant, like “where is my material, where does it have to be next?”).
Step 2 now follows: Define the KPI(s) – don’t do more than 2 initially! What do you want to achieve and how do you want to measure it? Using the final assumption in Step 1, we could focus on an increased hit rate of “right material at the right time at the right location” – this is pretty easy to measure, but does require some real thinking on how to achieve; even if it sounds easy to achieve on campus (it actually is not – depending on the complexity and size of the plant and/or product), it gets tougher with the inclusion of the external supply chain (you might have the perfect system and routing in place on site, but what if suppliers can’t deliver for whatever reason?).
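Measuring that hit rate is indeed mechanically simple once the events are captured. A minimal sketch – the event records and field names are assumptions, not a real MES data model:

```python
# Minimal sketch of the "right material at the right time at the right
# location" hit rate from Step 2. Field names and records are assumptions.

def hit_rate(deliveries: list) -> float:
    """Share of deliveries where material, time window and location all matched."""
    hits = sum(
        1 for d in deliveries
        if d["right_material"] and d["on_time"] and d["right_location"]
    )
    return hits / len(deliveries)

deliveries = [
    {"right_material": True,  "on_time": True,  "right_location": True},
    {"right_material": True,  "on_time": False, "right_location": True},
    {"right_material": True,  "on_time": True,  "right_location": True},
    {"right_material": False, "on_time": True,  "right_location": True},
]
print(hit_rate(deliveries))  # 0.5
```

The hard part, as noted above, is not this arithmetic but reliably producing the underlying events, especially once external suppliers are in scope.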
Step 3: Measure the KPI without improvement – and find out which improvements you could make without any investment; some might just be common sense, and most won’t lead too far, but to later be able to justify the investment, you want to make sure you’ve done the business case thoroughly – this exercise might even lead you to select a different use case, as the potential with investment might not be too great compared to the one without.
Step 4: Select your pilot. Don’t make it a proof of concept – that’s too theoretical – make it a real pilot with a carefully selected scope. The scope should be able to deliver tangible results (like a small production line or a subset), and make sure you can honestly expect positive impact in a short time frame, like 2-4 months after going live with the pilot. Another parameter of the pilot is easy replicability – with success you want to roll out fast; so if you have a few similar lines, you probably want to pick a small to medium-sized one of that group for the pilot.
Step 5: Define the pilot. This is also the latest point where you want to engage with external service & software partners (but of course – the earlier you include them, the better; most of us are happy to support in the early phases). This definition will also include architecture, technical & functional requirements, and how KPIs will be implemented in the final solution. All specs for sensors, potential track & trace devices (especially for on campus / in plant tracking), interfaces to ERP, SFM, MES, PLM, PDM and to the external supply chain (if applicable for the pilot) need to be agreed and contracted. The definition should be built as a template so that with success (see Step 4) you can roll out to similar lines as fast as possible. If implementation time for the pilot is longer than 2-4 months: go back to Step 1. If there’s no mobile, (near) real-time, always-on information or alerting: change your partner. Also make sure you can get the majority of services in an aaS model.
Step 6: Implement the pilot. Start measuring. Do constant reviews – this is, after all, the journey to continuous digital improvement. Concentrate on evaluating the KPIs and start planning beyond Step 7.
Step 7: Assuming success: Roll out the pilot functionality to other lines (horizontal expansion) while in parallel working on extending functionality / KPIs (vertical expansion). Topics you want to think about in this and following expansions should also include how to integrate more closely with your external partners (e.g. APIs vs. interfaces) and/or move to co-creation.
These few steps will get you started – other companies have executed this and achieved e.g. a 20+% increase in OEE in short time frames (6-12 months) and increases in productivity (200+% in 2-3 years) – so what’s next?
The minute you’ve started the Digital Journey, you’ll most probably want to continue the digital improvement. So let me give you some outlook on possible scenarios:
  • Supply Chain Prediction: With high visibility / transparency along your supply chain you will be able to predict incidents (using data science scenarios similar to those used in predictive / preventive maintenance). This will of course enable you to plan / schedule accordingly, thus further improving productivity, planning accuracy etc.
  • Also you might want to look into Demand Chain Prediction to cover end2end
  • Decision simulation: With full transparency you can simulate your decision / reaction to any incident (or need, opportunity, risk) before executing. Eventually this could result in a full simulation of a full blown production environment before taking any physical measures (this of course requires all production data to be basically kept forever in a Big Data environment to rebuild the full plant with real data in a simulation environment – but still cheaper than rebuilding a plant)
  • Potentially start looking into the intelligence of your products to move your now digitised company from product sales to service provisioning.
Let me have your thoughts please. Thank You!