Making use of data in an industrial context – Factory Simulation


Just to add some fuel to my recent post, let me pick a topic to elaborate on (hoping I’ll get even more responses on your expectations for the upcoming webinar series).

Today let me focus on “Factory Simulation” (which naturally relates to other topics like “Digital Twin” or “Machine Learning”).

The term “Factory Simulation” is widely used, with lots of room for interpretation; and probably most of those interpretations have a right to exist.

So a comment I made quite often over the years still holds true: identify your use case, your business scenario, to put the endeavor into a business context – anything else is useless, unless you want to apply technology just for the sake of applying technology.

Most production line managers I talked to know exactly what they’d like to get out of a factory simulation (whereas a lot of pure IT managers still tend to focus more on the fancy IT stuff), so let’s lean on the production line managers’ interpretations here:

  • I need to maintain my level of quality throughout all potential impacts
  • I need to maintain or improve my productivity rate
  • I need to increase OEE
  • I need to deliver on all contracts even if important ad hoc orders come in

Of course the list above is just a small starter, but I guess you get the picture – as in basically all Industrial IoT scenarios, it comes down to making the optimal decision in real time.

Ideally these decisions would be fully automated, but a) a fully automated system will need some time to implement and b) we, as humans, will need some time to gain trust in these automated decisions. So the natural step before full automation is to implement a solution that simulates scenarios, taking into account all impacting parameters (equipment, material, supply chain, order management… potentially external parameters like weather information), so that an experienced line manager can see the results of different possible decisions, using real-world and real-time information, before making the final decision that is then entered back into MES/ERP/SFM etc. for execution.
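To make this a bit more concrete, here’s a minimal sketch of the scenario-comparison idea (purely illustrative – the Scenario fields, numbers and the simple throughput model are all made up; a real simulation would take far more parameters into account):

```python
# Illustrative only: compare candidate decisions before committing one
# back to MES/ERP. All fields and numbers here are invented.
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    machines_online: int           # equipment availability
    units_per_machine_hour: float  # throughput per machine
    defect_rate: float             # fraction of output failing QA
    shift_hours: float

    def good_units(self) -> float:
        """Expected sellable output for one shift under this scenario."""
        raw = self.machines_online * self.units_per_machine_hour * self.shift_hours
        return raw * (1.0 - self.defect_rate)

def best_scenario(scenarios):
    """Rank candidate decisions by expected good output;
    the line manager still makes the final call."""
    return max(scenarios, key=lambda s: s.good_units())
```

A line manager would feed each candidate decision (run as planned, add overtime for an ad hoc order, slow down to protect quality, …) in as a scenario, compare the expected outcomes, and only then enter the chosen decision into the execution systems.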

The number and nature of impacting parameters can range from few and simple to many and ultra-complex. In a recent discussion with a customer I was told they’d only need to consider order management, current capacity and throughput rates across pre-production and assembly lines to cover ad hoc orders – not super-easy, but also not too complex. A shop floor worker at another customer told me a few months ago that (after operating the equipment for 20+ years) he knows exactly how to tune the equipment in his responsibility to maintain quality when thunderstorms are coming – this example is not so easy to replicate in a simulation, as a lot of additional information, and evaluation of that information, is required to achieve reasonable results.

I hope this leads to additional thinking and requests – let me have your thoughts, please.



Making use of data in an industrial context – what’s on your wish list?



Again it’s been way too long since I posted here… in the meantime I’ve had a really large number of great customer conversations on how to make use of data in an industrial context. Conversations range from easy things like creating transparency on the shop floor, to leveraging existing, sophisticated solutions for e.g. automated rescheduling based on advanced analytics, and very often reach more disruptive, challenging topics like Digital Twin or Factory Simulation.

In that regard I am working with great colleagues including @JerryAOverton and @KaiUHess to set up a series of webinars on such topics. 

A few suggestions right here – let me know what you’d like to see covered in September/October 2016, from this list or beyond. I will use your comments to get the right experts into those discussions.

  • Digital Twin
  • Factory Simulation
  • Machine Learning in industrial context
  • Smart Analytics to automate manufacturing processes end2end
  • Predictive Supply Chain
  • Integrating IT/OT Islands via MachineLearning based REST APIs
  • CyberSecurity in IoX context
  • Leveraging hybrid cloud in manufacturing
I’d really appreciate your comments and wishes here; we definitely want to tell the right stories, including how we solved such situations for customers and/or what we see as emerging technologies fit to provide value in the (near) future.
With your feedback we’ll select & schedule the topics and will publish dates & times here and on



Could we please start securing airports using consistent processes & modern technology?

While watching the horrible news on Bruxelles, I wanted to reflect on what happened to me a few months ago.

My wife & I were checking in at Frankfurt Airport for our vacation in China. For that trip I was bringing my brand-new camera backpack – the thing was about 48 hours old, so this was its first use. At security screening the folks decided to do an extra check specifically on the backpack – and I was (and still am) absolutely fine with that, as I’d rather have one check too many than one too few.

So this was the flow:

The security officer wiped my backpack, let the machine do its thing and was really astonished when an alert came from the machine stating “TNT detected”. The security guy was absolutely clueless on how to proceed, so he asked a colleague to come over. The colleague gave the advice to run a second test. Same response: “TNT detected”. Now we had two helpless security folks standing with us, calling for a third. The third one at least looked more impressive – she was wearing a uniform & weapon and made a decisive impression… for a few seconds, until she also started to stare at the evaluation results in astonishment. We exchanged a few words, trying to understand how this could happen, until my wife said “By the way, this backpack is brand new”. All three officers now found their way back to a happy smile and we were told “Oh, that’s OK then, it’s probably the chemicals from the production process. Have a nice trip!”

I guess everyone understands this is really not what I would expect from airport security – I would have expected something like the following (and I do know airports like e.g. Heathrow have systems & processes like this in place):

  • System detects (or assumes) TNT (well, that was done)
  • System is to notify a) the person in front of the machine about next steps and b) to call an educated authority in the background to get help (fast) to the helpless guy
  • Process is to make sure nobody gets close to my backpack until the issue is cleared

As none of this happened, I was not really feeling secure when boarding the flight – I have to say (and I’ve flown a real lot in my life) that airports in China in general give you a more secure feeling than the ones in Europe (with a few exceptions).

Point is, there will never be 100% security, there will never be 100% process coverage and there will never be 100% adequately trained people doing the job. But accepting those facts, at least please put the right processes and technologies in place (like, in the example above, simply connecting the inspection machine to a workflow / business process management layer to automatically derive next steps). It’s like in the good old SAP workflow days I had 20+ years ago – there will be unknown return codes, so make sure an unknown return code is routed to the best team in charge to understand whether the process is to be changed, a new action defined, etc. Don’t let users without the respective know-how try to deal with it.
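As a tiny sketch of that idea – an inspection machine connected to a workflow layer that dispatches known results to defined next steps and routes everything unknown to the team owning the process (all alert codes and steps below are invented for illustration):

```python
# Invented example: route machine alerts through a workflow layer so the
# person at the machine is never left to improvise.
KNOWN_ALERTS = {
    "TNT detected": [
        "instruct officer: isolate item, keep owner and bystanders away",
        "call explosives specialist on duty (priority)",
        "log incident with item ID and timestamp",
    ],
    "clear": ["release item to owner"],
}

def route_alert(alert_code):
    """Return the defined next steps for a known alert; escalate unknown
    codes to the process design team instead of the untrained user."""
    if alert_code in KNOWN_ALERTS:
        return KNOWN_ALERTS[alert_code]
    return ["escalate '%s' to process design team for review" % alert_code]
```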

Just booked my next flights – thoughts with the victims in Bruxelles


The true source of legacy trouble is Windows

Soren’s posts are always a great source for new thoughts

Soren Helsted's Blog

It is a common understanding that Mainframes cause a lot of problems due to their legacy status. And true, many Mainframe systems need modernization, and some get it. The vast majority of Mainframe systems, though, are backend server systems that are hidden from the end users behind various user interfaces, and adding APIs is often enough to add some years of useful life to the systems.

A lot of end user systems, on the other hand, were built around Windows and especially Internet Explorer specifics a decade or two back. These are the really troublesome legacy programs. They require organizations to keep otherwise outdated Windows-based client systems alive, and they block BYOD/BYOT thinking.

It is well known that the Internet Explorer browser is old and does not support modern browser technologies very well, if at all – that’s probably also why Microsoft has renamed the browser…


Making use of data in Manufacturing

I guess by now most people have fully understood the value of data driven decisions and are looking to find the right entry point, i.e. the right initial use cases to dip their toes into the data lake before really getting feet wet. One of the questions coming up quite often in customer discussions is how to start. Maybe the following can help some of you to find those initial use cases.
Please keep in mind: IoT / Smart Manufacturing / Industry 4.0 etc. is about making use of information for fast and accurate decisions on incidents, risks, needs or opportunities, using modern technology to retrieve and visualise the information, simulate decisions and feed the decisions back into a (hopefully highly automated) execution system.
So let me walk you through a few steps (these steps apply to existing plants – a greenfield approach would have substantial differences):
Step 1 needs to be: Find the use case – for an incident, a risk, a need or an opportunity – or a combination of those. An opportunity for a manufacturer could obviously be an increase in productivity – which in turn could be based on the need to satisfy customers (which of course also bears the risk of not satisfying them, so a wee bit more than a need – but that might lead a bit too deep here). So the first thing to do: understand why productivity is not sufficient, understand whether there are unknown drivers and make them known, map out the known drivers, and select the ones that would make the largest impact with the least effort => that’s the first toe one wants to dip into the data lake! Let’s assume one of the known drivers is a lack of transparency in the supply chain (which can be external, or on campus / in plant, like “where is my material, where does it have to be next?”).
Step 2 now follows: Define the KPI(s) – don’t do more than 2 initially! What do you want to achieve and how do you want to measure it? Using the final assumption in step 1, we could focus on an increased hit rate of “right material at the right time at the right location” – this is pretty easy to measure, but does require some real thinking on how to achieve it; even if it sounds easy to achieve on campus (it actually is not – depending on the complexity and size of the plant and/or product), it gets tougher with the inclusion of the external supply chain (you might have the perfect system and routing in place on site, but what if suppliers can’t deliver for whatever reason?).
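Measuring that hit rate could look as simple as the following sketch (the event format here is an assumption for illustration – in reality this data would come from track & trace devices and MES/ERP interfaces):

```python
# Sketch only: a "hit" = material at the planned location within a time
# tolerance of the planned arrival. Event fields are invented.
from datetime import timedelta

def hit_rate(events, tolerance=timedelta(minutes=15)):
    """events: dicts with planned/actual location and arrival time."""
    if not events:
        return 0.0
    hits = sum(
        1 for e in events
        if e["actual_location"] == e["planned_location"]
        and abs(e["actual_time"] - e["planned_time"]) <= tolerance
    )
    return hits / len(events)
```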
Step 3: Measure the KPI without improvements – and find out which improvements you could make without any investment; some might just be common sense, and most won’t lead too far, but to later be able to justify the investment you want to make sure you’ve done the business case thoroughly – this exercise might even lead you to select a different use case, as the potential with investment might not be great compared to the one without.
Step 4: Select your pilot. Don’t make it a proof of concept – that’s too theoretical – make it a real pilot with careful selection of scope. The scope should be able to deliver tangible results (like a small production line or a subset), and make sure you can honestly expect positive impact in a short time frame, like 2-4 months after going live with the pilot. Another parameter of the pilot is easy replicability – with success you want to roll out fast; so if you have a few similar lines, you probably want to pick a small to medium sized one of that group for the pilot.
Step 5: Define the pilot. This is also the latest point at which you want to engage with external service & software partners (but of course – the earlier you include them, the better; most of us are happy to support the early phases). This definition will also include architecture, technical & functional requirements, and how the KPIs will be implemented in the final solution. All specs for sensors, potential track & trace devices (especially for on campus / in plant tracking), interfaces to ERP, SFM, MES, PLM, PDM and to the external supply chain (if applicable for the pilot) need to be agreed and contracted. The definition should be built as a template so that with success (see step 4) you can roll out to similar lines as fast as possible. If the implementation time for the pilot is longer than 2-4 months: go back to step 1. If there is no mobile, (near) real-time, always-on information or alerting: change your partner. Also make sure you can get the majority of services in an aaS model.
Step 6: Implement the pilot. Start measuring. Do constant reviews – this is, after all, the journey to continuous digital improvement. Concentrate on evaluating the KPIs and start planning beyond step 7.
Step 7: Assuming success: Roll out the pilot functionality to other lines (horizontal expansion) while working in parallel on extending functionality / KPIs (vertical expansion). Topics you want to think about in this and following expansions should also include how to integrate more closely with your external partners (e.g. APIs vs. interfaces) and/or a move to co-creation.
These few steps will get you started – other companies have executed on them and achieved e.g. a 20+% increase in OEE in short time frames (6-12 months) and increases in productivity (200+% in 2-3 years) – so what’s next?
The minute you’ve started the Digital Journey, you’ll most probably want to continue the digital improvement. So let me give you some outlook on possible scenarios:
  • Supply Chain Prediction: With high visibility / transparency along your supply chain you will be able to predict incidents (using data science scenarios similar to those used in predictive / preventive maintenance). This will of course enable you to plan / schedule accordingly, thus further improving productivity, planning accuracy etc.
  • Also you might want to look into Demand Chain Prediction to cover end2end
  • Decision simulation: With full transparency you can simulate your decision / reaction to any incident (or need, opportunity, risk) before executing. Eventually this could result in a full simulation of a full-blown production environment before taking any physical measures (this of course requires all production data to be kept basically forever in a Big Data environment, so the full plant can be rebuilt with real data in a simulation environment – but that is still cheaper than rebuilding a plant)
  • Potentially start looking into the intelligence of your products to move your now digitised company from product sales to service provisioning.
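To illustrate the Supply Chain Prediction idea above with a deliberately naive sketch (the data shape, window and thresholds are invented – a real implementation would use proper predictive models, as mentioned):

```python
# Naive illustration: flag suppliers whose recent late-delivery rate is
# both high and worse than their own long-term baseline.
def late_rate(flags):
    """flags: list of booleans, True = delivered late."""
    return sum(flags) / len(flags) if flags else 0.0

def at_risk_suppliers(history, recent_window=5, threshold=0.4):
    """history: supplier -> chronological list of late flags."""
    flagged = []
    for supplier, flags in history.items():
        recent = late_rate(flags[-recent_window:])
        if recent >= threshold and recent > late_rate(flags):
            flagged.append(supplier)
    return flagged
```

With this kind of early warning, planning and scheduling can react before the incident actually hits the line.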
Let me have your thoughts please. Thank You!

Quick Company Health Status Check

As indicated in an earlier tweet, I think one of the easiest ways to find out about the cultural/health status of your company is to ask yourself, your colleagues & your employees the same simple question: “Would you bring your best friend in?”

This of course assumes you want to keep your best friend.

So in order to get some valid results I thought I’d make this a permanent poll – I will publish snapshots as soon as there’s enough data to make a meaningful statistic.

Thanks in advance for your participation!


P.S.: If you like the idea and want to see the results, please spread the link – I won’t publish before having a substantial amount of data per company.


Big Data Analytics just helped me

Just a quick update on how Realtime Analytics provides value on a day2day basis:

Someone stole my credit card data – probably on one of my recent trips. @AmericanExpress found out via their #RTA, blocked the payment, called me, and sent an email & text message in parallel while waiting for my response.

As this was fraud, my card was blocked immediately after I pressed the “not me” button (I was in a conference when this happened, so I couldn’t get on the phone immediately), but the tech in use saved me a real lot of money + time.

So: Yes, I have to wait a few business days to get my new card, but no financial loss – Thanks AmEx – thanks #BigData #RTA

How did Amex find out? Of course using tailored analytics, taking into account (amongst other parameters) my usual spending behaviour (the fraud was committed on a web shop I don’t use, and for a significant amount).
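Just to illustrate the principle with a toy heuristic (this is certainly not AmEx’s actual model – the data shape, scoring and threshold are all made up): flag a payment when the merchant doesn’t appear in the cardholder’s history and the amount is far above their typical spend.

```python
# Toy fraud heuristic, invented purely for illustration.
from statistics import mean, stdev

def is_suspicious(history, merchant, amount, z_threshold=3.0):
    """history: list of (merchant, amount) tuples for one cardholder."""
    if len(history) < 2:
        return False  # not enough data to judge
    known_merchants = {m for m, _ in history}
    amounts = [a for _, a in history]
    mu, sigma = mean(amounts), stdev(amounts)
    # z-score of this amount vs. the cardholder's usual spending
    z = (amount - mu) / sigma if sigma else float("inf")
    return merchant not in known_merchants and z > z_threshold
```

Real systems obviously combine far more signals (location, timing, device, merchant category, …), but the basic idea is the same: compare the transaction against the individual’s normal behaviour.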

In a nutshell: I am glad these guys use modern technology to protect themselves (and subsequently my wallet) – of course I could have reclaimed the money later, when receiving the monthly statement, but considering the time that would have cost me, this solution is way better!