There is a well-known proverb…
“If all you have is a hammer, everything looks like a nail.” —
various attributions (Twain, Maslow, Kaplan, Unknown)
… and in the last few years, a particularly advanced hammer has emerged – machine learning.
Machine learning, a branch of artificial intelligence (AI), rests on a simple premise: feed millions of data points into an algorithm and it will figure out the rules on its own. Once it has learned those rules, the algorithm can be used to improve performance and predict behaviour.
Some believe that to develop machine learning algorithms, you don’t need an understanding of the industry you are working with – the algorithms will do the understanding for you.
Consequently, machine learning has become a hammer that is being used to hit whichever nails look most appealing.
Excellent examples of machine learning can be found in healthcare, finance and surveillance. The very same hammer is now being used to hit the manufacturing industry nail, and digital companies without industry specific experience are eyeing up machine learning applications.
The result is an abundance of buzzwords like “preventative maintenance” and “lights-out manufacturing”.
However, there are currently very few concrete use cases that actually help manufacturers make parts on time, on cost and on spec.
The risk is that, like many technologies before it, machine learning becomes over-hyped while under-delivering, ultimately leaving manufacturers jaded by promises of improvements that never materialise.
As you can probably tell, I’m cautious about the application of machine learning to manufacturing. However, I have good reasons to be cautious – three reasons, in fact.
Having spent several years matching up machining activities to the data they produce, I know first-hand how complicated machining shop floors are, where tools, materials and components can change on a daily basis.
Need a thousand examples of a component to train your machine learning algorithm? Well on many shop floors, you’d be lucky to get a batch of five.
When the sample sizes are small and the rules are constantly changing, it’s just not feasible to apply machine learning directly to raw data. First, you must understand the underlying systems in their own right – and this can only be done through time-served experience, such as that found at FourJaw.
During the design stages, the CAD/CAM teams define the materials, the features to be machined, the order in which they are cut, and how they combine to make a finished component.
Unfortunately, this all goes out the window when the data is converted to the gcode that feeds the machine tools. All that lovely context is lost, and all that remains is scrolling numbers: no detail about the material or tool geometry, no link to a particular part or batch, and very little information about the toolpath itself.
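To make the point concrete, here is a minimal sketch of what survives the conversion. The gcode fragment is illustrative (not taken from a real part programme): parse it and all you recover is a command word and bare coordinates.

```python
# Hypothetical gcode fragment as it arrives at the machine tool.
# Material, tool geometry, part identity, batch: all already gone.
gcode = """G0 X12.5 Y4.0 Z50.0
G1 Z-2.0 F300
G1 X40.0 Y4.0"""

# The only recoverable information: command word plus raw coordinates.
moves = []
for line in gcode.splitlines():
    parts = line.split()
    cmd = parts[0]
    coords = {p[0]: float(p[1:]) for p in parts[1:]}
    moves.append((cmd, coords))

print(moves[0])  # ('G0', {'X': 12.5, 'Y': 4.0, 'Z': 50.0})
```

Scrolling numbers, nothing more – there is no field a learning algorithm could use to link a move back to a feature, a tool, or a batch.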
For machine learning to work, you need that context.
In other domains like finance you’ve got everything you need. In a credit card transaction, you know how much a purchase costs, the location of the transaction, the person who made it, and many other elements of extra context which make machine learning feasible. Not to mention the massive datasets.
When pulling live data from CNC machines, you only get half the story, which means a machine learning algorithm can’t figure out which rules to apply, when to apply them, and when they stop being valid. Rich context is a must-have for machine learning in CNC machining, and it’s something we at FourJaw work tirelessly to acquire.
During my time as a researcher at the Advanced Manufacturing Research Centre (AMRC), I applied many machine learning techniques – sometimes successfully, other times not. Regardless, it was always tricky due to the lack of data, lack of context and the complex environment I’ve mentioned above. As an example, vibration sensors added to a CNC machine can be used to detect tool crashes, which show up as large spikes in the vibration signal. However, you get an almost identical signal when the operator opens or closes the door. Furthermore, it’s very hard to collect example tool-crash data: understandably, no one wants to break an expensive CNC machine to generate it, and when there is a genuine crash, unsurprisingly nobody owns up!
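A toy sketch, using synthetic numbers rather than real sensor output, shows why this ambiguity is so awkward: a naive amplitude threshold flags a simulated crash and a simulated door slam in exactly the same way.

```python
def detect_spikes(signal, threshold=5.0):
    """Return the indices where vibration amplitude exceeds the threshold."""
    return [i for i, v in enumerate(signal) if abs(v) > threshold]

# Quiet cutting with one large transient in each trace (hypothetical values).
tool_crash = [0.2, 0.3, 0.1, 9.5, 8.7, 0.4, 0.2]   # simulated tool crash
door_slam  = [0.1, 0.2, 9.1, 7.8, 0.3, 0.1, 0.2]   # simulated door closing

print(detect_spikes(tool_crash))  # [3, 4]
print(detect_spikes(door_slam))   # [2, 3]
# Both events trip the detector identically: without labelled context the
# algorithm cannot tell a crash from a door, and genuine crash labels are rare.
```

The detector itself is trivial; the hard part is exactly what the signal alone cannot supply – a label saying which spike was which.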
A further nuance comes from the data streams themselves which also change. For example, on 5-axis mill-turn machines, certain controllers have both a C-axis and a second spindle which act interchangeably depending on the operation. In a 5-axis milling operation, data is represented as a C-axis, but when in turning mode, rotation around the same axis is instead represented as a second spindle.
Again, this adds to the complexity of the machining environment. Success relies on knowing these little pitfalls before machine learning is applied. Fail to do so, and your machine learning algorithm may be training for the wrong thing with the wrong dataset – and that won’t end well. FourJaw has developed the experience to deal with this effectively.
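The C-axis/second-spindle switch described above can be sketched as a normalisation step. The field and mode names here are illustrative assumptions, not a real controller API: the point is that the same physical rotation arrives under two different names, and something has to know which mode the machine is in before any learning can happen.

```python
def normalise_rotation(sample: dict) -> dict:
    """Map mode-dependent channels onto one canonical rotation field."""
    if sample["mode"] == "milling":
        rotation = sample["c_axis_deg"]      # reported as a rotary C-axis
    elif sample["mode"] == "turning":
        rotation = sample["spindle2_deg"]    # same hardware, new name
    else:
        raise ValueError(f"unknown mode: {sample['mode']}")
    return {"timestamp": sample["timestamp"], "rotation_deg": rotation}

# Two samples of the same physical rotation, reported differently by mode.
milling = {"timestamp": 0, "mode": "milling", "c_axis_deg": 45.0}
turning = {"timestamp": 1, "mode": "turning", "spindle2_deg": 45.0}

print(normalise_rotation(milling))  # {'timestamp': 0, 'rotation_deg': 45.0}
print(normalise_rotation(turning))  # {'timestamp': 1, 'rotation_deg': 45.0}
```

Without this kind of mode-aware mapping, a model trained on the raw streams would treat one physical quantity as two unrelated channels.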
Machine learning is absolutely the right tool for unlocking massive productivity gains, and we are enthusiastic about using it as a key part of our toolkit, but there is no one-size-fits-all solution. There is no utopia. The challenges of manufacturing cannot be solved by one big thump of the machine learning hammer.
It has to be a nuanced approach, taken only once there is enough context and understanding of the environment in which it will be used. That’s why the primary goal of FourJaw is to build machine monitoring software that accurately describes and understands manufacturing environments with all their quirks and subtleties. Only once we have achieved this can we truly provide the information manufacturers need to succeed. Our manufacturers don’t need another hammer – they’ve already got that covered.