February 2017

Disruptive technologies – adopt at your own risk, ignore at your peril

  • By Dulnuwan Wimalatissa, Solution Architect, and Sean Helm MAusIMM, Product Portfolio Manager, Snowden

Learning how to effectively interpret high volumes of data can significantly improve decision-making and efficiency

In today’s mining environment, the volume of data available to us when making decisions eclipses what we have historically had access to. This data is born of a technology revolution that has made mobile devices and sensors more powerful and more widely available than ever before. Devices are now routinely in place on trucks, excavators, other fixed and mobile equipment and even people, generating data volumes that can be overwhelming.

Coincident with this exponential growth rate in data comes new jargon. Terms like big data, cloud, digital transformation, analytics, business intelligence and internet of things are commonly being used to describe this technology-led revolution.

Daunting as they may be, this data growth and new lingo should not prevent us from fully exploring the potential benefits.

In fact, our ability to critically assess and adopt these technologies may ultimately define our success. Even if that point is debated, the choice will categorically differentiate us from those who elect to turn a blind eye to the opportunity.

Before we dig deeper, let’s take an example common to most of us that demonstrates the value in adopting new, potentially disruptive (at least initially) technology – the smartphone.

These handsets have rapidly transformed the humble telephone into an aptly named ‘smart device’ that can capture your heart rate, blood pressure and sleeping patterns; map out fitness plans; provide you with internet access, video calls or hotspot capability; and deliver countless social media and news feeds. All this in addition to still providing a telephone service! Some of us have not adopted this technology and choose, sometimes quite happily, to continue without. Others though, and we suggest the vast majority, have adopted these technological advances and are in a position to become vastly more ‘mobile’ in their business structures.

In a mining context now, the availability of sensors to measure just about any variable – be it weather, oil pressures, GPS location, equipment condition or possibly even an operator’s whereabouts – has driven data volumes captured at any single mining operation to a staggering level.

Is there a problem?

Having access to so much data sounds great in principle, but emerging trends in data use indicate that we face a series of conundrums. How much data is too much – or, equally, how much is enough? How do I know whether I am using my data to best effect and extracting all the value it holds? Is the data I’m collecting an asset or, by virtue of its huge volume and the limited time available to review it, a liability?

There are many potential problems in managing copious data volumes, but one strategic problem stands out: without adopting advances in other areas of technology, we, and the enterprises we represent, will be rendered inefficient at best and redundant at worst, losing traction and falling behind our competitors.

On a technical level, a common thread in managing big data is the element of ‘time’. In today’s world, the increasingly common inability of people to quickly distinguish ‘good’ data from ‘bad’ demands action.

Timely and clear decision-making in the face of huge data sets is considerably more problematic today and is set to get worse.

Where do we look?

Potential solutions may actually lie in adopting and applying new technology.

On the surface, this might seem like an odd suggestion given the role that new technology plays in generating big data, but technology is also well placed as the saviour.

Along with the rapid rise in the number of devices available and in use, there has been an equivalent increase in the number of software tools designed specifically to manage such data volumes. It is no surprise that technology is driving us into the next era – technology is part of the problem and also an integral part of the solution.

Before we explore these tools further, we need to ensure that we know what we need from them and how to apply them to best effect. In other words, how do we define what is important in the data routinely sent to our PCs, tablets and phones? Further, how do we feel confident that the decisions we make will accurately extract the wealth we are uncovering in our data?

The answer lies in asking the right questions first.


What is important?

Historically, decisions have been considered ‘informed’ when those making them have had the time to analyse, validate and familiarise themselves with the strengths and weaknesses inherent in the data.

With time scarcer today than ever, what is the risk of being pressured into making a decision based on data that has not been reviewed or validated, is not deemed accurate or, worse still, is not relevant? In short, it is high. Pressure-induced decision-making based on data of unknown quality is prone to inefficiency at best.

In this context then, what is the risk of adopting new technology to assist in our endeavours to get the most from our growing data sets?

Adopting and applying new tools often brings benefits, but care needs to be taken in navigating the potential costs, in both financial and professional terms.

Applying new technology may call into question existing manual processes, even people, as it potentially renders old, traditional data analysis routines redundant. Is it better, then, to continue with current protocols and procedures and ignore the opportunity?

The key here is to understand two important points:

1. The identification of the main issues, key risks and pertinent points will become an almost impossible task in the foreseeable future without harnessing the value of disruptive technology.

2. Technology delivers more when applied intelligently. Our people are the source of our experience, and when that experience is embedded in technology, it can provide answers quickly, driving future success.

It may well be that the decision to adopt and intelligently apply disruptive technologies becomes a differentiator.

Interestingly, whether or not technology is applied as part of the solution, the focus should remain on defining the right information by asking the right questions.

The ultimate objective is to effectively make sense of the data you do collect, rather than collect as much data as you can.

Screenshot from Neuroverse analytics software. Image courtesy Snowden.

Same problem, different day

At their core, the problems we face today in managing big data are no different from those we have always faced; they are simply exacerbated by data volumes. The issues that currently inhibit our ability to quickly extract insight from our data include:

  • the volume of data being generated
  • the difficulty in managing new sources and types of data
  • not knowing which questions to ask
  • knowing which questions to ask, but not how to get the answers
  • knowing how to get the answers, but not being able to do so in time to make relevant decisions
  • ensuring that source data is complete and accurate.

What is different about today?

While the quantity of data being collected continues to grow exponentially, the difference today is that the tools required to make use of it are more readily available. These tools greatly assist in overcoming some of the challenges previously faced and can help manage the analysis paralysis commonly driven by mass data volumes.

Numerous reports, articles and journals outline the advances in technology designed specifically to aid the review and interpretation of big data; a summary of the key technologies follows. It is important to realise, though, that much as with previous disruptive technologies, pioneers in this space have blazed a trail to success, learnt many lessons and greatly reduced the cost and risk of implementation.

Cloud computing

The concept of cloud computing has evolved over time, more recently settling into three categories: infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS). Cloud computing allows businesses to defer the high upfront cost of computing resources – hardware, software, support and so on – by simply paying as they go. It also streamlines the setup and management involved in rapidly deploying these services. This scalability means that processing power and functionality are available when needed, but you aren’t paying for them when they’re not.
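
To make the pay-as-you-go model concrete, here is a minimal sketch using Amazon Web Services’ boto3 Python SDK – one provider among several – to provision a compute instance on demand and release it when the workload is done. The region, machine image and instance type are placeholders, and valid AWS credentials are assumed.

```python
import boto3

# Connect to the provider's compute API (region is a placeholder)
ec2 = boto3.client("ec2", region_name="ap-southeast-2")

# Provision a server on demand: billing starts only while it runs
response = ec2.run_instances(
    ImageId="ami-12345678",   # placeholder machine image
    InstanceType="m4.large",  # placeholder size; scale up or down as needed
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]

# ... run the analytics workload on the instance ...

# Tear the server down when finished so no further cost accrues
ec2.terminate_instances(InstanceIds=[instance_id])
```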

Connected devices

Advances in machine-to-machine communication mean that collecting data from devices and making that data available is far simpler than it once was. Specialised communication protocols help ensure reliability, security and compatibility. Not only can we collect data from these devices, we can also interact with them and send them instructions.
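
As an illustration, the sketch below uses MQTT – one widely adopted machine-to-machine protocol – via the Python paho-mqtt library to receive telemetry from a piece of equipment and send an instruction back. The broker address and topic names are hypothetical placeholders.

```python
import paho.mqtt.client as mqtt

# Called whenever a subscribed device publishes a message
def on_message(client, userdata, message):
    print(f"{message.topic}: {message.payload.decode()}")

client = mqtt.Client()  # paho-mqtt 1.x style client
client.on_message = on_message

# Broker hostname and topics are hypothetical
client.connect("broker.example.com", 1883)
client.subscribe("site/truck-07/telemetry")

# Communication is two-way: instructions can be sent to the device
client.publish("site/truck-07/commands", '{"action": "reduce_speed"}')

client.loop_forever()  # keep listening for incoming telemetry
```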

Pattern-based analytics

Pattern-based analytics allow insights to be found without a business intelligence expert having to query or model large data sets. Instead, the data is automatically searched for meaningful patterns. Pattern-based analytics can be run on large, unstructured data sets, meaning there is no upfront setup and there are no data structures to maintain – quick results with little initial time investment. It also means that little data cleansing or categorisation is required.
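
By way of example, the sketch below applies one such technique – automated anomaly detection using scikit-learn’s IsolationForest – to simulated sensor readings. No manual model or labels are supplied; the algorithm searches the raw data for outlying patterns itself.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Simulated raw sensor readings: 1000 samples of three measurements
rng = np.random.RandomState(42)
readings = rng.normal(loc=100.0, scale=5.0, size=(1000, 3))
readings[::100] += 40.0  # inject occasional abnormal spikes

# No upfront modelling or labelling: the algorithm isolates unusual rows
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(readings)
flags = model.predict(readings)  # -1 marks anomalous readings

print("Flagged sample indices:", np.where(flags == -1)[0])
```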

Stream analytics

Processing large volumes of data is often of limited value if it is not processed quickly enough for the business to react to changing conditions; real-time fraud detection and system monitoring are familiar examples. Stream analytics allows real-time data streams – data in motion – to be processed as they arrive. Systems built for stream processing are typically designed to be highly available and scalable, able to handle high volumes of incoming data and to provide quick insights or warnings of changing conditions.
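
A minimal sketch of the idea, in plain Python, follows: a rolling-window monitor that raises an alert while the data is still in motion. Production stream-processing platforms (Apache Storm and Spark Streaming are two examples) add the scale and high availability described above; the feed and threshold here are purely illustrative.

```python
from collections import deque

def monitor(stream, window_size=20, threshold=110.0):
    """Alert when the rolling average of a live reading exceeds a threshold."""
    window = deque(maxlen=window_size)
    for timestamp, value in stream:
        window.append(value)
        rolling_avg = sum(window) / len(window)
        # Only alert once the window is full, while the data is still flowing
        if len(window) == window_size and rolling_avg > threshold:
            yield timestamp, rolling_avg

# Usage with a simulated feed of (timestamp, oil pressure) readings
feed = ((t, 100.0 + (t % 50)) for t in range(200))
for ts, avg in monitor(feed):
    print(f"t={ts}: rolling average {avg:.1f} exceeds threshold")
```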

Machine learning

Traditionally, machines had to be explicitly told or taught how to perform complex tasks. This is no longer necessary: accessible tools now allow machines to ‘learn’ complex tasks themselves. Simplified, machine learning works as follows. The machine first analyses a data set containing attributes or characteristics relevant to the learning objective. It evaluates this data to learn what certain data points indicate. It can then apply that learning to an entirely new data set and be given feedback on its performance. With each repetition of this process, the machine becomes better at identifying the relevant data points.
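
The sketch below walks through that loop using scikit-learn: a model analyses historical data and learns what the data points indicate, applies that learning to unseen data, and is then retrained once feedback arrives. The data is simulated and the failure-prediction framing is purely illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Simulated historical data: attributes per sample, with a known outcome
# (e.g. whether a component later failed: 1, or did not: 0)
rng = np.random.RandomState(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

X_hist, X_new, y_hist, y_new = train_test_split(X, y, test_size=0.2, random_state=0)

# The machine analyses the data set and learns what the data points indicate
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_hist, y_hist)

# It then applies that learning to an entirely new data set
predictions = model.predict(X_new)
print("Accuracy on unseen data:", accuracy_score(y_new, predictions))

# Feedback: fold the newly confirmed outcomes back in and retrain,
# so each repetition sharpens the model
model.fit(np.vstack([X_hist, X_new]), np.concatenate([y_hist, y_new]))
```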

Screenshot from Neuroverse analytics software. Image courtesy Snowden.

How does all this actually make a difference?

The answer to this question lies in our ability to integrate the intelligence and experience we have in our people with the significant capabilities we now have in our (equally smart) technology.

Incorporating your learning into machines that can test, reproduce and apply that learning is more efficient than manually applying the same techniques. With the prime value driver today being ‘time efficiency’, any feature that aids in this pursuit will add considerable value to the user.

Specifically, timely and effective application of disruptive technologies can drive:

  • Faster, more informed decision-making. This is largely the result of the speed at which machines can analyse large data sets.
  • Cost reductions delivered through performing data analysis to identify key issues more quickly and effectively.
  • New products and services that highlight trends and opportunities by embedding intelligence and learning algorithms in analytics.

Before you rush out and start building your own analytics capability, though, be prudent and test the solutions already available. With the global push for these new technologies in full swing, there are several existing offerings – some good, some less so. All, however, have paved the way for you to focus your efforts on interpreting analytics results rather than building software systems.

So, in this era of burgeoning data volumes, the ability of companies, businesses and enterprises to differentiate themselves from competitors may lie in how well they adopt new technologies. Applied well, these technologies provide clarity through mass volumes of data, highlighting trends and, in time, prescribing actions that ensure your decisions have the best effect.

There most certainly is risk involved in doing so, and careful consideration of your strategic direction is required, especially in terms of adopting cloud technologies. However, the potential benefits outweigh those of the alternative: ignoring the opportunity and continuing with traditional processes. Let time be the judge.

For more information on data management, the application of new technologies or other enquiries, please don’t hesitate to contact Snowden.
