February 2019

What can machine learning do for geoscience?

  • By Hilary Goh MAusIMM, Founder, Geoscientist Machine Learning

Machine learning is generating a lot of hype across many industries – including resources. This article provides some context to the hype and discusses ways that machine learning could be used in geoscience.

I originally posted a version of the following article on LinkedIn and titled it ‘10 things machine learning can do for geoscience – no. 7 will shock you!’ It was an intentional, tongue-in-cheek clickbait title to reflect that this is probably the current perception of machine learning in mining, in that there is still hype, buzzwords are being thrown around and plenty of speculation about who’s doing what and what has already been achieved.

The article was in fact originally a talk at a Western Australian Ground Control Group meetup with the purpose of sharing my learnings with the geology/geotechnical world. By now, everybody has likely heard about machine learning, but there is so much information out there that it can be difficult to know where to start (to learn about it) and understand where this tool fits into your business or operations. I hope this article gives you an objective way to approach possible machine learning projects and solutions.

Hype and disappointment

Let’s talk about hype.

The general public has only become aware of machine learning over the last four years or so, but the field has existed since before the 1980s. Advances in microchips, specifically Graphics Processing Units (GPUs), and the rise of cloud computing have made complex computations fast and cheap enough for those outside a research organisation to use, which has brought machine learning to the attention of the wider public.

Scanning the daily news articles seems to indicate that machine learning is everywhere and can do anything. It is in every industry and affects everyone – hence the hype.

Despite this, for the mining industry reader there is probably a disconnect between what you read or hear about and what you see in your day-to-day operations, ie a lack of machine learning-related applications. And aren’t some of the data analytics projects using machine learning actually just big data problems?

There are plenty more questions for our industry when it comes to machine learning. Is the resources sector late to the party? Are mining companies unwilling to share what they’ve learned? Are we still trying to figure out how to use this newfangled technology or is it all just vapourware and wishful thinking?

Another reason for the hype and buzz is the perceived negative effect machine learning will have on jobs and people. Discussions around machine learning generally lead into debates about automation and artificial intelligence and what the future of work may look like for everyone. For a more detailed exploration of this, I would recommend you read Paul Lucey’s article ‘Future of work – the contractor, service provider and consultant’ (2018).

Despite the numerous news articles and seemingly endless discussions around machine learning, is the hype starting to die down? Is there disappointment that machine learning applications (for the resources industry at least) are not living up to expectations?

According to the Gartner Hype Cycle, machine learning is still at the ‘Peak of Inflated Expectations’. It’s currently on the downward trend heading towards the ‘Trough of Disillusionment’ and is not expected to reach the ‘Plateau of Productivity’ for another two to five years.

But I’d like to guide you away from the hype and focus on what is important for mining and data science. I’ll do this by asking: is a machine learning application the best solution for your data science problem?

And to help you approach this question, I’d like you to take a data-first approach.

Back to basics

In a nutshell, machine learning is the combination of cheap computer power (GPUs), and statistics and algorithms to crunch data for a specific problem.

It should be noted that the term ‘machine learning’ is not synonymous with ‘artificial intelligence’ (AI). Machine learning is a data processing technique that solves a specific problem for a human or artificially intelligent being. AI, on the other hand, is a combination of different techniques, including machine learning, to solve complex or multifaceted problems that currently only humans can solve. Figure 1 (reproduced with permission from Goodfellow, Bengio and Courville, 2016) shows how machine learning is encompassed by AI.

Figure 1. How machine learning is encompassed by AI. (Goodfellow, Bengio and Courville, 2016).

When considering a data science project, the best way to approach it is:

  • What data is available?
  • What problem or question am I trying to solve?
  • What techniques are applicable to solving this problem?

The same approach should be taken with any machine learning project. It is very important to consider the data you have available: that you have the right type of data, a representative sample, and data of high enough quality to answer your question. For example, poor quality and blurry core tray photos will be useless for any image classification and would be a waste of time. In machine learning, the old computing maxim still applies: garbage in = garbage out.

Data types should be considered before selecting a machine learning technique, because many techniques are only suitable for particular types of data. Listed below are three common data types (images, text and tables) that may be found in geoscience.

Images

Figure 2. Core tray photo and mineralisation.

Figure 2 shows a core tray photo at the top and a face photo of mineralisation at the bottom. Both these data types are captured for geology exploration and mining.

Deep learning (a specific machine learning technique) could be used to classify an entire image or segments of an image.

This approach allows lots of data to be processed relatively quickly and any features of interest set aside for further analysis. A further examination of deep learning appears later in this article.

Text

Imagine you have a large text file containing 477 written reviews of a movie and you want to know what most people thought of it. Using machine learning, you could determine the overall sentiment, ie did everyone like the movie or hate it, and why? This result could then be used to predict whether you would like it too, based on other movies you have seen.
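As a sketch of how such a text classifier might work, the following pure-Python example trains a tiny naive Bayes-style sentiment model on a handful of made-up reviews. The reviews, labels and word-count approach are illustrative assumptions only, not a production pipeline:

```python
import math
from collections import Counter


def train(reviews):
    """Count word frequencies per sentiment label ('pos'/'neg')."""
    counts = {"pos": Counter(), "neg": Counter()}
    for text, label in reviews:
        counts[label].update(text.lower().split())
    return counts


def classify(text, counts):
    """Score a new review with add-one-smoothed log-likelihoods
    and return the most likely label."""
    vocab = set(counts["pos"]) | set(counts["neg"])
    scores = {}
    for label, ctr in counts.items():
        total = sum(ctr.values()) + len(vocab)
        scores[label] = sum(
            math.log((ctr[w] + 1) / total) for w in text.lower().split()
        )
    return max(scores, key=scores.get)


# A toy labelled training set (hypothetical reviews).
reviews = [
    ("a brilliant and moving film", "pos"),
    ("brilliant acting great story", "pos"),
    ("dull boring waste of time", "neg"),
    ("a boring and predictable plot", "neg"),
]
model = train(reviews)
print(classify("a brilliant story", model))  # expected: pos
print(classify("boring and dull", model))    # expected: neg
```

The same word-counting idea scales up: with enough labelled report summaries, the word statistics start to discriminate between, say, reports describing different deposit styles.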

For the geoscience field, this may also be a useful way to trawl through a large number of legacy reports and documents to find only those that are relevant to your project area – ie the machine learning model could be trained to predict what geology deposit(s) may be described within an unknown report based on the content of its summary page.

Tables

Table data is referred to as ‘structured data’ in the computer science world. The most common geoscience examples would be multi-element assays and other geochemistry results from drill core, grab samples and stockpile samples, stored in Excel spreadsheets and other databases.

Other

Other types of data include LiDAR point clouds, hyperspectral (in a variety of formats) and seismic sections, amongst others. Machine learning also has the ability to link, combine and process different types of data together, making it easier to create a holistic interpretation.

Three types of machine learning

Most machine learning models use techniques based on supervised or unsupervised learning, though new developments in reinforcement learning have made it quite popular.

For supervised learning, you tell the model what the answer should look like using training and validation datasets of a labelled result. Because you can assess the accuracy of the answer, you can improve the model in a variety of ways to increase the prediction accuracy. A simple improvement may involve examining the labelled data for any mistakes and then changing the label.

For unsupervised learning, you are unsure what the answer may be and therefore allow the machine learning model to tell you what it finds. For this type of learning, data is not labelled and the model will seek to find trends, patterns and cluster the data. As you are unsure what the result should look like, assessing the accuracy of unsupervised learning can be difficult. Additionally, the model may just fit the result to erroneous ‘noise’ in the data, correlate where there is no relationship or show relationships that are not insightful. In some cases, there are no advantages over standard statistical techniques.
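A minimal sketch of unsupervised learning is clustering: the example below runs a simple one-dimensional k-means over hypothetical assay grades and lets the data reveal its own groupings. The grade values, the choice of two clusters and the `kmeans_1d` helper are all illustrative assumptions:

```python
def kmeans_1d(values, k=2, iters=20):
    """Cluster 1-D values into k groups by repeatedly assigning each
    value to its nearest centroid and recomputing the centroids."""
    # Seed centroids by spreading picks across the sorted values.
    centroids = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters


# Hypothetical gold grades (g/t): a low-grade and a high-grade population.
grades = [0.1, 0.2, 0.15, 3.1, 2.8, 3.3, 0.05, 2.9]
centroids, clusters = kmeans_1d(grades)
print(centroids)  # one centroid near 0.125, one near 3.025
```

Note that the algorithm finds the two populations without ever being told what ‘low grade’ or ‘high grade’ means – but, as discussed above, it is up to you to judge whether the clusters are meaningful or just noise.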

Over the last two years a third type of learning has evolved: reinforcement learning. This differs from the other types in that, instead of providing a labelled result or no result, the model is given rewards and penalties to achieve a desired result. In a way, the model learns through trial and error (see later in this article for an example of reinforcement learning).

Within these three broad types of machine learning there are many techniques available, and these can also contain sub-categories as well. However, in this article I want to look at one technique in particular: deep learning.

What is deep learning?

In the example given in Figure 3, we have a simple neural network with one hidden layer (yellow dots) and a deep neural network with many hidden layers. The hidden layers are where the real computational work is performed.

Figure 3. Simple neural network versus deep learning neural network.

The simple neural network is trying to do many different computations, ie classification and segmentation, simultaneously in a single, wide hidden layer. This means computations are slow and expensive, especially on large datasets, and result in poor predictions.

The deep learning network is designed with many small hidden layers, each learning something different based on the output of the previous layer. Breaking the complexity of the problem into many small pieces allows for fast and cheap computations. It also allows much larger datasets to be used, while smaller training datasets and label sets can still achieve good results. The number of hidden layers depends on the complexity of the problem, resulting in increasingly large and complex (or deep) neural networks.

This hierarchy of representations (hidden layers) seems to enable deep learning to predict better when presented with new data, compared to the simple neural network.
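The ‘many small layers’ idea above can be sketched as a forward pass through a stack of hidden layers, each re-representing the output of the layer before it. This assumes NumPy is available; the layer sizes and random (untrained) weights are illustrative assumptions, not a fitted model:

```python
import numpy as np

rng = np.random.default_rng(0)


def relu(x):
    """A common non-linearity: pass positives through, zero out negatives."""
    return np.maximum(0.0, x)


# An input of 8 features flowing through three hidden layers
# (16, 8 and 4 units) to a single output unit.
layer_sizes = [8, 16, 8, 4, 1]
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]


def forward(x):
    """Each hidden layer transforms the previous layer's output."""
    for w in weights[:-1]:
        x = relu(x @ w)        # hidden layer: linear map + non-linearity
    return x @ weights[-1]     # output layer: raw prediction


x = rng.standard_normal(8)
y = forward(x)
print(y.shape)  # a single prediction value
```

Training would then adjust the weight matrices so the final output matches the labels – the sketch only shows how data flows through the hierarchy of layers.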

To apply this to a geoscience context, consider the example of image data given earlier.

Using deep learning and another sub-technique called a convolutional neural network (CNN), classification of the data could involve classifying different parts of a photo of a core tray as core, core block, tray and other. Segmentation would involve teaching the machine learning algorithm to identify features of interest from the tray, such as core runs or the core blocks.

CNNs have had good success with image recognition because they combine two functions, feature detection and feature maps, that together enable classification of a scene. A convolutional layer runs a filter over the image (one patch of pixels at a time) to detect a discrete feature and, when the feature is found, records it on a feature map. These convolutional layers perform a type of search: many filters are passed over the image at a time, each focusing on one unique feature. The feature maps, alongside other training information, are then used to classify or predict objects in new, previously unseen images.
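The filter-over-patches step can be shown concretely. The following sketch slides one 3×3 filter over a small single-channel ‘image’ and produces a feature map; the image values and the vertical-edge filter are illustrative assumptions (and, as in most deep learning libraries, the operation is strictly cross-correlation):

```python
import numpy as np


def convolve2d(image, kernel):
    """Valid-mode 2-D convolution: slide the kernel over the image,
    one patch of pixels at a time, recording one response per position."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out


# A 6x6 image: dark on the left half, bright on the right half.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# A vertical-edge detector: responds where brightness changes
# from left to right (eg a core block boundary in a tray photo).
kernel = np.array([[-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0]])

feature_map = convolve2d(image, kernel)
print(feature_map)  # strong responses only at the dark-to-bright edge
```

A real CNN learns the kernel values during training rather than having them hand-set, and stacks many such filters per layer.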

Three other applications of deep learning

As mentioned previously, machine learning encompasses all sectors – not just geoscience. But it’s worth sharing three other examples of deep learning to show how the technology can be used, and which might spark ideas for applications in geoscience or mining.

Cats versus dogs (binary image classifier)

In this example, a supervised deep learning model learns the difference between cat and dog photos. A training dataset is created in which images are labelled either ‘cat’ or ‘dog’, and the model learns the difference between the two labels through techniques like convolutional filtering. A validation dataset (with known labels) is then run through the model to check its accuracy. Adjustments can be made to the model and labels to improve accuracy before the full dataset is processed.

However, there are some limitations. Because the model is only a binary classifier, if you show it a horse photo it will try to classify it either as a cat or a dog. You can check out a video and coding lesson behind a model of this type at https://course.fast.ai/
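The validation step above boils down to comparing the model’s predictions against the held-out known labels. The labels and predictions in this sketch are hypothetical stand-ins for a real model’s output:

```python
# Held-out validation labels and the (hypothetical) model predictions.
validation_labels = ["cat", "dog", "dog", "cat", "cat", "dog"]
predictions       = ["cat", "dog", "cat", "cat", "cat", "dog"]

# Accuracy: the fraction of predictions that match the known labels.
correct = sum(p == t for p, t in zip(predictions, validation_labels))
accuracy = correct / len(validation_labels)
print(f"validation accuracy: {accuracy:.0%}")  # 5 of 6 correct
```

If the accuracy is too low, the adjustments described above (fixing mislabelled examples, tuning the model) are made and the validation is repeated.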

Teaching a machine learning model to play a platform-based computer game

This is an interesting example of reinforcement learning.

A deep learning model learns how to play a computer game through trial and error to achieve a specific goal. In the game, Mr Nibbles (a hamster) is supposed to avoid obstacles and learn his way to the exit of the level. The model is given a ‘time penalty’ of 0.1 for every second it spends on each attempt, and is given another penalty of 10 every time the hamster ‘dies’ by coming into contact with an obstacle.

We might expect a human decision-making process from the machine, in which ‘dying’ is avoided even if completing the level takes longer. But the model has no such concept and, under this limited reward/penalty system, it learns that the cumulative penalty for dying quickly is actually less than the penalty for spending a long time finding the level exit. This example highlights that machine learning models can sometimes produce results that are unexpected and counterintuitive to humans. More on this specific example is available at www.mikecann.co.uk/machine-learning/a-game-developer-learns-machine-learning-mr-nibbles-basics/.
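The trade-off can be worked through with the article’s own penalties (0.1 per second, plus 10 for dying); the completion times below are hypothetical:

```python
TIME_PENALTY = 0.1    # penalty per second spent in the level
DEATH_PENALTY = 10.0  # one-off penalty for hitting an obstacle


def episode_cost(seconds, died):
    """Total penalty accumulated over one attempt at the level."""
    return seconds * TIME_PENALTY + (DEATH_PENALTY if died else 0.0)


slow_finish = episode_cost(150, died=False)  # survives, but slowly: 15.0
quick_death = episode_cost(5, died=True)     # dies almost at once: 10.5

# Dying early costs less than a slow finish, so the model learns to
# die: the crossover sits at DEATH_PENALTY / TIME_PENALTY = 100 seconds.
print(slow_finish, quick_death)
```

Any attempt longer than 100 seconds is ‘worse’ than immediate death under this reward design, which is exactly the loophole the model found.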

Machine learning networks trying to fool each other

In this example of Generative Adversarial Networks (GANs), one network (the ‘Generator’) tries to fool another network (the ‘Discriminator’) with fake seismic sections. The Discriminator penalises the Generator for obviously fake images, and so the Generator learns to produce better and better fakes each time. In this example, the Generator is not only creating seismic sections (that look just as good as real ones) but also generating the associated velocity model.

This bit of research is very popular right now in the oil and gas industry. If you want to know more, check out Mosser et al’s paper (referenced at the end of this article).

Conclusion

Machine learning is such a large and complex field that an article of this length cannot do justice to its intricacies. But I hope it gives you enough of an overview of machine learning and some of the different techniques available, along with some ideas for how it might be used in the resources industry – particularly geoscience. I hope that taking a data-first approach helps you to frame your question – is a machine learning application the fit-for-purpose solution for your data science problem? – and to select the technique that will work best with your data.

References
Goodfellow I, Bengio Y and Courville A, 2016. Deep Learning (MIT Press).

Lucey P, 2018. Future of work – the contractor, service provider and consultant [online]. Available from: www.linkedin.com/pulse/future-work-contractor-service-provider-consultant-paul-lucey

Mosser L, Dramsch J, Kimman W, Purves S, De la Fuente A and Ganssle G, 2018. ‘Rapid seismic domain transfer: Seismic velocity inversion and modeling using deep generative neural networks’ [online]. Available from: www.researchgate.net/publication/325333052_Rapid_seismic_domain_transfer_Seismic_velocity_inversion_and_modeling_using_deep_generative_neural_networks 

Panetta K, 2017. Top Trends in the Gartner Hype Cycle for Emerging Technologies, 2017 [online]. Available from: www.gartner.com/smarterwithgartner/top-trends-in-the-gartner-hype-cycle-for-emerging-technologies-2017/.
