Introduction
In many situations, pit designs and production schedules are highly sensitive to small changes in economic variables and operating parameters. Mineable reserves can shrink or expand dramatically when one or a combination of parameters changes slightly. Historically, many deposits have simply stayed dormant, waiting for economics to improve or for someone to find better ore. In other cases, a deposit is put into production with the dire consequence of insolvency. Yet there are many situations where the same deposit, rejected by many companies, eventually goes into production and yields profits even though the data have changed little. One might conclude that this is simply luck, but in many cases the success is founded on very different analytical methods that quantify risk. The three factors that most influence success or failure are the credibility of the feasibility study, the efficiency of production, and changes in economic parameters such as the price of metal. The real difference lies in "getting the most" out of the data so as to identify and minimize the risk associated with these factors. This is done by applying specialized computer techniques that help quantify the areas of uncertainty and eliminate doubt. Four important tactics help attain these objectives:
Make the most out of the information at hand
Use objective calculation methods
Identify sensitive areas and parameters
Quantify uncertainty to define degree of risk
This article describes a geologically complex property with a marginal distribution of lead-zinc ore. It has been drilled and re-analyzed repeatedly in attempts to rationalize its economic worth, and for this reason it has been chosen to illustrate the use of these techniques. The application of these key methods, and how they can be used to prevent, isolate and deal with uncertainty, is discussed below.
Case Study
The case study involves a low grade mineral deposit pin-cushioned with 250 drill holes in an attempt to better define the lead, zinc and silver mineralization below surface. Although the terrain is relatively flat, the deposit is covered with overburden and surrounded by steeply dipping interbedded bands of argillite, shale, dolomite, sandstone, siltstone and limestone. The orezone is loosely defined, since it is unclear whether the mineralization is actually confined to it. It dips steeply and is fragmented into a series of steeply dipping slabs interbedded with unmineralized sedimentary bands. In addition, a near-vertical dyke intrudes the layers, and the deposit is further complicated by steep faults which displace the beds and the orezone. Supporting information is also sparse: a topographic map and a sketchy structural interpretation of the faults and orezone.
The Problems
Marginal grade. Zinc is the primary mineral, with lead and silver secondary. The zinc appears to be largely contained within a steeply dipping orezone, but this may not be so. The orezone is made up of bands dominated by values ranging from 3 to 8% zinc. Since the economic cutoff is around 4%, and the average is 5.3% with a standard deviation of ±4%, the majority of values are distributed around a marginal modal value.
Orezone clusters. The deposit has three orezone clusters of marginally economic material. Multiple pit bottoms occur at depth, and as the pits expand upwards they must share overlapping common waste. Such a situation becomes hard to assess in terms of economic viability because of the potential for double counting that shared waste. The situation also requires detailed geological interpretation to decide whether the mineral is actually contained in a distinct orezone.
Geological complexity. The deposit is a complex system of steeply dipping interbedded layers, displacement faults and intrusive dykes, all covered by a layer of overburden. All of these affect the distribution and location of the mineral and require good definition.
Information quality. Sometimes information was not gathered because it was assumed that mineralization did not occur in a particular unit. Occasionally the holes did not penetrate far enough. Either way the effect is the same: gaps in the data make it difficult to objectively estimate the distribution of grades.
Geological Model
A typical interpreted cross section was created as shown (green dyke, red orezone, grey overburden). A complete series of these sections was interpreted, and the computer modeling system was used to join them into a solid 3-dimensional model of the subsurface. The solid volume representing the orezone was shown in the previous figure. This first step in managing uncertainty is to eliminate the doubt related to the geological model.
Spatial Analysis
In analyzing the distributions of minerals, simple histogram plots revealed that the bulk of the values resided around the mean zinc value of 4%. A standard deviation of 4 essentially mocks the potential accuracy of the data, suggesting that a predicted value has only a 68% chance of falling anywhere between 0 and 8% zinc. This means not only that the variation is severe, but that the bulk of the values making up the marginal orezone are uncertain. Such a conclusion offers no level of comfort on which to base a production decision.
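As a sanity check on these figures, here is a minimal sketch using SciPy. It assumes the grades are approximately normally distributed with the quoted mean and standard deviation, an assumption the article does not state:

```python
from scipy.stats import norm

# Quoted sample statistics for zinc (normality is assumed for illustration).
mean, std = 4.0, 4.0   # % zinc
cutoff = 4.0           # % zinc economic cutoff

# About 68% of values fall within one standard deviation of the mean.
low, high = mean - std, mean + std
print(f"68% interval: {low:.1f}% to {high:.1f}% zinc")   # 0.0% to 8.0%

# Probability that a value falls below the economic cutoff.
p_below = norm.cdf(cutoff, loc=mean, scale=std)
print(f"P(grade < cutoff) = {p_below:.2f}")              # 0.50
```

Under these assumptions roughly half the predicted values sit below cutoff, which is exactly why the deposit is marginal.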
Geostatistics
3D geostatistics can be a useful tool in such cases. While simple statistical procedures measure the variation of a set of samples, they are not designed to consider the spatial, or distance, relationships between samples. Geostatistics extends the analysis by taking the distance between samples into account. The approach is data adaptive in that it measures the spatial characteristics of the specific dataset being examined. Additionally, one can perform the analysis in specific directions, or examine the behavior within specific geological units, to determine spatial continuity. Such procedures permit the definition of specific volumes in order to isolate specific populations. This is very useful in that it quantifies a potential predictor model for estimating between known values, producing an objective measurement of continuity at any point in the volume.
The semi-variogram indicates reasonable continuity between samples in all directions up to a range of 30 meters, beyond which samples become unrelated. The nugget is 4 and the sill is 15. These become the objective parameters required to model the orebody and to define the degree of confidence in the estimates.
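A sketch of such a model, using the quoted nugget (4), sill (15) and range (30 m), is shown below. The spherical shape is an assumption; the article does not name the model type:

```python
import numpy as np

def spherical_variogram(h, nugget=4.0, sill=15.0, range_=30.0):
    """Spherical semi-variogram: nugget plus structured variance up to the range."""
    h = np.asarray(h, dtype=float)
    c = sill - nugget                           # structured (spatial) variance
    gamma = np.where(
        h < range_,
        nugget + c * (1.5 * h / range_ - 0.5 * (h / range_) ** 3),
        sill,                                   # samples beyond 30 m are unrelated
    )
    return np.where(h == 0, 0.0, gamma)         # gamma(0) = 0 by definition

print(spherical_variogram([0, 10, 30, 50]))     # -> [0.  9.3  15.  15.] (approx.)
```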
Estimation
Having quantified the continuity of mineralization from the semi-variogram analysis, the next step is to use this model to predict a continuous distribution of the mineralization into a regular 3D grid within the orezone. This process is called kriging. There are several benefits to its use. First, it is data adaptive, calculating regular estimates for the specific data set being analyzed. Second, it provides an estimate of the error at each estimated point, which can be used to determine confidence. Third, it is an objective process, eliminating the need to guess at the interpolation parameters required by other estimation systems. In the study, this error estimate can be modeled and plotted on top of the grade estimates to display the degree of confidence. Because three minerals were involved, a zinc equivalent was used and a grade model computed. The figure below shows the result of the estimation in the form of a three-dimensional isosurface, showing the outer limit (cutoff of 4% zinc equivalent) of the mineralized volume.
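To make the mechanics concrete, here is a minimal ordinary kriging sketch in NumPy using the spherical model above. The sample coordinates and grades are invented for illustration, and the routine estimates a single point:

```python
import numpy as np

def spherical(h, nugget=4.0, sill=15.0, range_=30.0):
    """Same spherical semi-variogram model as sketched earlier."""
    h = np.asarray(h, dtype=float)
    c = sill - nugget
    g = np.where(h < range_, nugget + c * (1.5*h/range_ - 0.5*(h/range_)**3), sill)
    return np.where(h == 0, 0.0, g)

def ordinary_krige(xy, z, target):
    """Estimate grade and kriging (error) variance at one target point."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)  # sample-sample distances
    A = np.ones((n + 1, n + 1)); A[:n, :n] = spherical(d); A[-1, -1] = 0.0
    d0 = np.linalg.norm(xy - target, axis=1)                     # sample-target distances
    b = np.append(spherical(d0), 1.0)
    w = np.linalg.solve(A, b)                                    # weights + Lagrange multiplier
    estimate = w[:n] @ z
    variance = w @ b                                             # kriging variance at the point
    return estimate, variance

# Hypothetical samples: (x, y) in metres, zinc-equivalent grades in %.
xy = np.array([[0., 0.], [20., 5.], [5., 25.], [30., 30.]])
z  = np.array([5.1, 3.8, 6.2, 4.4])
est, var = ordinary_krige(xy, z, np.array([12., 12.]))
print(f"estimate = {est:.2f}% Zn eq, kriging variance = {var:.2f}")
```

The per-point variance returned here is the error estimate the article refers to: it can be gridded alongside the grades and mapped to show confidence.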
There are several aspects of this procedure that help manage uncertainty. First, we are using tools that allow us to quantify the degree of continuity and the degree of accuracy. Second, the process is objective and limited to specific volumes which may exhibit different characteristics. It becomes possible to create more realistic models of local grade distributions, to identify problem areas, and to quantify the degree of risk. In the example isosurface shown, the error results could have been mapped onto the surface, showing the areas of least (or best) confidence. This ability and its benefits are illustrated later on.
Pit Optimization
A method of precise pit optimization commonly used in the mining industry is the Lerchs-Grossmann method. The technique, founded in 3-dimensional graph theory, relies on a regular system of blocks which defines the value (profit or loss) and type (ore or waste) of the material contained in each block. Each block receives a positive or negative value representing the dollar profit or loss expected from excavating it and extracting its mineral.
The optimization process, which is totally automated, works from the top down through every combination of blocks that satisfies the wall slope constraints to find the one solution (the optimum pit) with the largest positive value. The process can then be repeated with a raised profit threshold, forcing the algorithm to find smaller, more profitable stages closer to surface.
The Lerchs-Grossmann method uses the geological block model as its basis and requires wall slope constraints as determined by geotechnical analysis. It also requires certain economic conditions: the predicted cost of mining and processing the ore, the predicted recovery, the predicted revenue generated from the sale of the processed material, and the predicted cost of mining and disposing of the waste. The last parameter required is the cutoff grade, generally the grade at which mining a block at surface breaks even, that is, its cost equals its revenue.
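As an illustration of how a block's dollar value and the break-even cutoff follow from these economic parameters, here is a sketch with invented economics, chosen so the break-even cutoff lands near the study's 4% zinc equivalent:

```python
# Hypothetical economics for a single block (all values invented for illustration).
PRICE     = 725.0   # $ per ton of payable zinc-equivalent metal
RECOVERY  = 0.80    # mill recovery fraction
MINE_COST = 3.20    # $ per ton mined (ore or waste)
PROC_COST = 20.00   # $ per ton processed

def block_value(tons, grade):
    """Profit (+) or loss (-) of excavating one block; grade as a fraction."""
    revenue = tons * grade * RECOVERY * PRICE
    if revenue > tons * PROC_COST:        # worth processing as ore
        return revenue - tons * (MINE_COST + PROC_COST)
    return -tons * MINE_COST              # otherwise mined as waste

# Break-even cutoff at surface: revenue exactly covers mining + processing.
cutoff = (MINE_COST + PROC_COST) / (RECOVERY * PRICE)
print(f"break-even cutoff = {cutoff:.1%} zinc equivalent")   # 4.0%
```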
In the case study, the mineral values were converted to a profit model on the basis of a 4% cutoff (zinc equivalent), creating a profit/loss model. The ultimate pit was then computed automatically by finding the pit solution that maximized the net profit in the model, also shown in the same figure. The optimized ultimate pit revealed three distinct pit bottoms. By using a rigorous optimization method, it becomes more certain that the common waste above the three pods can be supported economically.
$ Bias   | Waste (Mt) | Ore (Mt) | Bench | Grade (% Zn eq)
0        | 137        | 9.2      | 30    | 7.40
200,000  | 115        | 9.1      | 29    | 7.41
300,000  | 104        | 8.7      | 29    | 7.46
400,000  | 23         | 3.8      | 19    | 8.54
500,000  | 12         | 2.3      | 16    | 9.02
The next feature of this method is particularly useful. By raising the cutoff (or bias) on the profit model, it becomes possible to develop a sequence of pits of increasing profitability, closer to surface. This is a totally objective technique for calculating stages. A common industry application is to seek out a first stage pit (2 to 3 years of high grade mineral) able to pay off the initial financing.
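In outline, the loop looks like the sketch below. `optimize_pit` stands in for any Lerchs-Grossmann implementation, and the biasing scheme shown (a per-block penalty) is one common parameterization, assumed here since the article does not spell out how its dollar bias is applied:

```python
def nested_stages(optimize_pit, block_values, biases):
    """Re-run the pit optimizer at increasing profit thresholds (biases);
    a higher bias keeps only material that clears the threshold, giving
    smaller, richer pits nested inside the ultimate pit."""
    stages = {}
    for bias in sorted(biases):
        biased = {block: value - bias for block, value in block_values.items()}
        stages[bias] = optimize_pit(biased)
    return stages

# Trivial stand-in optimizer for demonstration only; a real implementation
# honors slope constraints and block precedence.
demo_pit = lambda values: {b for b, v in values.items() if v > 0}
print(nested_stages(demo_pit, {"A": 5.0, "B": 1.5, "C": -2.0}, [0, 1, 2]))
```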
In the example, a sequence of pit stages was optimized by increasing the bias. The results are summarized in the table above, with tonnages in millions of tons. It shows the changes in tonnage as the profit threshold is increased to produce each new optimum pit. Closer inspection of this table reveals a rather dramatic change in tonnage between benches 29 and 19, where a change in bias from $300,000 to $400,000 drops the ore reserves from 8.7 to 3.8 million tons while the grade increases from 7.5 to 8.5%. This pinpoints a highly sensitive area that requires closer inspection to see what causes such a massive change. The first useful tactic is to look at the pit and the profit model. Using a series of cut planes and enhanced visualization, it becomes possible to inspect the variation of profit within the isosurface inside the pit.
A vertical slice through the three profit volumes reveals their internal distribution. This is a quick way of inspecting the internal variation to find the highest-value and most sensitive areas, including those most influential in terms of volume ramifications. The standard error, a direct measure of uncertainty, is mapped on top of slices through the profit model to show the areas of highest uncertainty. These values can be displayed in 2D or 3D to determine which areas are poorly sampled.
In this study the results, displayed in horizontal slices, revealed the bench at 20 meters below sea level as the most influential, containing high uncertainty values. The next step is to analyze this bench more carefully.
Indicator Kriging
Geostatistics provides the ability to model spatial continuity. This concept can be applied to minimize uncertainty in a process called indicator kriging.
The concept is similar to the usual geostatistical approach except that a specific cutoff is used to assign zero to the values above it and one to the values below, or vice-versa. A semi-variogram is then computed from these indicator values to model their variation. By predicting the indicators rather than the grades, the estimates range between 0 and 1, a direct measurement of probability. Here 0 represents the best chance of being ore (above the cutoff) and 1 the worst. In this way an estimate of probability is produced at each point, and a 3-dimensional probability model is created.
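The transform itself is a one-liner; a sketch with hypothetical sample grades, following the article's convention of 0 for ore and 1 for waste:

```python
import numpy as np

# Indicator transform at the 4% cutoff: 0 if the sample is at or above the
# cutoff (ore), 1 if below (waste). The grades here are hypothetical.
CUTOFF = 4.0
grades = np.array([5.1, 3.8, 6.2, 2.9, 4.4])        # % zinc
indicators = np.where(grades >= CUTOFF, 0.0, 1.0)   # -> [0. 1. 0. 1. 0.]
print(indicators)

# Kriging these 0/1 values, with a semi-variogram fitted to the indicators
# themselves (the ordinary kriging sketch shown earlier can be reused as-is),
# yields an estimate between 0 and 1 at each grid node: near 0 means a good
# chance of ore, near 1 a poor chance.
```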
Having computed a new isosurface of probabilities, or indicators, it becomes possible to dissect it just as with the grade and profit values. This provides another tool for inspecting critical areas, such as the bottoms of the pits or the levels where the pits expand or contract quickly.
Such areas, which most seriously impact uncertainty, can be identified and then inspected at a more microscopic level. With the objective of further quantifying uncertainty in these areas, these tools allow one to find out:
What key information results rely on
How good or bad this information is
How to improve confidence
Further inspection of the profit or grade models is performed by taking slices through them. These can be cut in any direction to reveal the indicators within, as in the example shown in Figure 6. Not only does this reveal the areas of highest (and lowest) uncertainty, it also points out where information is lacking.
The next step involves a more microscopic look at level 20, where the material is particularly influential in one of the stages of pit design. A color map of the indicators at this level makes it possible to visualize and inspect specific problem areas. Remembering that a value of 1 indicates the worst case (least confident estimate) while the lowest values indicate the best case (most confident), this process can isolate a specific sensitive bench and measure its degree of uncertainty. The contours of probability are shown in a more traditional display along with the optimized pit limits, together with the drill holes penetrating the bench.
The bottom of stage two is found here, as well as the pushed-back wall of the ultimate pit. The contours, or the lack of them, reveal that certain areas are poorly sampled or carry high uncertainty. One such problem area can be identified to the south, against the pit wall.
Another problem area appears to the north, where definition is poor around a specific drill hole. Here sampling is incomplete, contributing to the poor confidence in the area. By plotting the responsible drill holes on the plan, it is possible to identify the specific sources of uncertainty that contribute the most and take corrective action.
Another way to look at this problem is to overlay the original contoured grade values in the same location. This explains more about the actual samples and their influence. Obviously the isolated high samples that most influence the pit wall are of interest, particularly where confidence is poor. This approach can be employed to identify the high grades that are most suspect.
Using this approach, it becomes simple to determine where new samples, new drilling, sample checking, and so on, are best undertaken to help lower the uncertainty. Similarly, the whole deposit model can be scrutinized the same way to determine sensitive areas where better definition is required to increase the level of confidence in the estimates and the resulting pit designs.
Mine Sequencing Optimization
This is a relatively new technique. The method, now commercially available, addresses a technical gap between scheduling and optimization by providing an objective mathematical method of defining production phases. It works by maximizing profit with a heuristic algorithm based on the floating cone method. Applied to a pre-defined block model and a computed profit model, the process provides a definition of the best starting position and the most profitable sequence of mining according to grade, tons or strip ratio, while considering market constraints for multiple variables. It also provides an interactive pre-planning and production control mechanism for defining an optimized production sequence.
The major feature of the technique is that it is able to produce a sequence, block by block if necessary, in as much detail as needed. Thus, instead of producing a set of stages, as in the pit optimization, the technique is able to determine phases inside a predefined pit limit. The transition between and through stages can be examined by displaying the ore, waste and mined-out blocks at any point in a phase. Several criteria can be used to drive the sequence: seeking the best cash flow, optimizing on tons or grade, determining the optimum starting points to define the best scenarios, imposing stripping ratio or cutoff bounds, and/or defining the total phase or ore requirements. As input, the sequencer uses a set of blocks inside a pit, such as the stages defined by the Lerchs-Grossmann optimization.
This technique follows the objective of minimizing uncertainty by operating in 3D on a more local scale that considers the practical aspects of scheduling. Blocks above, contiguous blocks, relative block positions and practical physical production constraints are all given due regard within a mathematical process.
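The commercial sequencer itself is proprietary; as a rough illustration of the floating cone idea it is built on, here is a minimal sketch. The 45-degree slope, the block values and the grid are all invented for the example:

```python
import numpy as np

def floating_cone(values):
    """Heuristic pit sketch: values[k, i, j] is the block profit on bench k
    (k = 0 at surface). A 45-degree slope is assumed, so the cone of blocks
    that must come out above a block widens by one block per bench.
    Returns a boolean 'mined' mask."""
    nk, ni, nj = values.shape
    mined = np.zeros(values.shape, dtype=bool)
    changed = True
    while changed:                               # repeat: mining one cone can
        changed = False                          # make a neighboring cone pay
        for k in range(nk):
            for i in range(ni):
                for j in range(nj):
                    if mined[k, i, j] or values[k, i, j] <= 0:
                        continue                 # float cones only on ore blocks
                    cone = [(kk, ii, jj)
                            for kk in range(k + 1)
                            for ii in range(max(0, i - (k - kk)), min(ni, i + (k - kk) + 1))
                            for jj in range(max(0, j - (k - kk)), min(nj, j + (k - kk) + 1))
                            if not mined[kk, ii, jj]]
                    if sum(values[c] for c in cone) >= 0:
                        for c in cone:
                            mined[c] = True      # mine the whole cone at once
                        changed = True
    return mined

# Demo: waste costs $1 per block, one rich ore block three benches down.
values = np.full((3, 7, 7), -1.0)
values[2, 3, 3] = 50.0
print(floating_cone(values).sum(), "blocks mined")   # 35 (the ore block + its cone)
```

The cone construction is what enforces the precedence rule discussed below: no block is ever taken without first taking everything above it within the slope.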
In the study, the sequencer was applied to the profit and grade models. It was given the objective of finding the best sequence for a set of 4 phases within the optimized stage design. It was given constraints such as maintaining a minimum grade and finding specific tons of ore for each sequence.
By employing techniques such as the sequencer, a further refinement of the mining process is developed, this time considering more of the practical aspects of mining. Consideration is also given to the 3D relationship of blocks, so no block can be removed until the ones above it have been removed first. Once again, the purpose has been to eliminate a typically subjective process and reduce uncertainty.
Summary
A variety of techniques have been applied to the same low grade project, all of them tools that help manage uncertainty. Although it is unlikely that all of them would be used on one project, each can be used effectively to help reduce or identify uncertainty. In summary, the following processes and techniques have been illustrated:
Constructing accurate 3-dimensional geological models
Using the process of geostatistics to apply data adaptive estimation techniques to prediction of grades
Considering the impact of geologically controlled grade distributions
Converting multiple minerals into an equivalent, then operating on a more universal denominator called profit
Using optimization algorithms to reduce the subjectivity in designing stage pits
Seeking out the most economically sensitive volumes within the deposit
Using the standard error to pinpoint problem areas
Applying indicator kriging to compute a measure of uncertainty
Developing sample enhancement programs on the basis of improving identified uncertain areas or suspicious data
Using a sequencing optimizer to develop objective mining sequences based on key requirements
Using advanced visualization methods to dissect and inspect the different models
In using the above techniques, the purpose has been to:
Use some advanced computational procedure to take the subjectivity out of the calculation
Use advanced computational techniques to quantify the areas of uncertainty
Use visualization techniques to present ways of determining areas of uncertainty
Use the techniques as tools to assist in planning ways to reduce uncertainty
Managing uncertainty is a combination of art and science. One must be able to employ these techniques as tools to identify and correct the areas where the greatest uncertainty is found. There is no absolute method for doing this, but there are many ways to minimize uncertainty. Hopefully this article has presented some of these tools and shown how they can be applied.