Introduction to Random Forests



This piece was adapted with permission from the author. View the original here.

Random forests, also known as random decision forests, are a popular ensemble method that can be used to build predictive models for both classification and regression problems. Ensemble methods use multiple learning models to gain better predictive results — in the case of a random forest, the model creates an entire forest of random uncorrelated decision trees to arrive at the best possible answer.

To demonstrate how this works in practice — specifically in a classification context — I’ll be walking you through an example using a famous data set from the University of California, Irvine (UCI) Machine Learning Repository. The data set, called the Breast Cancer Wisconsin (Diagnostic) Data Set, deals with binary classification and includes features computed from digitized images of biopsies. The data set can be downloaded here.

To follow this tutorial, you will need some familiarity with classification and regression tree (CART) modeling. I will provide a brief overview of different CART methodologies that are relevant to random forest, beginning with decision trees. If you’d like to brush up on your knowledge of CART modeling before beginning the tutorial, I highly recommend reading Chapter 8 of the book “An Introduction to Statistical Learning with Applications in R,” which can be downloaded here.

Decision Trees

Decision trees are simple but intuitive models that utilize a top-down approach in which the root node creates binary splits until a certain criterion is met. This binary splitting of nodes provides a predicted value based on the interior nodes leading to the terminal (final) nodes. In a classification context, a decision tree will output a predicted target class for each terminal node produced.

Although intuitive, decision trees have limitations that prevent them from being useful in machine learning applications. You can learn more about implementing a decision tree here.

Limitations to Decision Trees

Decision trees tend to have high variance when they utilize different training and test sets of the same data, since they tend to overfit on training data. This leads to poor performance on unseen data. Unfortunately, this limits the usage of decision trees in predictive modeling. However, using ensemble methods, we can create models that utilize underlying decision trees as a foundation for producing powerful results.
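To make the variance problem concrete, here is a minimal sketch (using synthetic data and sklearn's DecisionTreeClassifier, not the tutorial's data set) showing a fully grown tree memorizing noisy training labels while scoring worse on held-out data:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# Synthetic, noisy binary classification data (hypothetical, for illustration)
rng = np.random.RandomState(42)
X = rng.randn(400, 5)
y = (X[:, 0] + 0.5 * rng.randn(400) > 0).astype(int)  # noisy labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5,
                                                    random_state=42)

# A tree grown without any pruning or depth limit
tree = DecisionTreeClassifier(random_state=42)
tree.fit(X_train, y_train)

train_acc = tree.score(X_train, y_train)
test_acc = tree.score(X_test, y_test)
print(train_acc)  # 1.0 -- the tree memorizes the training set
print(test_acc)   # noticeably lower on unseen data
```

The gap between the two scores is exactly the overfitting behavior that ensembles of trees are designed to tame.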

Bootstrap Aggregating Trees

Through a process known as bootstrap aggregating (or bagging), it’s possible to create an ensemble (forest) of trees where multiple training sets are generated with replacement, meaning data instances — or in the case of this tutorial, patients — can be repeated. Once the training sets are created, a CART model can be trained on each subsample.

This approach helps reduce variance by averaging the ensemble's results, creating a majority-votes model. Another important feature of bagging trees is that the resulting model uses the entire feature space when considering node splits. Bagging also allows the trees to grow without pruning, which increases tree depth and yields low bias (at the cost of higher variance in the individual trees), helping improve predictive power.

However, a downside to this process is that utilizing the entire feature space creates a risk of correlation between trees, which limits how much variance the averaging can remove.

Limitations to Bagging Trees

The main limitation of bagging trees is that every tree considers the entire feature space when creating splits. If a few variables in the feature space are strongly predictive, you run the risk of having a forest of highly correlated trees, and averaging correlated trees does little to reduce variance.

However, a simple tweak of the bagging trees methodology can prove advantageous to the model’s predictive power.

Random Forest

Random forest aims to reduce the previously mentioned correlation issue by choosing only a subsample of the feature space at each split. Essentially, it aims to de-correlate the trees and prune them by setting a stopping criterion for node splits, which I will cover in more detail later.
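The per-split feature subsampling can be sketched as follows (the choice of m = sqrt(p) is a common convention, not something this data set dictates):

```python
import numpy as np

rng = np.random.RandomState(42)
p = 30               # total number of features (as in this data set)
m = int(np.sqrt(p))  # a common default: consider ~sqrt(p) features per split

# Unlike bagging, a random forest draws a fresh subset of m features
# at EVERY node split, which de-correlates the trees
split_candidates = rng.choice(p, size=m, replace=False)

print(m)                         # 5
print(sorted(split_candidates))  # 5 distinct feature indices out of 30
```

Because each split only sees a few candidate features, a single dominant predictor can no longer appear at the top of every tree.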

After this tutorial you will be familiar with how to implement the following in Python:

  • Basic exploratory analysis
  • Training and test set creation
  • Model fitting using sklearn
  • Hyperparameter optimization
  • Out-of-bag error rate
  • Calculating variable importance
  • Test set calculations
  • Cross validation
  • ROC curve estimation
Business Uses

Random forest can be applied to a plethora of data circumstances, in both classification and regression settings.

Load Packages

To get started, let's load our modules into our Python environment. I am employing a Jupyter Notebook while running inside a virtualenv environment (the requirements.txt file associated with this repo contains the module information for reproducibility).

For the purposes of this tutorial, I will primarily be using the SciPy stack, especially pandas, matplotlib, seaborn and sklearn.

    # Import modules
    %matplotlib inline

    import time
    import random
    import numpy as np
    import pandas as pd
    import seaborn as sns
    import matplotlib.pyplot as plt
    from sklearn.metrics import roc_curve, auc
    from sklearn.metrics import confusion_matrix
    from sklearn.metrics import classification_report
    from sklearn.model_selection import KFold, cross_val_score
    from sklearn.model_selection import train_test_split, GridSearchCV
    from sklearn.ensemble import RandomForestClassifier
    from urllib.request import urlopen

    plt.style.use('ggplot')
    pd.set_option('display.max_columns', 500)

Load Data

Let's load the data into a Pandas dataframe using urlopen from the urllib.request module.

Instead of downloading a csv, I grabbed the data straight from the UCI Machine Learning Database using an http request, a method inspired by Python tutorials from the University of California, Santa Barbara's data science course.

    # Loading data and cleaning data set
    UCI_data_URL = 'https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/wdbc.data'

I recommend that you keep a static file for your data set as well.

Now, create a list with the appropriate names and set them as the data frame's column names. Then, load them into a pandas data frame.

    names = ['id_number', 'diagnosis', 'radius_mean',
             'texture_mean', 'perimeter_mean', 'area_mean',
             'smoothness_mean', 'compactness_mean',
             'concavity_mean', 'concave_points_mean',
             'symmetry_mean', 'fractal_dimension_mean',
             'radius_se', 'texture_se', 'perimeter_se',
             'area_se', 'smoothness_se', 'compactness_se',
             'concavity_se', 'concave_points_se',
             'symmetry_se', 'fractal_dimension_se',
             'radius_worst', 'texture_worst',
             'perimeter_worst', 'area_worst',
             'smoothness_worst', 'compactness_worst',
             'concavity_worst', 'concave_points_worst',
             'symmetry_worst', 'fractal_dimension_worst']

    dx = ['Benign', 'Malignant']
    breast_cancer = pd.read_csv(urlopen(UCI_data_URL), names=names)


You'll need to do some minor cleaning, such as setting the id_number to the data frame index and converting the diagnosis to the standard binary 1, 0 representation using the map() function.

    # Setting 'id_number' as our index
    breast_cancer.set_index(['id_number'], inplace=True)
    # Converted to binary to help later on with models and plots
    breast_cancer['diagnosis'] = breast_cancer['diagnosis'].map({'M': 1, 'B': 0})

Missing Values

Given context of the data set, I know there is no missing data. Even so, let's run a for-loop to see if there are any missing values in our columns. Below, you'll see the column name and total missing values for that column.

    # Check each column for missing values
    for col in breast_cancer:
        if breast_cancer[col].isnull().sum() != 0:
            print(col, breast_cancer[col].isnull().sum())
    print('Sanity check! No missing values found!')

Sanity check! No missing values found! Next, let's save the feature names (everything after id_number and diagnosis) for later use, since id_number won't be relevant for the random forest model.

    # For later use in CART models
    names_index = names[2:]

Let's preview the data set utilizing the head() function, which will provide the first five values of our data frame.

    breast_cancer.head()
Next, let's get the dimensions of the data set. The first value is the number of patients and the second value is the number of features. It's important to print the data types of your data set, as it will often be an indicator of missing data — and provide context for additional data cleaning.

    print("Here's the dimensions of our data frame:\n", breast_cancer.shape)
    print("Here's the data types of our columns:\n", breast_cancer.dtypes)

Here's the dimensions of our data frame: (569, 31)

Here's the data types of our columns:

    diagnosis                    int64
    radius_mean                float64
    texture_mean               float64
    perimeter_mean             float64
    area_mean                  float64
    smoothness_mean            float64
    compactness_mean           float64
    concavity_mean             float64
    concave_points_mean        float64
    symmetry_mean              float64
    fractal_dimension_mean     float64
    radius_se                  float64
    texture_se                 float64
    perimeter_se               float64
    area_se                    float64
    smoothness_se              float64
    compactness_se             float64
    concavity_se               float64
    concave_points_se          float64
    symmetry_se                float64
    fractal_dimension_se       float64
    radius_worst               float64
    texture_worst              float64
    perimeter_worst            float64
    area_worst                 float64
    smoothness_worst           float64
    compactness_worst          float64
    concavity_worst            float64
    concave_points_worst       float64
    symmetry_worst             float64
    fractal_dimension_worst    float64
    dtype: object

Class Imbalance

The distribution of diagnoses is important because it speaks to class imbalance within machine learning and data mining applications. Class imbalance is a term used to describe when a target class within a data set is outnumbered by another target class (or classes). This can create misleading accuracy metrics, known as an accuracy paradox. To make sure our target classes aren't imbalanced, create a function that will output the distribution of the target classes.

Note: If your data set suffers from class imbalance, I suggest reading up on upsampling and downsampling.
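As a quick illustration of what upsampling looks like in practice, here is a sketch using sklearn's resample utility on hypothetical imbalanced labels (not this tutorial's data set, which is balanced enough to skip this step):

```python
import numpy as np
from sklearn.utils import resample

rng = np.random.RandomState(42)
# Hypothetical imbalanced labels: 90 benign (0), 10 malignant (1)
y = np.array([0] * 90 + [1] * 10)
X = rng.randn(100, 3)

minority_mask = (y == 1)
X_min, y_min = X[minority_mask], y[minority_mask]

# Upsample the minority class with replacement until the classes match
X_up, y_up = resample(X_min, y_min, replace=True,
                      n_samples=90, random_state=42)

X_bal = np.vstack([X[~minority_mask], X_up])
y_bal = np.concatenate([y[~minority_mask], y_up])
print(np.bincount(y_bal))  # [90 90]
```

Downsampling works the same way in reverse: resample the majority class without replacement down to the minority class size.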

    def print_dx_perc(data_frame, col):
        """Function used to print class distribution for our data set"""
        dx_vals = data_frame[col].value_counts()
        dx_vals = dx_vals.reset_index()
        # Create a lambda to output the percentage
        f = lambda x, y: 100 * (x / sum(y))
        for i in range(0, len(dx)):
            print('{0} accounts for {1:.2f}% of the diagnosis class'
                  .format(dx[i], f(dx_vals[col].iloc[i], dx_vals[col])))

    print_dx_perc(breast_cancer, 'diagnosis')

Benign results account for 62.74% of the diagnosis class. Malignant results account for 37.26% of the diagnosis class. Fortunately, this data set does not suffer from class imbalance.

Next, we will employ the describe() function, which gives us standard descriptive statistics for each feature, including mean, standard deviation, minimum value, maximum value, and range intervals.

    breast_cancer.describe()
You can see in the maximum row of the chart that our features vary widely in scale; this will be important as we consider classification models. Standardization is an important requirement for many classification models and should be handled during pre-processing. Some models (like neural networks) can perform poorly if pre-processing isn't considered, so the describe() output is a good indicator of whether standardization is needed. Fortunately, random forest does not require any pre-processing. (For use of categorical data, see sklearn's Encoding Categorical Data.)

Creating Training and Test Sets

Let's split the data set into our training and test sets, which will be pseudo-randomly selected to create an 80-20 split. You will use the training set to train the model and perform some optimization. You will use the test set, which will act as unseen data, to assess model performance.

When using this method for machine learning, always be wary of utilizing your test set to create models. Data leakage is a common problem that can result in overfitting. More on data leakage can be found in this Kaggle article.

    feature_space = breast_cancer.iloc[:, breast_cancer.columns != 'diagnosis']
    feature_class = breast_cancer.iloc[:, breast_cancer.columns == 'diagnosis']

    training_set, test_set, class_set, test_class_set = train_test_split(feature_space,
                                                                         feature_class,
                                                                         test_size=0.20,
                                                                         random_state=42)

Note: What I mean when I say that we will "pseudo-randomly" select data is that we will use a random seed generator and set it equal to a number of our choosing. This will ensure the results are the same for anyone who uses this generator, and therefore, that they will be able to replicate this project. 
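The reproducibility point can be demonstrated on a tiny made-up array: two calls with the same random_state produce exactly the same split.

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)  # toy data, for illustration only
y = np.arange(10)

# The same random_state always yields the same "pseudo-random" split
a_train, a_test, _, _ = train_test_split(X, y, test_size=0.2, random_state=42)
b_train, b_test, _, _ = train_test_split(X, y, test_size=0.2, random_state=42)

print(np.array_equal(a_train, b_train))  # True
print(np.array_equal(a_test, b_test))    # True
```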

    # Cleaning test sets to avoid future warning messages
    class_set = class_set.values.ravel()
    test_class_set = test_class_set.values.ravel()

Fitting Random Forest

Now, let's create the model, starting with parameter tuning. Here are the parameters we will be tuning in this tutorial: 

  • max_depth: The maximum depth of each tree in the forest.
  • bootstrap: An indicator of whether or not we want to use bootstrap samples when building trees.
  • max_features: The maximum number of features that will be used in node splitting — the main difference I previously mentioned between bagging trees and random forest. Typically, you want a value that is less than p, where p is all features in your data set.
  • criterion: The metric used to assess the quality of each node split in the decision trees (e.g., gini impurity or entropy).

Once we've instantiated our model, we will go ahead and tune our parameters.

    # Set the random state for reproducibility
    fit_rf = RandomForestClassifier(random_state=42)

Hyperparameter Optimization

Utilizing the GridSearchCV functionality, let's create a dictionary with parameters we are looking to optimize to create the best model for our data. Setting the n_jobs to 3 tells the grid search to run three jobs in parallel, reducing the time the function will take to compute the best parameters. I included the timer to see how long different jobs took; that led me to ultimately decide to use three parallel jobs.

This will help set the parameters we will use to tune one final parameter: the number of trees in our forest.

    start = time.time()

    param_dist = {'max_depth': [2, 3, 4],
                  'bootstrap': [True, False],
                  'max_features': ['auto', 'sqrt', 'log2', None],
                  'criterion': ['gini', 'entropy']}

    cv_rf = GridSearchCV(fit_rf, cv=10,
                         param_grid=param_dist,
                         n_jobs=3)

    cv_rf.fit(training_set, class_set)
    print('Best Parameters using grid search: \n',
          cv_rf.best_params_)
    end = time.time()
    print('Time taken in grid search: {0: .2f}'.format(end - start))

Best Parameters using grid search: 
 {'max_features': 'log2', 'max_depth': 3, 'bootstrap': True, 'criterion': 'gini'}
Time taken in grid search:  9.20

Once we find the best parameter combination, we can set the parameters to our model.

Notice that I didn't set the bootstrap parameter here; I will explain why in the following section.

    # Set best parameters given by grid search
    fit_rf.set_params(criterion='gini',
                      max_features='log2',
                      max_depth=3)

RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
            max_depth=3, max_features='log2', max_leaf_nodes=None,
            min_impurity_split=1e-07, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            n_estimators=10, n_jobs=1, oob_score=False, random_state=42,
            verbose=0, warm_start=False)

Out-of-Bag Error Rate

Another useful feature of random forest is the concept of an out-of-bag (OOB) error rate. Because each tree is trained on a bootstrap sample containing roughly two-thirds of the distinct observations, the remaining one-third of unseen data can be used in a way that is advantageous to our accuracy metrics without being as computationally expensive as something like cross validation, for instance.
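The "roughly one-third" figure falls out of the bootstrap arithmetic, which we can verify directly:

```python
import numpy as np

# Chance a given observation is left out ("out of bag") when drawing a
# bootstrap sample of size n with replacement: (1 - 1/n)^n -> 1/e
n = 569  # number of patients in the data set
p_oob = (1 - 1 / n) ** n

print(round(p_oob, 3))  # ~0.368: roughly one-third of the data is out of bag
```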

As outlined below, when calculating OOB, two parameters have to be changed. Also, by utilizing a for-loop across a multitude of forest sizes, we can calculate the OOB error rate and use it to assess how many trees are appropriate for our model!

Note: When calculating the OOB error, refitting the model across forest sizes will produce warning messages, but the parameter changes are necessary to complete the oob_score_ calculation, as outlined in this example.

For the original analysis I compared k-nearest neighbors, random forest, and neural networks, so most of the analysis was done to compare across different models.


    fit_rf.set_params(warm_start=True,
                      oob_score=True)

    min_estimators = 15
    max_estimators = 1000

    error_rate = {}

    for i in range(min_estimators, max_estimators + 1):
        fit_rf.set_params(n_estimators=i)
        fit_rf.fit(training_set, class_set)

        oob_error = 1 - fit_rf.oob_score_
        error_rate[i] = oob_error

    # Convert dictionary to a pandas series for easy plotting
    oob_series = pd.Series(error_rate)

    fig, ax = plt.subplots(figsize=(10, 10))

    oob_series.plot(kind='line', color='red')
    plt.xlabel('n_estimators')
    plt.ylabel('OOB Error Rate')
    plt.title('OOB Error Rate Across various Forest sizes \n(From 15 to 1000 trees)')

The OOB error rate starts to oscillate at around 400 trees, so I will go ahead and use 400 trees in my forest. Using the pandas series object, I can easily find the OOB error rate for the estimator as follows:

print('OOB Error rate for 400 trees is: {0:.5f}'.format(oob_series[400]))

OOB Error rate for 400 trees is: 0.04835

Utilizing the OOB error rate that was created with the model gives us an unbiased error rate. Since the OOB rate is calculated during model estimation, it's helpful when cross validation and/or hyperparameter optimization prove too computationally expensive.

For the sake of this tutorial, I will go over other traditional methods for optimizing machine learning models, including the training and test error route and cross validation metrics.

Traditional Training and Test Set Split

In order for this methodology to work, we must set the number of trees calculated using the OOB error rate, remove the warm_start and oob_score parameters, and include the bootstrap parameter.

    fit_rf.set_params(n_estimators=400,
                      bootstrap=True,
                      warm_start=False,
                      oob_score=False)

    RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
            max_depth=3, max_features='log2', max_leaf_nodes=None,
            min_impurity_split=1e-07, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            n_estimators=400, n_jobs=1, oob_score=False, random_state=42,
            verbose=0, warm_start=False)

Training the Algorithm

Next, let's train the algorithm with the training and target class data sets we made earlier.

    fit_rf.fit(training_set, class_set)

    RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
            max_depth=3, max_features='log2', max_leaf_nodes=None,
            min_impurity_split=1e-07, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            n_estimators=400, n_jobs=1, oob_score=False, random_state=42,
            verbose=0, warm_start=False)

Variable Importance

Once we have trained the model, we can assess variable importance. One downside to using ensemble methods with decision trees is that you lose the interpretability a single tree gives. A single tree can outline for us important node splits, as well as variables that were important at each split.

Fortunately, ensemble methods that rely on CART models use a metric to evaluate the homogeneity of splits. Thus, when creating ensembles, these metrics can be utilized to give insight into the important variables used in the training of the model. Two such metrics are gini impurity and entropy. Many people favor gini impurity because it has a lower computational cost than entropy, which requires calculating the logarithmic function. For more information, I recommend reading this article.

Here we define each metric:

$$Gini\ Impurity = 1 - \sum_i p_i^2$$


$$Entropy = \sum_i -p_i * \log_2 p_i$$

where $p_i$ is defined as the proportion of subsamples that belong to a certain target class.

Since we are utilizing the gini impurity, the impurity measure reaches 0 when all target class labels are the same.
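Both measures are easy to compute by hand; a small sketch (the helper names are mine, not sklearn's) makes the pure-node behavior concrete:

```python
import numpy as np

def gini_impurity(labels):
    """Gini impurity: 1 - sum_i p_i^2 over class proportions p_i."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1 - np.sum(p ** 2)

def entropy(labels):
    """Entropy: -sum_i p_i * log2(p_i) over class proportions p_i."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

pure = [1, 1, 1, 1]   # all one class -> impurity 0
mixed = [0, 0, 1, 1]  # 50/50 split -> maximum impurity for two classes

print(gini_impurity(pure), gini_impurity(mixed))  # 0.0 0.5
print(entropy(mixed))                             # 1.0
```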

Let's access the feature importance of the model and use a helper function to output the importance of our variables in descending order.

    importances_rf = fit_rf.feature_importances_
    indices_rf = np.argsort(importances_rf)[::-1]

    def variable_importance(importance, indices):
        """
        Prints dependent variable names ordered from largest to smallest
        based on gini or information gain for CART model.

        importance: Array returned from feature_importances_ for CART
                    models organized by dataframe index
        indices:    Organized index of dataframe from largest to smallest
                    based on feature_importances_

        Returns print statement outputting variable importance in
        descending order
        """
        print("Feature ranking:")
        for f in range(len(names_index)):
            print("%d. The feature '%s' has a Mean Decrease in Gini of %f"
                  % (f + 1, names_index[indices[f]], importance[indices[f]]))

    variable_importance(importances_rf, indices_rf)

Feature ranking:
1. The feature 'area_worst' has a Mean Decrease in Gini of 0.129856
2. The feature 'perimeter_worst' has a Mean Decrease in Gini of 0.120953
3. The feature 'concave_points_worst' has a Mean Decrease in Gini of 0.115548
4. The feature 'concave_points_mean' has a Mean Decrease in Gini of 0.100136
5. The feature 'radius_worst' has a Mean Decrease in Gini of 0.078047
6. The feature 'concavity_mean' has a Mean Decrease in Gini of 0.062143
7. The feature 'area_mean' has a Mean Decrease in Gini of 0.056556
8. The feature 'radius_mean' has a Mean Decrease in Gini of 0.054567
9. The feature 'perimeter_mean' has a Mean Decrease in Gini of 0.051745
10. The feature 'area_se' has a Mean Decrease in Gini of 0.043261
11. The feature 'concavity_worst' has a Mean Decrease in Gini of 0.038659
12. The feature 'compactness_worst' has a Mean Decrease in Gini of 0.020329
13. The feature 'compactness_mean' has a Mean Decrease in Gini of 0.016163
14. The feature 'texture_worst' has a Mean Decrease in Gini of 0.015542
15. The feature 'radius_se' has a Mean Decrease in Gini of 0.014521
16. The feature 'perimeter_se' has a Mean Decrease in Gini of 0.013084
17. The feature 'texture_mean' has a Mean Decrease in Gini of 0.012203
18. The feature 'symmetry_worst' has a Mean Decrease in Gini of 0.011750
19. The feature 'smoothness_worst' has a Mean Decrease in Gini of 0.009380
20. The feature 'concavity_se' has a Mean Decrease in Gini of 0.009105
21. The feature 'concave_points_se' has a Mean Decrease in Gini of 0.004449
22. The feature 'smoothness_mean' has a Mean Decrease in Gini of 0.003982
23. The feature 'fractal_dimension_se' has a Mean Decrease in Gini of 0.003953
24. The feature 'fractal_dimension_worst' has a Mean Decrease in Gini of 0.002672
25. The feature 'fractal_dimension_mean' has a Mean Decrease in Gini of 0.002210
26. The feature 'smoothness_se' has a Mean Decrease in Gini of 0.002169
27. The feature 'symmetry_mean' has a Mean Decrease in Gini of 0.002051
28. The feature 'texture_se' has a Mean Decrease in Gini of 0.002043
29. The feature 'symmetry_se' has a Mean Decrease in Gini of 0.001937
30. The feature 'compactness_se' has a Mean Decrease in Gini of 0.000987

We can see here that our top five variables are area_worst, perimeter_worst, concave_points_worst, concave_points_mean, and radius_worst.

This gives us great insight for further analyses like feature engineering, although I won't get into that topic during this tutorial. It can also give practitioners insight into the model by showing which variables played an important part in its predictions. In the case of our data set, knowing this information would help practitioners in the medical field focus on the top variables and their relationship with breast cancer.

    def variable_importance_plot(importance, indices):
        """
        Plots a bar chart detailing variable importance for CART model
        NOTE: feature_space list was created because the bar chart
        was transposed and index would be in incorrect order.

        importance: Array returned from feature_importances_ for CART
                    models organized in descending order
        indices:    Organized index of dataframe from largest to smallest
                    based on feature_importances_

        Returns variable importance plot in descending order
        """
        index = np.arange(len(names_index))
        importance_desc = sorted(importance)
        feature_space = []
        for i in range(29, -1, -1):
            feature_space.append(names_index[indices[i]])

        fig, ax = plt.subplots(figsize=(10, 10))

        plt.title('Feature importances for Random Forest Model'
                  '\nBreast Cancer (Diagnostic)')
        plt.barh(index, importance_desc, align='center', color='#875FDB')
        plt.yticks(index, feature_space)

        plt.ylim(-1, 30)
        plt.xlim(0, max(importance_desc))
        plt.xlabel('Mean Decrease in Gini')
        plt.show()

    variable_importance_plot(importances_rf, indices_rf)

The visual above helps drive home the point, since you can clearly see the difference in the importance of variables for the ensemble method. Certain cutoff points can be chosen to reduce the number of features included, which can help the accuracy of the model, since we'll be removing what is considered noise within our feature space.
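A threshold-based cutoff can be sketched in a couple of lines; the importances below are hypothetical stand-ins for fit_rf.feature_importances_, and the 0.01 cutoff is an arbitrary example value:

```python
import numpy as np

# Hypothetical importances (stand-ins for fit_rf.feature_importances_)
names = ['area_worst', 'perimeter_worst', 'texture_se', 'symmetry_se']
importances = np.array([0.13, 0.12, 0.002, 0.0019])

# Keep only features above a chosen cutoff, discarding "noise" features
cutoff = 0.01
kept = [n for n, imp in zip(names, importances) if imp > cutoff]
print(kept)  # ['area_worst', 'perimeter_worst']
```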

Cross Validation

Cross validation is a powerful tool for estimating the predictive power of your model, and it performs better than a single conventional training and test split. Using cross validation, we can create multiple training and test sets and average the scores to give us a less biased metric.

In this case, we will create 10 sets within our data set that calculate the estimations we have done already, then average the prediction error to give us a more accurate representation of our model's prediction power. The model's performance can vary significantly when utilizing different training and test sets.

For a more thorough explanation of cross validation, I recommend reading "An Introduction to Statistical Learning with Applications in R," specifically chapter 5.1.

K-Fold Cross Validation

Here we are employing K-fold cross validation; more specifically, 10 folds. We are creating 10 subsets of our data on which to employ the training and test set methodology; then we will average the accuracy for all folds to give us our estimation.
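The folding mechanics can be seen on a toy array: sklearn's KFold partitions the data so every observation is held out exactly once.

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(20).reshape(10, 2)  # ten toy observations

kf = KFold(n_splits=5)
held_out = []
for train_idx, test_idx in kf.split(X):
    # Each fold trains on 8 observations and tests on the other 2
    held_out.extend(test_idx)

# Every observation is held out exactly once across the folds
print(sorted(held_out))
```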

Within a random forest context, if your data set is significantly large, you can choose to skip cross validation and use the OOB error rate as an unbiased metric to reduce computational costs. But for the purposes of this tutorial, I included cross validation to show the different accuracy metrics available.

    def cross_val_metrics(fit, training_set, class_set, print_results=True):
        """
        Helps automate cross validation processes while including
        option to print metrics or store them in a variable

        fit:           Fitted model
        training_set:  Data frame containing 80% of original data frame
        class_set:     Data frame containing the respective target values
                       for the training_set
        print_results: Boolean; if True prints the metrics, else returns them

        Returns
        scores.mean():    Float representing cross validation score
        scores.std() / 2: Float representing the standard error (derived
                          from cross validation score's standard deviation)
        """
        n = KFold(n_splits=10)
        scores = cross_val_score(fit,
                                 training_set,
                                 class_set,
                                 cv=n)
        if print_results:
            print("Accuracy: {0: 0.3f} (+/- {1: 0.3f})"
                  .format(scores.mean(), scores.std() / 2))
        else:
            return scores.mean(), scores.std() / 2

    cross_val_metrics(fit_rf,
                      training_set,
                      class_set,
                      print_results=True)

Accuracy: 0.947 (+/- 0.019)

Test Set Metrics

Using the test set that was created earlier, let's examine another metric for the evaluation of our model. You'll recall that we didn't touch the test set until now — after we had completed hyperparameter optimization — to avoid the problem of data leakage.

Here, I have created a confusion matrix comparing our model's predictions on the test set against the actual diagnoses:

    predictions_rf = fit_rf.predict(test_set)

    test_crosstb = pd.crosstab(index=test_class_set,
                               columns=predictions_rf)

    # More human readable
    test_crosstb = test_crosstb.rename(columns={0: 'Benign', 1: 'Malignant'})
    test_crosstb.index = ['Benign', 'Malignant']
    test_crosstb.columns.name = 'n = 114'
    test_crosstb


    accuracy_rf = fit_rf.score(test_set, test_class_set)

    print("Here is our mean accuracy on the test set:\n {0:.3f}"
          .format(accuracy_rf))
Here is our mean accuracy on the test set: 0.965

    # Here we calculate the test error rate!
    test_error_rate_rf = 1 - accuracy_rf
    print("The test error rate for our model is:\n {0: .4f}"
          .format(test_error_rate_rf))

The test error rate for our model is: 0.0351

As you can see, we got a very similar error rate for our test set to the one we got with our OOB, which is a good sign for our model.

ROC Curve Metrics

A receiver operating characteristic (ROC) curve calculates the false positive rates and true positive rates across different thresholds. Let's graph these calculations.

If our curve is located in the top left corner of the plot, that indicates an ideal model: a false positive rate of 0 and a true positive rate of 1. On the other hand, a ROC curve that follows the 45-degree diagonal indicates a model that is essentially guessing randomly.

We will also calculate the area under the curve (AUC). The AUC summarizes the model's ability to discriminate between patients with cancer and those without it: a value closer to 1 means that, for a randomly chosen pair of patients with and without the disease, the model will usually rank the diseased patient's risk higher.
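That ranking interpretation of AUC can be checked by brute force on a tiny hypothetical example, comparing every positive-negative pair:

```python
import numpy as np
from itertools import product

# Tiny hypothetical example: true labels and predicted probabilities
y_true = np.array([0, 0, 1, 1])
y_prob = np.array([0.1, 0.4, 0.35, 0.8])

# AUC equals the probability a random positive is scored above a random negative
pos = y_prob[y_true == 1]
neg = y_prob[y_true == 0]
auc_est = np.mean([p > n for p, n in product(pos, neg)])
print(auc_est)  # 0.75 -- 3 of the 4 positive/negative pairs are ranked correctly
```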

    predictions_prob = fit_rf.predict_proba(test_set)[:, 1]

    fpr2, tpr2, _ = roc_curve(test_class_set,
                              predictions_prob)

    auc_rf = auc(fpr2, tpr2)

    def plot_roc_curve(fpr, tpr, auc, mod, xlim=None, ylim=None):
        """
        Creates ROC curve for respective model given selected parameters.
        Optional x and y limits to zoom into graph

        fpr:  Array returned from sklearn.metrics.roc_curve for increasing
              false positive rates
        tpr:  Array returned from sklearn.metrics.roc_curve for increasing
              true positive rates
        auc:  Float returned from sklearn.metrics.auc (Area under Curve)
        mod:  String representation of appropriate model, can only contain
              the following: ['knn', 'rf', 'nn']
        xlim: Set upper and lower x-limits
        ylim: Set upper and lower y-limits

        Returns plot of Receiver Operating Characteristic curve for the
        specified model, allowing the user to specify x and y limits.
        """
        mod_list = ['knn', 'rf', 'nn']
        method = [('Kth Nearest Neighbor', 'deeppink'),
                  ('Random Forest', 'red'),
                  ('Neural Network', 'purple')]

        plot_title = ''
        color_value = ''
        for i in range(0, 3):
            if mod_list[i] == mod:
                plot_title = method[i][0]
                color_value = method[i][1]

        fig, ax = plt.subplots(figsize=(10, 10))

        plt.plot(fpr, tpr, color=color_value, linewidth=2)
        plt.title('ROC Curve For {0} (AUC = {1: 0.3f})'
                  '\nBreast Cancer Diagnostic'
                  .format(plot_title, auc))

        plt.plot([0, 1], [0, 1], 'k--', lw=2)  # Add diagonal line
        plt.plot([0, 0], [1, 0], 'k--', lw=2)  # Trace the ideal (top-left) path
        plt.plot([1, 0], [1, 1], 'k--', lw=2)
        if xlim is not None:
            plt.xlim(*xlim)
        if ylim is not None:
            plt.ylim(*ylim)
        plt.xlabel('False Positive Rate')
        plt.ylabel('True Positive Rate')
        plt.show()

    plot_roc_curve(fpr2, tpr2, auc_rf, 'rf',
                   xlim=(-0.01, 1.05),
                   ylim=(0.001, 1.05))

Our model performed exceptionally well — it has an AUC over .90. Now, let's take a zoomed-in view to showcase the closeness of our ROC curve to the ideal ROC curve.

plot_roc_curve(fpr2, tpr2, auc_rf, 'rf', 
               xlim=(-0.01, 0.2), 
               ylim=(0.85, 1.01))

Classification Report

I'm going to go ahead and create a classification report for our model, which is available through sklearn.metrics. This report provides many important classification metrics including:

  • Precision: Also called the positive predictive value, this is the number of true positives divided by the number of true positives plus false positives: $tp / (tp + fp)$.
  • Recall: Also known as sensitivity, this is the number of true positives divided by the number of true positives plus false negatives: $tp / (tp + fn)$.
  • f1-score: The weighted harmonic mean of precision and recall, where the best f1-score is 1 and the worst is 0, as defined by sklearn's documentation.
  • support: The number of instances of each target class.
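These definitions reduce to simple arithmetic on confusion-matrix counts; the counts below are hypothetical, purely to show the formulas at work:

```python
# Hypothetical confusion-matrix counts for a binary classifier
tp, fp, fn = 40, 2, 4

precision = tp / (tp + fp)  # tp / (tp + fp)
recall = tp / (tp + fn)     # tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean

print(round(precision, 3))  # 0.952
print(round(recall, 3))     # 0.909
print(round(f1, 3))         # 0.93
```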

Across the board, we can see that our model provided great insight into classifying patients based on FNA scans. Other important metrics to consider would be the false positive rate, since within this context it would be bad for the model to tell someone that they are cancer free when in reality they have cancer.

    def print_class_report(predictions, alg_name):
        """
        Helps automate the report generated by the sklearn package.
        Useful for multiple model comparison

        predictions: The predictions made by the algorithm used
        alg_name:    String containing the name of the algorithm used

        Returns classification report generated from sklearn.
        """
        print('Classification Report for {0}:'.format(alg_name))
        print(classification_report(predictions,
                                    test_class_set,
                                    target_names=dx))

    class_report = print_class_report(predictions_rf, 'Random Forest')

Here's our classification report for our random forest:


Metrics for Random Forest

Here, I've accumulated the various metrics we have used throughout this tutorial in a simple table to showcase the power and effectiveness of random forest modeling.


We've now gone through a number of metrics to assess the capabilities of our random forest, but there's still much that can be done using background information from the data set. Feature engineering would be a powerful tool for extracting information and moving forward into the research phase, and would help define key metrics to utilize when optimizing model parameters.

There have been advancements with image classification in the past decade that make it possible to use images instead of extracted features from those images, but this data set is a great resource for making use of machine learning processes and concepts. If you have any suggestions, recommendations, or corrections please reach out to me.

About the Author

Raul Eulogio is a data analyst currently working for the Hospice of Santa Barbara. He's the co-founder of inertia7, a social platform for sharing open-source data science projects, and an active member of the Data Science at University of California, Santa Barbara organization. Raul enjoys staying active and helping foster data science in his community. He studied statistics as an undergraduate at UCSB and uses this as a foundation for his career in data science. Raul's projects can be found on inertia7 and on his GitHub account.