Page 87 - Data Science Algorithms in a Week

4

Random Forest
A random forest is a set of random decision trees (similar to those described in the previous chapter), each grown on a random subset of the data. A random forest classifies a feature as belonging to the class voted for by the majority of its random decision trees. A random forest tends to classify a feature more accurately than a single decision tree because combining many trees decreases the bias and variance of the classifier.
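The majority-vote step can be sketched in a few lines of Python. This is a minimal illustration, not the chapter's implementation; the function name `forest_classify` and the sample votes are hypothetical:

```python
from collections import Counter

def forest_classify(tree_votes):
    """Classify one data item by the majority vote of the individual trees.

    tree_votes: a list of class labels, one predicted by each random
    decision tree in the forest for the same data item.
    """
    # Counter.most_common(1) returns [(label, count)] for the top label.
    return Counter(tree_votes).most_common(1)[0][0]

# Hypothetical votes from a forest of five trees for one data item:
votes = ["swim", "no swim", "swim", "swim", "no swim"]
print(forest_classify(votes))  # -> swim
```

Three of the five trees vote for the class swim, so the forest classifies the item as swim even though two individual trees disagree.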

In this chapter, you will learn:

    - The tree bagging (bootstrap aggregating) technique used in random forest
      construction, which can also be applied to other data science algorithms
      and methods to reduce bias and variance and thus improve accuracy
    - In the example Swim preference, how to construct a random forest and
      classify a data item using the constructed random forest
    - How to implement an algorithm in Python that constructs a random forest
    - In the example Playing chess, the differences in how the Naive Bayes,
      decision tree, and random forest algorithms analyze a problem
    - In the example Going shopping, how the random forest algorithm can
      overcome the shortcomings of the decision tree algorithm and thus
      outperform it
    - In the example Going shopping, how a random forest can express the level
      of confidence in its classification of a feature
    - Through exercises, how decreasing the variance of a classifier can yield
      more accurate results