Without further knowledge, we would not be able to classify each row correctly.
Fortunately, there is one more question we can ask, and its answer classifies every row
correctly: for the row with the attribute water=cold, the swimming preference is no; for
the row with water=warm, it is yes.
To summarize: starting at the root node, we ask a question at every node and, based on
the answer, move down the tree until we reach a leaf node, where we find the class of
the data item corresponding to those answers.
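This top-down walk is easy to express in code. The following is a minimal sketch in
Python, assuming the tree is stored as nested dictionaries (the attribute to ask about,
plus one branch per possible answer) with leaves stored as plain class labels; this
representation and the classify helper are illustrative, not the book's own code:

def classify(tree, item):
    # Walk from the root, answering one question per node, until a leaf
    # (a plain class label) is reached.
    while isinstance(tree, dict):
        attribute = tree['attribute']    # the question asked at this node
        answer = item[attribute]         # the item's value for that attribute
        tree = tree['branches'][answer]  # follow the matching branch
    return tree

# An illustrative tree for the swimming preference example: ask about the
# swimming suit first, then about the water temperature when needed.
tree = {
    'attribute': 'swimming_suit',
    'branches': {
        'none': 'no',
        'good': {
            'attribute': 'water',
            'branches': {'cold': 'no', 'warm': 'yes'},
        },
    },
}

print(classify(tree, {'swimming_suit': 'good', 'water': 'warm'}))  # yes
print(classify(tree, {'swimming_suit': 'good', 'water': 'cold'}))  # no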
This is how we can use a ready-made decision tree to classify samples of the data. But it is
also important to know how to construct a decision tree from the data.
Which attribute should be asked about at which node? How does this choice affect the
construction of the decision tree? If we change the order of the attributes, can the
resulting decision tree classify samples better than another tree?
Information theory
Information theory studies the quantification, storage, and communication of
information. Here we introduce the concepts of information entropy and information gain,
which are used to construct a decision tree with the ID3 algorithm.
Information entropy
The information entropy of the given data measures the least amount of information
necessary to represent a data item from that data. Its units are familiar ones: bits,
bytes, kilobytes, and so on. The lower the information entropy, the more regular the
data is and the more patterns occur in it, and thus the less information is needed to
represent it. That is why compression tools can take large text files and compress them
to a much smaller size: words and word expressions keep recurring, forming patterns.
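To make this concrete, here is a minimal sketch that computes the entropy of a list of
class labels using the standard Shannon formula E = -sum(p_i * log2(p_i)) over the class
probabilities p_i, which is the measure the ID3 algorithm builds on; the entropy helper
name is ours, for illustration:

import math
from collections import Counter

def entropy(labels):
    # Entropy in bits of a list of class labels, e.g. ['no', 'no', 'yes'].
    total = len(labels)
    counts = Counter(labels)
    # Sum -p * log2(p) over the probability p of each class.
    return -sum((count / total) * math.log2(count / total)
                for count in counts.values())

# Perfectly regular data needs zero bits; a 50/50 split needs a full bit.
print(entropy(['no', 'no', 'no']))   # 0.0
print(entropy(['no', 'yes']))        # 1.0
print(entropy(['no', 'no', 'yes']))  # ~0.918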