Page 36 - Data Science Algorithms in a Week

Classification Using K Nearest Neighbors


            The task is to design a metric that, given the word frequencies for each document,
            accurately determines how semantically close those documents are. Such a metric could
            then be used by the k-NN algorithm to classify new, unknown documents based on the
            existing documents.
            Analysis:

            Suppose that we consider, for example, the N most frequent words in our corpus of
            documents. Then, for a given document, we count the frequency of each of these N words
            and put the counts in an N-dimensional vector that represents that document. Finally, we
            define the distance between two documents to be the distance (for example, Euclidean)
            between the two word-frequency vectors of those documents.
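This construction can be sketched in a few lines of Python. The vocabulary and the two documents below are purely illustrative, not taken from the book's data; relative frequencies are used so that documents of different lengths remain comparable:

```python
from collections import Counter
import math

def frequency_vector(text, vocabulary):
    """Relative frequency of each vocabulary word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = len(words) or 1
    return [counts[w] / total for w in vocabulary]

def euclidean(u, v):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Illustrative vocabulary and documents (assumptions, not the book's corpus):
vocab = ["god", "king", "proof", "theorem"]
doc_a = "the king and the god of israel"
doc_b = "the proof of the theorem follows"

va = frequency_vector(doc_a, vocab)
vb = frequency_vector(doc_b, vocab)
print(round(euclidean(va, vb), 2))  # prints 0.31
```

A larger distance indicates that the two documents use the chosen vocabulary very differently, which is exactly what the k-NN classifier will exploit.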

            The problem with this solution is that only certain words represent the actual content of the
            book, while others occur in the text merely because of grammar rules or their general
            basic meaning. For example, out of the 120 most frequent words in the Bible, each word is
            of a different importance; the author highlighted in bold the words that have an
            especially high frequency in the Bible and bear an important meaning:

             • lord - 1.00%
             • god - 0.56%
             • Israel - 0.32%
             • king - 0.32%
             • David - 0.13%
             • Jesus - 0.12%
            These words are less likely to be present in mathematical texts, for example, but more
            likely to be present in texts concerned with religion or Christianity.
            However, if we just look at the six most frequent words in the Bible, they happen to be less
            useful in detecting the meaning of the text:

             • the - 8.07%
             • and - 6.51%
             • of - 4.37%
             • to - 1.72%
             • that - 1.63%
             • in - 1.60%
            Texts concerned with mathematics, literature, or other subjects will have similar frequencies
            for these words. The differences may result mostly from the writing style.

            Therefore, to determine a similarity distance between two documents, we need to look only
            at the frequency counts of the important words. Some words are less important; these
            dimensions are better reduced, as their inclusion can lead to a misinterpretation of the
            results. Thus, what is left to do is to choose the words (dimensions) that are important
            for classifying the documents in our corpus. For this, consult exercise 1.6.
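As an illustration of how restricting the vocabulary to content words fits into k-NN classification, here is a hedged sketch with k = 1. The stop-word list, the two-document corpus, and the labels are invented for the example; choosing the important words properly is the subject of exercise 1.6:

```python
from collections import Counter

# Illustrative stop-word list (an assumption, not the book's selection):
STOP_WORDS = {"the", "and", "of", "to", "that", "in", "a", "is", "by"}

# Tiny labeled corpus, invented for this sketch:
corpus = {
    "psalm": ("praise the lord the god of israel", "religion"),
    "lemma": ("the proof of the lemma is by induction", "mathematics"),
}

def important_words(texts):
    """Vocabulary = all words seen across the texts, minus stop words."""
    vocab = set()
    for text in texts:
        vocab |= set(text.lower().split()) - STOP_WORDS
    return sorted(vocab)

def frequency_vector(text, vocab):
    """Relative frequencies of the vocabulary words in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = len(words) or 1
    return [counts[w] / total for w in vocab]

def classify_1nn(new_text):
    """Label a new document by its nearest neighbor in the corpus."""
    vocab = important_words([t for t, _ in corpus.values()] + [new_text])
    target = frequency_vector(new_text, vocab)
    def distance(item):
        vec = frequency_vector(item[0], vocab)
        return sum((a - b) ** 2 for a, b in zip(vec, target)) ** 0.5
    _, label = min(corpus.values(), key=distance)
    return label

print(classify_1nn("the lord is king"))  # prints religion
```

Because the stop words are excluded from the vocabulary, the distance is driven entirely by the content words, which is the behavior the discussion above argues for.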











