among different tree levels will show the relative importance of these attributes in the
process of developing a solution to the new problem. This tree T represents the stored
simulation cases in the case-base and is defined as
T = {N, E}, where
N is the set of nodes (attributes),
n is the number of nodes in the tree,
E is the set of edges connecting nodes and correlating attributes,
l is the level of the node, where
l = 0: root node, l = 1: category of the case, l = 2: path number,
l = 3: # doctors, l = 4: # nurses, l = 5: # lab technicians, l = 6: # staff, and
l = 7: case number.
For each node in N, degree = the number of directly connected nodes in levels l - 1 and
l + 1.
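A minimal sketch of how such a case-base tree T = {N, E} could be represented in code is given below (Python); the CaseNode class, the level names, and the example attribute values are illustrative assumptions, not the authors' implementation.

# Sketch of the case-base tree T = {N, E}; class name, level names, and
# sample values are assumptions for illustration only.
class CaseNode:
    def __init__(self, value, level, parent=None):
        self.value = value        # attribute value stored at this node
        self.level = level        # l = 0 (root) .. l = 7 (case number)
        self.parent = parent      # connected node in level l - 1
        self.children = []        # connected nodes in level l + 1
        if parent is not None:
            parent.children.append(self)

    @property
    def degree(self):
        # degree = number of directly connected nodes in levels l - 1 and l + 1
        return len(self.children) + (0 if self.parent is None else 1)

# Building one root-to-leaf branch with made-up values
root     = CaseNode("root", 0)
category = CaseNode("Crowding", 1, parent=root)     # l = 1: category
path     = CaseNode(2, 2, parent=category)          # l = 2: path number
doctors  = CaseNode(3, 3, parent=path)              # l = 3: # doctors
nurses   = CaseNode(5, 4, parent=doctors)           # l = 4: # nurses
labs     = CaseNode(2, 5, parent=nurses)            # l = 5: # lab technicians
staff    = CaseNode(4, 6, parent=labs)              # l = 6: # staff
case     = CaseNode("Case 17", 7, parent=staff)     # l = 7: case number

print(root.degree, doctors.degree, case.degree)     # 1 2 1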
Table 3. The similarity (distance) matrix between different paths
Similarity (distance) matrix
         Path 1   Path 2   Path 3   Path 4
Path 1     0        10       20       10
Path 2              0        10       20
Path 3                       0        10
Path 4                                0
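The path-to-path distances of Table 3 can be kept in a small symmetric lookup structure; the sketch below (the dictionary layout and helper names are assumptions) mirrors the upper triangle and returns the stored path closest to a query path.

# Symmetric lookup over the Table 3 distances; layout and names are assumed.
PATH_DISTANCE = {
    (1, 1): 0, (1, 2): 10, (1, 3): 20, (1, 4): 10,
    (2, 2): 0, (2, 3): 10, (2, 4): 20,
    (3, 3): 0, (3, 4): 10,
    (4, 4): 0,
}

def path_distance(p, q):
    # The matrix is symmetric, so look up (p, q) or (q, p)
    return PATH_DISTANCE.get((p, q), PATH_DISTANCE.get((q, p)))

def closest_path(query, candidates=(1, 2, 3, 4)):
    # Stored path with the smallest distance to the query path
    return min(candidates, key=lambda p: path_distance(query, p))

print(path_distance(3, 1))   # 20
print(closest_path(2))       # 2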
The decision tree includes three types of nodes:
(a) A root node acting as a pointer that references all sub-nodes in the first level
(the starting node of the tree)
(b) Intermediate nodes: all nodes in the tree at levels 0 < l < 7. Each of these nodes
contains the set Cl of all its child nodes in the directly lower level, connected to it by edges.
(c) Leaf nodes: all nodes in the tree with degree = 1 and l = 7. Each leaf node
expresses a specific set of attributes relating to its parents. The tree of the
developed case-base is shown in Figure 4.
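Given these node roles, the full attribute description of a stored case can be recovered by walking from its leaf node (l = 7) back up the parent chain to the root. A short helper, building on the hypothetical CaseNode sketch above:

def case_attributes(leaf):
    # Walk from a leaf (l = 7) up to, but not including, the root pointer,
    # collecting one attribute value per level in root-to-leaf order.
    values = []
    node = leaf
    while node is not None and node.level > 0:
        values.append((node.level, node.value))
        node = node.parent
    return list(reversed(values))

# With the branch built earlier:
# case_attributes(case)
# -> [(1, 'Crowding'), (2, 2), (3, 3), (4, 5), (5, 2), (6, 4), (7, 'Case 17')]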
For the stored simulation cases, let each case Ax be described as a set of different
attributes composing a distinctive case {a1, a2, ..., al-1}. Also, for each attribute ai there is a
set Vi that contains all possible values of this attribute {vi1, vi2, … vir}. For example, the
first attribute a1 corresponding to the category of the simulation problem has V1 =
{Optimization, Crowding, New design/methodology}.
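As a concrete illustration, a stored case Ax and the value domains Vi could be written down as plain dictionaries; only the category domain V1 comes from the text, while the remaining attribute names and value ranges below are placeholders.

# Illustrative attribute domains V_i and one stored case A_x.
# Only the category domain (V1) is taken from the text; the rest are placeholders.
VALUE_DOMAINS = {
    "category": {"Optimization", "Crowding", "New design/methodology"},  # V1
    "path_number": {1, 2, 3, 4},       # assumed domain
    "doctors": set(range(1, 11)),      # assumed domain
    "nurses": set(range(1, 21)),       # assumed domain
}

case_A1 = {
    "category": "Crowding",
    "path_number": 2,
    "doctors": 3,
    "nurses": 5,
}

# Every attribute value of a case must belong to its domain V_i
assert all(case_A1[a] in VALUE_DOMAINS[a] for a in case_A1)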
The induction tree approach will be ready to use as soon as the decision tree is
developed. The attributes of each new case will compose a new set G = {g1, g2, ... gl-