2.3.3.1. Correlation Matrix

The correlation matrix is widely used to identify independent variables in the input space of multivariable problems [57]. This matrix is helpful for exploring the linear correlation between each pair of variables in the input space, from which various correlation structures in a set of random variables can be deduced and highlighted [58]. Basically, the correlation matrix is based on the Pearson linear correlation coefficient of two real-valued random variables U and V, defined as follows [58]:

r_{U,V} = \frac{\sum_{i=1}^{n} (U_i - \bar{U})(V_i - \bar{V})}{\sqrt{\sum_{i=1}^{n} (U_i - \bar{U})^2 \, \sum_{i=1}^{n} (V_i - \bar{V})^2}}    (6)

where n is the number of values in U and V, and \bar{U} and \bar{V} denote the average values of U and V, respectively.
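As an illustrative sketch (not part of the original study), Eq. (6) and the resulting correlation matrix of a set of input variables could be computed as follows; the function and variable names are assumptions introduced here for clarity.

```python
import numpy as np

def pearson_r(u, v):
    """Pearson linear correlation coefficient of two samples U and V (Eq. 6)."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    du, dv = u - u.mean(), v - v.mean()
    return np.sum(du * dv) / np.sqrt(np.sum(du ** 2) * np.sum(dv ** 2))

def correlation_matrix(X):
    """Pairwise Pearson correlations between the columns (input variables) of X."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    return np.array([[pearson_r(X[:, i], X[:, j]) for j in range(k)] for i in range(k)])
```

For a feature matrix X of shape (n_samples, n_variables), correlation_matrix(X) agrees with np.corrcoef(X, rowvar=False); coefficient magnitudes close to 1 flag pairs of inputs that are far from independent.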
2.3.3.2. Root Mean Square Error, Mean Absolute Error and Accuracy

The Root Mean Square Error (RMSE) measures the squared differences between the predicted and actual output values, whereas the Mean Absolute Error (MAE) measures the average absolute differences between the actual and predicted values [59]. Both are well-known methods for assessing, with an acceptable margin of error, predictions of real-world problems [60]. Accuracy is a well-known metric used to evaluate classification models [61]. RMSE, MAE and accuracy can be calculated using the following equations [61], [62]:

RMSE = \sqrt{\sum_{i=1}^{n} (t_0 - t_p)^2 / n}    (7)

MAE = \frac{1}{n} \sum_{i=1}^{n} \left| t_0 - t_p \right|    (8)

Accuracy = \frac{N_{correct}}{N_{total}}    (9)

where n is the number of input data points, t_0 denotes the actual values, t_p denotes the predicted values, N_{correct} is the number of samples correctly predicted, and N_{total} is the total number of samples.
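As a minimal sketch of Eqs. (7)-(9), with illustrative function names that are not taken from the paper, the three metrics can be computed as:

```python
import numpy as np

def rmse(t_actual, t_predicted):
    """Root Mean Square Error, Eq. (7)."""
    t_actual = np.asarray(t_actual, dtype=float)
    t_predicted = np.asarray(t_predicted, dtype=float)
    return np.sqrt(np.mean((t_actual - t_predicted) ** 2))

def mae(t_actual, t_predicted):
    """Mean Absolute Error, Eq. (8)."""
    t_actual = np.asarray(t_actual, dtype=float)
    t_predicted = np.asarray(t_predicted, dtype=float)
    return np.mean(np.abs(t_actual - t_predicted))

def accuracy(y_actual, y_predicted):
    """Accuracy, Eq. (9): correctly predicted samples over the total number of samples."""
    y_actual = np.asarray(y_actual)
    y_predicted = np.asarray(y_predicted)
    return np.count_nonzero(y_actual == y_predicted) / y_actual.size
```

Because the errors are squared before averaging, RMSE penalizes large deviations more heavily than MAE, which is why the two are usually reported together.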

3. Results and Discussion

3.1. Prediction Capability of AI Models

In this section, the prediction capability of the EDT Bagged and SVM models is evaluated. First, the accuracy of the constructed AI models was investigated via a confusion matrix. A random dataset was selected, and the predicted results of the two proposed AI models for the training dataset are shown in Figure 3. It can be seen that, in this case, both AI models exhibited good performance. The correct responses for the training dataset were 96.6% and 98.5%, whereas the false responses were 3.4% and 1.5%, for the SVM and EDT Bagged algorithms, respectively. For the testing part (Figure 4), the corresponding accuracy decreased slightly: the correct responses were 76.1% and 80%, whereas the false responses were 23.9% and 20%, for the SVM and EDT Bagged algorithms, respectively. More precisely, for SVM the true positive rates for the training part were 96.2%, 98.0%, 100%, 97.7% and 100% for alternatives 1, 2, 3, 4 and 5, respectively. For the EDT Bagged algorithm, the true positive rates were 98%, 100%, 100%, 100% and 100%, showing the excellent prediction capability of the SVM over the EDT Bagged model. As regards the testing part, the true positive rate of SVM is confirmed to be better than that of EDT Bagged (83.5%, 72.7%, 33.3%, 46.2% and 100%, compared with 84.9%, 56.9%, 66.7%, 31.6% and 100% for the five alternatives, respectively). The false negative rates for both the training and testing parts of the EDT Bagged and SVM models are lower than the corresponding true positive rates. In the case of alternative 3 (travel decision = shift to walking), the false negative rates were low (e.g., 20% for EDT Bagged and 10% for SVM). This might be influenced by people's hesitation to walk in hot weather and on low-quality infrastructure, especially for work trips with long travel distances. Given the overall accuracy for the testing part (e.g., 76.1% for EDT Bagged and 80.0% for SVM), it can be concluded that both AI models exhibited a promising ability to predict the travel decisions of transport users.
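The class-wise figures quoted above (true positive and false negative rates per alternative, plus overall accuracy) are the quantities read off a confusion matrix. A minimal sketch of how they can be derived, assuming scikit-learn is available (the paper does not state which toolbox was used), is:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def class_wise_rates(y_true, y_pred, labels):
    """Confusion matrix with per-class true positive / false negative rates and overall accuracy."""
    cm = confusion_matrix(y_true, y_pred, labels=labels)
    support = cm.sum(axis=1)               # number of samples per true class (alternative)
    tpr = np.diag(cm) / support            # share of each class predicted correctly
    fnr = 1.0 - tpr                        # share of each class predicted as another class
    overall_accuracy = np.trace(cm) / cm.sum()
    return cm, tpr, fnr, overall_accuracy
```

Called with labels=[1, 2, 3, 4, 5] for the five travel-decision alternatives, tpr and fnr correspond to the per-alternative rates discussed above, and overall_accuracy to the correct-response percentages reported for the training and testing parts.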










