To address these problems, the usual solution is to generalize (summarize) the features and then generalize the map. Generalization methods fall into two broad categories, model-based and cartography-based [1, 2]. Cartography-based approaches are concerned with the visualization of spatial data, whereas model-based methods seek to reduce data density and volume and to simplify spatial detail according to the required level of detail. Model-based generalization also covers the generalization of topological relationships, feature geometry, and semantic information [3]. Various methods have been developed for each of these purposes, including the Douglas-Peucker and Visvalingam algorithms. The two algorithms simplify feature geometry in different ways, but their common goal is to select and retain a small number of key points of the line. In other words, the deleted points are ignored entirely and have no effect on the final simplified feature; their presence or absence does not change the result of these algorithms. These points, however, are part of the original feature and may be significant, and ignoring them reduces the geometric similarity between the simplified feature and the original one [4].

A variety of approaches have been developed in this field. The earliest methods relied neither on mathematical relationships nor on the real neighbourhood of the vertices; they simply selected and removed vertices. For example, the random-points approach divides the feature into groups of consecutive vertices and randomly selects one vertex from each group [5, 6], while the n-th point approach selects every n-th vertex from each group; the selected vertices then form the simplified geometry [7].
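
As a minimal sketch of this early family of point-selection methods, the snippet below keeps every n-th vertex of a polyline while always retaining the endpoints; the function name and the endpoint-handling rule are illustrative choices, not taken from the cited works.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def nth_point_simplify(points: List[Point], n: int) -> List[Point]:
    """Keep every n-th vertex of a polyline, plus the first and last vertex."""
    if n < 1:
        raise ValueError("n must be >= 1")
    if len(points) <= 2:
        return list(points)
    kept = points[::n]              # every n-th vertex, starting from the first
    if kept[-1] != points[-1]:      # make sure the endpoint is preserved
        kept.append(points[-1])
    return kept

# Example: keep every 3rd vertex of a 7-point line
line = [(0, 0), (1, 2), (2, 1), (3, 3), (4, 2), (5, 4), (6, 3)]
print(nth_point_simplify(line, 3))  # [(0, 0), (3, 3), (6, 3)]
```
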
Another category of algorithms examines the relationship between two or three consecutive vertices locally, in terms of distance [8]. The Reumann-Witkam algorithm starts from the first point and defines a direction by connecting it to the next point. Along that direction, a band of the desired width is laid out. Checking the points sequentially, the point just before the first point that falls outside the band is retained and the others are discarded. Processing then restarts from the retained point and continues until the last point of the original feature geometry is reached [9].
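
The band test described above can be sketched as follows; the tolerance parameter, helper names, and restart rule are an interpretation of the description rather than a reference implementation of [9].

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def _perp_distance(p: Point, a: Point, b: Point) -> float:
    """Perpendicular distance from p to the infinite line through a and b."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    length = math.hypot(dx, dy)
    if length == 0.0:
        return math.hypot(p[0] - a[0], p[1] - a[1])
    return abs(dy * (p[0] - a[0]) - dx * (p[1] - a[1])) / length

def reumann_witkam(points: List[Point], tolerance: float) -> List[Point]:
    """Strip-based simplification: keep the last point inside each band."""
    if len(points) < 3:
        return list(points)
    kept = [points[0]]
    anchor = 0                      # band is defined by points[anchor], points[anchor + 1]
    i = anchor + 2
    while i < len(points):
        if _perp_distance(points[i], points[anchor], points[anchor + 1]) > tolerance:
            kept.append(points[i - 1])   # keep the point just before leaving the band
            anchor = i - 1               # restart the band at the retained point
            i = anchor + 2
        else:
            i += 1
    kept.append(points[-1])
    return kept
```
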
The sleeve-fitting algorithm is another example. It operates in much the same way as Reumann-Witkam; the difference is that the first and last points of the current section define the band at each step, and whenever a point falls outside the band, the last point of the previous step is retained while the remaining points, except the first, are discarded [10]. Opheim extended the Reumann-Witkam algorithm by specifying minimum and maximum bandwidth values, together with the length over which the minimum-width band extends [11]. Lang likewise adjusted the bandwidth values based on the perpendicular distance between the line connecting two points of the original feature geometry and the points lying between them [12].
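
One possible reading of the sleeve-fitting step is sketched below: the band is re-derived from the first and last points of the current section, and the section grows until some intermediate point leaves the band. The names and the restart rule are interpretive assumptions, not the published algorithm of [10].

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def _perp_distance(p: Point, a: Point, b: Point) -> float:
    """Perpendicular distance from p to the line through a and b."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    length = math.hypot(dx, dy)
    if length == 0.0:
        return math.hypot(p[0] - a[0], p[1] - a[1])
    return abs(dy * (p[0] - a[0]) - dx * (p[1] - a[1])) / length

def sleeve_fit(points: List[Point], tolerance: float) -> List[Point]:
    """Grow a sleeve anchored at the last kept vertex; split when it no longer fits."""
    if len(points) < 3:
        return list(points)
    kept = [points[0]]
    anchor = 0
    end = 2
    while end < len(points):
        # sleeve is defined by the anchor and the current candidate end point
        fits = all(
            _perp_distance(points[k], points[anchor], points[end]) <= tolerance
            for k in range(anchor + 1, end)
        )
        if fits:
            end += 1                       # sleeve still contains all intermediate points
        else:
            kept.append(points[end - 1])   # keep the last end point that still fitted
            anchor = end - 1
            end = anchor + 2
    kept.append(points[-1])
    return kept
```
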

Douglas and Peucker developed a method for simplifying feature geometry in 1973, and it has since become one of the most widely used algorithms; Hershberger and Snoeyink improved it 19 years later [13]. This algorithm differs from similar methods in that its search area is global. The Douglas-Peucker algorithm operates on the basis of perpendicular distance and does not take the area of the shape into account.
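
A minimal recursive sketch of the Douglas-Peucker idea is shown below (the parameter and helper names are illustrative): the vertex farthest from the first-last baseline is kept whenever its perpendicular distance exceeds the tolerance, and both halves are then processed recursively.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def _perp_distance(p: Point, a: Point, b: Point) -> float:
    """Perpendicular distance from p to the baseline through a and b."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    length = math.hypot(dx, dy)
    if length == 0.0:
        return math.hypot(p[0] - a[0], p[1] - a[1])
    return abs(dy * (p[0] - a[0]) - dx * (p[1] - a[1])) / length

def douglas_peucker(points: List[Point], tolerance: float) -> List[Point]:
    """Keep the farthest vertex from the first-last baseline if it exceeds tolerance."""
    if len(points) < 3:
        return list(points)
    # find the vertex with the maximum perpendicular distance to the baseline
    index, max_dist = 0, 0.0
    for i in range(1, len(points) - 1):
        d = _perp_distance(points[i], points[0], points[-1])
        if d > max_dist:
            index, max_dist = i, d
    if max_dist <= tolerance:
        return [points[0], points[-1]]      # the whole span lies within tolerance
    # otherwise split at the farthest vertex and recurse on both halves
    left = douglas_peucker(points[: index + 1], tolerance)
    right = douglas_peucker(points[index:], tolerance)
    return left[:-1] + right                # drop the duplicated split vertex
```

For very long lines, an iterative, stack-based variant of the same recursion is usually preferred to avoid deep call stacks.
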
To address this limitation, Visvalingam and Whyatt proposed a method that generalizes on the basis of effective area. In each iteration, the effective area of every vertex (the area of the triangle formed by the vertex and its two neighbours) is computed, and the vertex with the smallest effective area is removed. The process continues until the desired number of vertices is reached [14, 15].
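
The effective-area rule can be sketched as follows; for clarity this version recomputes every interior triangle area after each removal instead of maintaining a priority queue, and the function name and target-count parameter are illustrative.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def _triangle_area(a: Point, b: Point, c: Point) -> float:
    """Area of the triangle formed by a vertex (b) and its two neighbours (a, c)."""
    return abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

def visvalingam_whyatt(points: List[Point], target_count: int) -> List[Point]:
    """Repeatedly remove the vertex with the smallest effective area."""
    pts = list(points)
    while len(pts) > max(target_count, 2):
        # effective area of every interior vertex
        areas = [
            (_triangle_area(pts[i - 1], pts[i], pts[i + 1]), i)
            for i in range(1, len(pts) - 1)
        ]
        _, index = min(areas)       # vertex contributing the least visual change
        del pts[index]
    return pts
```

A production version would keep the areas in a priority queue and update only the two neighbours of each removed vertex, giving near O(n log n) behaviour instead of the quadratic recomputation above.
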

Yilang Shen et al. [16] first transferred the line into raster space using image-processing techniques, so that the pixels crossed by the line took the value 1 and all other pixels the value 0. They then formed polygons of the shortest possible length by connecting straight-line segments within the resulting raster channel (the pixels with value 1). In a novel approach, Tinghua Ai et al. [17] first performed a triangulation on the vertices of the original geometry; each vertex was then assigned a degree of importance based on the number of neighbouring triangles, which identified the important vertices. Türkay Gökgöz et al. likewise considered a buffer as an error band, with the aim that the output of the algorithm should remain within a tolerance of the original geometry, by
