6. References
[1] E. Oruklu, D. Pesty, J. Neveux, and J. Guebey, “Real-Time Traffic Sign Detection and Recognition
for In-Car Driver Assistance Systems,” pp. 976–979, 2012.
[2] J. Stallkamp, M. Schlipsing, J. Salmen, and C. Igel, “Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition,” Neural Networks, vol. 32, pp. 323–332, 2012.
[3] L. Ma, Y. Li, J. Li, C. Wang, R. Wang, and M. A. Chapman, “Mobile laser scanned point-clouds for
road object detection and extraction: A review,” Remote Sens., vol. 10, no. 10, pp. 1–33, 2018.
[4] J. M. Lillo-Castellano, I. Mora-Jiménez, C. Figuera-Pozuelo, and J. L. Rojo-Álvarez, “Traffic sign segmentation and classification using statistical learning methods,” Neurocomputing, vol. 153, pp. 286–299, 2015.
[5] A. Hechri and A. Mtibaa, “Lanes and Road Signs Recognition for Driver Assistance System,” vol. 8,
no. 6, 2011.
[6] S. Waite, FPGA-Based Traffic Sign Recognition for Advanced Driver Assistance Systems, vol. 03.
2013.
[7] P. Duygulu, K. Barnard, J. F. G. De Freitas, and D. A. Forsyth, “Object Recognition as Machine Translation: Learning a Lexicon for a Fixed Image Vocabulary,” pp. 97–98, 2002.
[8] H. Guan, J. Li, Y. Yu, C. Wang, M. Chapman, and B. Yang, “Using mobile laser scanning data for automated extraction of road markings,” ISPRS J. Photogramm. Remote Sens., vol. 87, pp. 93–107, 2014.
[9] R. Azad, B. Azad, and I. T. Kazerooni, “Optimized Method for Iranian Road Signs,” vol. 4, no. 1, pp. 19–26, 2014.
[10] T. Bui, O. Ghita, P. Whelan, and H. Trang, “A Robust Algorithm for Detection and Classification of
Traffic Signs in Video Data,” pp. 108–113, 2012.
[11] Z.-X. Cai and M.-Q. Gu, “Traffic sign recognition algorithm based on shape signature and dual-tree complex wavelet transform,” pp. 433–439, 2013.
[12] F. Zaklouta and B. Stanciulescu, “Real-time traffic sign recognition in three stages,” Rob. Auton.
Syst., vol. 62, no. 1, pp. 16–24, 2014.
[13] Z. Sun, H. Wang, W. Lau, G. Seet, and D. Wang, “Application of BW-ELM model on traffic sign recognition,” Neurocomputing, vol. 128, pp. 153–159, 2014.
[14] S. Wang, P. Zhang, Z. Dai, Y. Wang, R. Tao, and S. Sun, “Research and Practice of Traffic Lights and Traffic Signs Recognition System Based on Multicore of FPGA,” vol. 2013, no. February, pp. 61–64, 2013.
[15] L. L. Scans, “Photographs and Ground,” 2001.
[16] S. Luo, C. Wang, F. Pan, X. Xi, G. Li, and S. Nie, “Estimation of wetland vegetation height and leaf
area index using airborne laser scanning data,” Ecol. Indic., vol. 48, pp. 550–559, 2015.
[17] M. Nilsson et al., “A nationwide forest attribute map of Sweden predicted using airborne laser scanning data and field data from the National Forest Inventory,” Remote Sens. Environ., vol. 194, pp. 447–454, 2017.
[18] B. Guo, X. Huang, F. Zhang, and G. Sohn, “Classification of airborne laser scanning data using JointBoost,” ISPRS J. Photogramm. Remote Sens., vol. 100, pp. 71–83, 2015.
[19] A. Nurunnabi, G. West, and D. Belton, “Outlier detection and robust normal-curvature estimation in mobile laser scanning 3D point cloud data,” vol. 48, 2014.
[20] A. Serna and B. Marcotegui, “Detection, segmentation and classification of 3D urban objects using mathematical morphology and supervised learning,” ISPRS J. Photogramm. Remote Sens., vol. 93, pp. 243–255, 2014.
[21] R. Lindenbergh and P. Pietrzyk, “Change detection and deformation analysis using static and mobile
laser scanning,” 2015.
[22] B. Yang, Z. Dong, G. Zhao, and W. Dai, “Hierarchical extraction of urban objects from mobile laser
scanning data,” ISPRS J. Photogramm. Remote Sens., vol. 99, pp. 45–57, 2015.