Di Zhang, De Xu, Rui Song, Chaoqun Wang, and Yinchuan Wang


[1] R.A. Newcombe, S.J. Lovegrove, and A.J. Davison, DTAM: Dense tracking and mapping in real-time, Proc. 2011 International Conf. on Computer Vision, Barcelona, 2011, 2320–2327.
[2] J. Engel, T. Schöps, and D. Cremers, LSD-SLAM: Large-scale direct monocular SLAM, Proc. European Conf. on Computer Vision, Zurich, Switzerland, 2014, 834–849.
[3] J. Engel, V. Koltun, and D. Cremers, Direct sparse odometry, IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(3), 2018, 611–625.
[4] J. Zubizarreta, I. Aguinaga, and J.M.M. Montiel, Direct sparse mapping, IEEE Transactions on Robotics, 36(4), 2020, 1363–1370.
[5] L. Di Giammarino, L. Brizi, T. Guadagnino, C. Stachniss, and G. Grisetti, MD-SLAM: Multi-cue direct SLAM, Proc. 2022 IEEE/RSJ International Conf. on Intelligent Robots and Systems (IROS), Kyoto, Japan, 2022, 11047–11054.
[6] J. Mo, M.J. Islam, and J. Sattar, Fast direct stereo visual SLAM, IEEE Robotics and Automation Letters, 7(2), 2022, 778–785.
[7] Y. Bao, Z. Yang, Y. Pan, and R. Huan, Semantic-direct visual odometry, IEEE Robotics and Automation Letters, 7(3), 2022, 6718–6725.
[8] G. Klein and D. Murray, Parallel tracking and mapping for small AR workspaces, Proc. 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, Nara, Japan, 2007, 225–234.
[9] R. Mur-Artal, J.M.M. Montiel, and J.D. Tardós, ORB-SLAM: A versatile and accurate monocular SLAM system, IEEE Transactions on Robotics, 31(5), 2015, 1147–1163.
[10] R. Mur-Artal and J.D. Tardós, ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras, IEEE Transactions on Robotics, 33(5), 2017, 1255–1262.
[11] H. Wang, C. Zhang, Y. Song, and B. Pang, Robot arm perceptive exploration based significant SLAM in search and rescue environment, International Journal of Robotics & Automation, 33(4), 2018.
[12] B. Han and L. Xu, MLC-SLAM: Mask loop closing for monocular SLAM, International Journal of Robotics & Automation, 37(1), 2022.
[13] S. Badalkhani, R. Havangi, and M. Farshad, An improved simultaneous localization and mapping for dynamic environments, International Journal of Robotics & Automation, 36(6), 2021, 374–382.
[14] Y. Jin, L. Yu, Z. Chen, and S. Fei, A mono SLAM method based on depth estimation by DenseNet-CNN, IEEE Sensors Journal, 22(3), 2022, 2447–2455.
[15] H. Zhou, D. Zou, L. Pei, R. Ying, P. Liu, and W. Yu, StructSLAM: Visual SLAM with building structure lines, IEEE Transactions on Vehicular Technology, 64(4), 2015, 1364–1375.
[16] S.J. Lee and S.S. Hwang, Elaborate monocular point and line SLAM with robust initialization, Proc. 2019 IEEE/CVF International Conf. on Computer Vision (ICCV), Seoul, 2019, 1121–1129.
[17] K. Li, J. Yao, X. Lu, L. Li, and Z. Zhang, Hierarchical line matching based on line-junction-line structure descriptor and local homography estimation, Neurocomputing, 184, 2016, 207–220.
[18] F. Zhang, T. Rui, C. Yang, and J. Shi, LAP-SLAM: A line-assisted point-based monocular VSLAM, Electronics, 8(2), 2019, 243.
[19] A. Pumarola, A. Vakhitov, A. Agudo, A. Sanfeliu, and F. Moreno-Noguer, PL-SLAM: Real-time monocular visual SLAM with points and lines, Proc. 2017 IEEE International Conf. on Robotics and Automation (ICRA), 2017, 4503–4508.
[20] R. Gomez-Ojeda, F.-A. Moreno, D. Zuñiga-Noël, D. Scaramuzza, and J. Gonzalez-Jimenez, PL-SLAM: A stereo SLAM system through the combination of points and line segments, IEEE Transactions on Robotics, 35(3), 2019, 734–746.
[21] Y. Li, N. Brasch, Y. Wang, N. Navab, and F. Tombari, Structure-SLAM: Low-drift monocular SLAM in indoor environments, IEEE Robotics and Automation Letters, 5(4), 2020, 6583–6590.
[22] X. Zuo, X. Xie, Y. Liu, and G. Huang, Robust visual SLAM with point and line features, Proc. 2017 IEEE/RSJ International Conf. on Intelligent Robots and Systems (IROS), Vancouver, BC, 2017, 1775–1782.
[23] R. Hartley, Multiple view geometry in computer vision (Cambridge, UK: Cambridge Univ. Press, 2003).
[24] A. Bartoli and P. Sturm, Structure-from-motion using lines: Representation, triangulation, and bundle adjustment, Computer Vision and Image Understanding, 100(3), 2005, 416–441.
[25] Y. Yang, P. Geneva, K. Eckenhoff, and G. Huang, Visual-inertial odometry with point and line features, Proc. IEEE/RSJ International Conf. on Intelligent Robots and Systems (IROS), Macau, 2019, 2447–2454.
[26] T. Sugiura, A. Torii, and M. Okutomi, 3D surface reconstruction from point-and-line cloud, Proc. International Conf. on 3D Vision, Lyon, 2015, 264–272.
[27] H. Zhou, D. Zhou, K. Peng, W. Fan, and Y. Liu, SLAM-based 3D line reconstruction, Proc. 2018 13th World Congress on Intelligent Control and Automation (WCICA), Changsha, 2018, 1148–1153.
[28] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, ORB: An efficient alternative to SIFT or SURF, Proc. IEEE International Conf. on Computer Vision (ICCV), Barcelona, Spain, November 2011, 2564–2571.
[29] R. Grompone von Gioi, J. Jakubowicz, J.-M. Morel, and G. Randall, LSD: A fast line segment detector with a false detection control, IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(4), 2010, 722–732.
[30] L. Zhang and R. Koch, An efficient and robust line segment matching approach based on LBD descriptor and pairwise geometric consistency, Journal of Visual Communication and Image Representation, 24(7), 2013, 794–805.
[31] D. Gálvez-López and J.D. Tardós, Bags of binary words for fast place recognition in image sequences, IEEE Transactions on Robotics, 28(5), 2012, 1188–1197.
[32] G. Zhang, J.H. Lee, J. Lim, and I.H. Suh, Building a 3-D line-based map using stereo SLAM, IEEE Transactions on Robotics, 31(6), 2015, 1364–1377.
[33] X. Lu, J. Yao, H. Li, Y. Liu, and X. Zhang, 2-line exhaustive searching for real-time vanishing point estimation in Manhattan world, Proc. 2017 IEEE Winter Conf. on Applications of Computer Vision (WACV), Santa Rosa, CA, 2017, 345–353.
[34] R. Toldo and A. Fusiello, Robust multiple structures estimation with J-linkage, Proc. 2008 European Conf. on Computer Vision (ECCV), Berlin, Heidelberg, 2008, 537–547.
[35] M. Burri, J. Nikolic, P. Gohl, T. Schneider, J. Rehder, S. Omari, M. Achtelik, and R. Siegwart, The EuRoC micro aerial vehicle datasets, The International Journal of Robotics Research, 35, 2016, 1157–1163.
[36] S. Agarwal, K. Mierle, and The Ceres Solver Team, Ceres Solver, Sep. 2021, http://ceres-solver.org.
[37] X. Gao, R. Wang, N. Demmel, and D. Cremers, LDSO: Direct sparse odometry with loop closure, Proc. 2018 IEEE/RSJ International Conf. on Intelligent Robots and Systems (IROS), Madrid, Spain, 2018, 2198–2204.
[38] F. Zhou, L. Zhang, C. Deng, and X. Fan, Improved point-line feature based visual SLAM method for complex environments, Sensors, 21(13), 2021, 4604.
