References (94)
[1]
S. Y. Chen, Y. F. Li, N. M. Kwok, Active vision in robotic systems: A survey of recent developments. The International Journal of Robotics Research Vol. 30, No. 11, 1343-1377, 2011.
[2]
W. R. Scott, G. Roth, J. Rivest, View planning for automated three-dimensional object reconstruction and inspection. ACM Computing Surveys Vol. 35, No. 1, 64-96, 2003.
[3]
S. D. Roy, S. Chaudhury, S. Banerjee, Active recognition through next view planning: A survey. Pattern Recognition Vol. 37, No. 3, 429-446, 2004.
[4]
W. R. Scott, Model-based view planning. Machine Vision and Applications Vol. 20, No. 1, 47-69, 2009.
[5]
K. A. Tarabanis, R. Y. Tsai, P. K. Allen, Automated sensor planning for robotic vision tasks. In: Proceedings of the IEEE International Conference on Robotics and Automation, 76-82, 1991.
[6]
K. A. Tarabanis, P. K. Allen, R. Y. Tsai, A survey of sensor planning in computer vision. IEEE Transactions on Robotics and Automation Vol. 11, No. 1, 86-104, 1995.
[7]
Y. M. Ye, J. K. Tsotsos, Sensor planning for 3D object search. Computer Vision and Image Understanding Vol. 73, No. 2, 145-168, 1999.
[8]
R. Pito, A solution to the next best view problem for automated surface acquisition. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 21, No. 10, 1016-1030, 1999.
[9]
R. Pito, A sensor-based solution to the “next best view” problem. In: Proceedings of the 13th International Conference on Pattern Recognition, Vol. 1, 941-945, 1996.
[10]
J. E. Banta, L. R. Wong, C. Dumont, M. A. Abidi, A next-best-view system for autonomous 3-D object reconstruction. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans Vol. 30, No. 5, 589-598, 2000.
[11]
S. Kriegel, C. Rink, T. Bodenmüller, M. Suppa, Efficient next-best-scan planning for autonomous 3D surface reconstruction of unknown objects. Journal of Real-Time Image Processing Vol. 10, No. 4, 611-631, 2015.
[12]
M. Corsini, P. Cignoni, R. Scopigno, Efficient and flexible sampling with blue noise properties of triangular meshes. IEEE Transactions on Visualization and Computer Graphics Vol. 18, No. 6, 914-924, 2012.
[13]
S. Khalfaoui, R. Seulin, Y. Fougerolle, D. Fofi, An efficient method for fully automatic 3D digitization of unknown objects. Computers in Industry Vol. 64, No. 9, 1152-1160, 2013.
[14]
M. Krainin, B. Curless, D. Fox, Autonomous generation of complete 3D object models using next best view manipulation planning. In: Proceedings of the IEEE International Conference on Robotics and Automation, 5031-5037, 2011.
[15]
S. Kriegel, C. Rink, T. Bodenmüller, A. Narr, M. Suppa, G. Hirzinger, Next-best-scan planning for autonomous 3D modeling. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2850-2856, 2012.
[16]
R. Eidenberger, J. Scharinger, Active perception and scene modeling by planning with probabilistic 6D object poses. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 1036-1043, 2010.
[17]
C. M. Wu, R. Zeng, J. Pan, C. C. L. Wang, Y. J. Liu, Plant phenotyping by deep-learning-based planner for multi-robots. IEEE Robotics and Automation Letters Vol. 4, No. 4, 3113-3120, 2019.
[18]
S. Y. Dong, K. Xu, Q. Zhou, A. Tagliasacchi, S. Q. Xin, M. Nießner, B. Chen, Multi-robot collaborative dense scene reconstruction. ACM Transactions on Graphics Vol. 38, No. 4, Article No. 84, 2019.
[19]
L. Liu, X. Xia, H. Sun, Q. Shen, J. Xu, B. Chen, H. Huang, K. Xu, Object-aware guidance for autonomous scene reconstruction. ACM Transactions on Graphics Vol. 37, No. 4, Article No. 104, 2018.
[20]
J. I. Vasquez-Gomez, L. E. Sucar, R. Murrieta-Cid, View/state planning for three-dimensional object reconstruction under uncertainty. Autonomous Robots Vol. 41, No. 1, 89-109, 2017.
[21]
N. Palomeras, N. Hurtos, E. Vidal, M. Carreras, Autonomous exploration of complex underwater environments using a probabilistic next-best-view planner. IEEE Robotics and Automation Letters Vol. 4, No. 2, 1619-1625, 2019.
[22]
A. Bircher, M. Kamel, K. Alexis, H. Oleynikova, R. Siegwart, Receding horizon “next-best-view” planner for 3D exploration. In: Proceedings of the IEEE International Conference on Robotics and Automation, 1462-1468, 2016.
[23]
D. Marr, T. Poggio, A computational theory of human stereo vision. Proceedings of the Royal Society B: Biological Sciences Vol. 204, No. 1156, 301-328, 1979.
[24]
R. Monica, J. Aleotti, Surfel-based next best view planning. IEEE Robotics and Automation Letters Vol. 3, No. 4, 3324-3331, 2018.
[25]
J. Delmerico, S. Isler, R. Sabzevari, D. Scaramuzza, A comparison of volumetric information gain metrics for active 3D object reconstruction. Autonomous Robots Vol. 42, No. 2, 197-208, 2018.
[26]
A. Hornung, K. M. Wurm, M. Bennewitz, C. Stachniss, W. Burgard, OctoMap: An efficient probabilistic 3D mapping framework based on octrees. Autonomous Robots Vol. 34, No. 3, 189-206, 2013.
[27]
Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, J. Xiao, 3D ShapeNets: A deep representation for volumetric shapes. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1912-1920, 2015.
[28]
A. X. Chang, T. Funkhouser, L. Guibas, P. Hanrahan, Q. X. Huang, Z. M. Li, S. Savarese, M. Savva, S. R. Song, H. Su, et al. ShapeNet: An information-rich 3D model repository. arXiv preprint arXiv:1512.03012, 2015.
[29]
J. Cui, J. T. Wen, J. Trinkle, A multi-sensor next-best-view framework for geometric model-based robotics applications. In: Proceedings of the International Conference on Robotics and Automation, 8769-8775, 2019.
[30]
Z. Y. Zhang, Microsoft Kinect sensor and its effect. IEEE Multimedia Vol. 19, No. 2, 4-10, 2012.
[31]
L. Keselman, J. I. Woodfill, A. Grunnet-Jepsen, A. Bhowmik, Intel RealSense stereoscopic depth cameras. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1-10, 2017.
[32]
G. H. Tarbox, S. N. Gottschlich, Planning for complete sensor coverage in inspection. Computer Vision and Image Understanding Vol. 61, No. 1, 84-111, 1995.
[33]
M. Mendoza, J. I. Vasquez-Gomez, H. Taud, L. E. Sucar, C. Reta, Supervised learning of the next-best-view for 3D object reconstruction. Pattern Recognition Letters Vol. 133, 224-231, 2020.
[34]
A. Nüchter, H. Surmann, J. Hertzberg, Planning robot motion for 3D digitalization of indoor environments. In: Proceedings of the 11th International Conference on Advanced Robotics, 78, 2003.
[35]
P. S. Blaer, P. K. Allen, Data acquisition and view planning for 3-D modeling tasks. In: Proceedings of the IEEE International Conference on Intelligent Robots and Systems, 417-422, 2007.
[36]
B. Browatzki, V. Tikhanoff, G. Metta, H. H. Bülthoff, C. Wallraven, Active object recognition on a humanoid robot. In: Proceedings of the IEEE International Conference on Robotics and Automation, 2021-2028, 2012.
[37]
J. Sock, S. H. Kasaei, L. S. Lopes, T. K. Kim, Multi-view 6D object pose estimation and camera motion planning using RGBD images. In: Proceedings of the IEEE International Conference on Computer Vision, 2228-2235, 2017.
[38]
N. A. Massios, R. B. Fisher, A best next view selection algorithm incorporating a quality criterion. In: Proceedings of the British Machine Vision Conference, 780-789, 1998.
[39]
A. Doumanoglou, R. Kouskouridas, S. Malassiotis, T. K. Kim, Recovering 6D object pose and predicting next-best-view in the crowd. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3583-3592, 2016.
[40]
S. H. Wu, W. Sun, P. X. Long, H. Huang, D. Cohen-Or, M. L. Gong, O. Deussen, B. Q. Chen, Quality-driven Poisson-guided autoscanning. ACM Transactions on Graphics Vol. 33, No. 6, Article No. 203, 2014.
[41]
C. Connolly, The determination of next best views. In: Proceedings of the IEEE International Conference on Robotics and Automation, 432-435, 1985.
[42]
L. M. Wong, C. Dumont, M. A. Abidi, Next best view system in a 3D object modeling task. In: Proceedings of the IEEE International Symposium on Computational Intelligence in Robotics and Automation, 306-311, 1999.
[43]
Y. J. Liu, J. B. Zhang, J. C. Hou, J. C. Ren, W. Q. Tang, Cylinder detection in large-scale point cloud of pipeline plant. IEEE Transactions on Visualization and Computer Graphics Vol. 19, No. 10, 1700-1707, 2013.
[44]
H. Huang, D. Li, H. Zhang, U. Ascher, D. Cohen-Or, Consolidation of unorganized point clouds for surface reconstruction. ACM Transactions on Graphics Vol. 28, No. 5, Article No. 176, 2009.
[45]
M. Kazhdan, M. Bolitho, H. Hoppe, Poisson surface reconstruction. In: Proceedings of the 4th Eurographics Symposium on Geometry Processing, Vol. 7, 2006.
[46]
M. Kazhdan, H. Hoppe, Screened Poisson surface reconstruction. ACM Transactions on Graphics Vol. 32, No. 3, Article No. 29, 2013.
[47]
J. I. Vasquez-Gomez, L. E. Sucar, R. Murrieta-Cid, E. Lopez-Damian, Volumetric next-best-view planning for 3D object reconstruction with positioning error. International Journal of Advanced Robotic Systems Vol. 11, No. 10, 159, 2014.
[48]
R. Diankov, J. Kuffner, OpenRAVE: A planning architecture for autonomous robotics. Technical Report CMU-RI-TR-08-34, Robotics Institute, Carnegie Mellon University, 2008.
[49]
A. Krizhevsky, I. Sutskever, G. E. Hinton, ImageNet classification with deep convolutional neural networks. In: Proceedings of the 25th International Conference on Neural Information Processing Systems, Vol. 1, 1097-1105, 2012.
[50]
W. Yuan, T. Khot, D. Held, C. Mertz, M. Hebert, PCN: Point completion network. In: Proceedings of the International Conference on 3D Vision, 728-737, 2018.
[51]
M. D. Kaba, M. G. Uzunbas, S. Lim, A reinforcement learning approach to the view planning problem. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 5094-5102, 2017.
[52]
I. Dinur, D. Steurer, Analytical approach to parallel repetition. In: Proceedings of the 46th Annual ACM Symposium on Theory of Computing, 624-633, 2014.
[53]
U. Feige, A threshold of ln n for approximating set cover. Journal of the ACM Vol. 45, No. 4, 634-652, 1998.
[54]
N. Smith, N. Moehrle, M. Goesele, W. Heidrich, Aerial path planning for urban scene reconstruction: A continuous optimization method and benchmark. ACM Transactions on Graphics Vol. 37, No. 6, Article No. 183, 2018.
[55]
H. Durrant-Whyte, T. Bailey, Simultaneous localization and mapping: Part I. IEEE Robotics & Automation Magazine Vol. 13, No. 2, 99-110, 2006.
[56]
T. Bailey, H. Durrant-Whyte, Simultaneous localization and mapping (SLAM): Part II. IEEE Robotics & Automation Magazine Vol. 13, No. 3, 108-117, 2006.
[57]
J. O'Rourke, Art Gallery Theorems and Algorithms, Vol. 57. Oxford University Press, 1987.
[58]
H. Gonzalez-Banos, E. Mao, J. C. Latombe, T. M. Murali, A. Efrat, Planning robot motion strategies for efficient model construction. In: Robotics Research. J. M. Hollerbach, D. E. Koditschek, Eds. Springer London, 345-352, 2000.
[59]
P. Blaer, P. K. Allen, TopBot: Automated network topology detection with a mobile robot. In: Proceedings of the IEEE International Conference on Robotics and Automation, Vol. 2, 1582-1587, 2003.
[60]
S. M. LaValle, Rapidly-exploring random trees: A new tool for path planning. Technical Report TR 98-11, Computer Science Department, Iowa State University, 1998.
[61]
S. Karaman, E. Frazzoli, Sampling-based algorithms for optimal motion planning. The International Journal of Robotics Research Vol. 30, No. 7, 846-894, 2011.
[62]
K. Xu, H. Huang, Y. Shi, H. Li, P. Long, J. Caichen, W. Sun, B. Chen, Autoscanning for coupled scene reconstruction and proactive object analysis. ACM Transactions on Graphics Vol. 34, No. 6, Article No. 177, 2015.
[63]
K. Xu, Y. Shi, L. Zheng, J. Zhang, M. Liu, H. Huang, H. Su, D. Cohen-Or, B. Chen, 3D attention-driven depth acquisition for object identification. ACM Transactions on Graphics Vol. 35, No. 6, Article No. 238, 2016.
[64]
S. Song, F. Yu, A. Zeng, A. X. Chang, M. Savva, T. Funkhouser, Semantic scene completion from a single depth image. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 190-198, 2017.
[65]
A. Dai, A. X. Chang, M. Savva, M. Halber, T. Funkhouser, M. Nießner, ScanNet: Richly-annotated 3D reconstructions of indoor scenes. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2432-2443, 2017.
[66]
L. T. Zheng, C. Y. Zhu, J. Z. Zhang, H. Zhao, H. Huang, M. Nießner, K. Xu, Active scene understanding via online semantic reconstruction. Computer Graphics Forum Vol. 38, No. 7, 103-114, 2019.
[67]
T. Bektas, The multiple traveling salesman problem: An overview of formulations and solution procedures. Omega Vol. 34, No. 3, 209-219, 2006.
[68]
X. Han, Z. Zhang, D. Du, M. Yang, J. Yu, P. Pan, X. Yang, L. Liu, Z. Xiong, S. Cui, Deep reinforcement learning of volume-guided progressive view inpainting for 3D point scene completion from a single depth image. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 234-243, 2019.
[69]
V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature Vol. 518, No. 7540, 529-533, 2015.
[70]
G. L. Liu, F. A. Reda, K. J. Shih, T. C. Wang, A. Tao, B. Catanzaro, Image inpainting for irregular holes using partial convolutions. In: Proceedings of the European Conference on Computer Vision, 89-105, 2018.
[71]
A. Dai, D. Ritchie, M. Bokeloh, S. Reed, J. Sturm, M. Nießner, ScanComplete: Large-scale scene completion and semantic segmentation for 3D scans. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 4578-4587, 2018.
[73]
M. Martinez,; A. Collet,; S. S. Srinivasa, Moped: A scalable and low latency object recognition and pose estimation system. In: Proceedings of the IEEE International Conference on Robotics and Automation, 2043-2049, 2010.
[74]
J. Tang,; S. Miller,; A. Singh,; P. Abbeel, A textured object recognition pipeline for color and depth image data. In: Proceedings of the IEEE International Conference on Robotics and Automation, 3467-3474, 2012.
[75]
S. Kriegel,; M. Brucker,; Z.-C. Marton,; T. Bodenmüller,; M. Suppa, Combining object modeling and recognition for active scene exploration In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2384-2391, 2013.
[76]
E. Johns,; S. Leutenegger,; A. J. Davison, Pairwise decomposition of image sequences for active multi-view recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3813-3822, 2016.
[77]
S. A. Hutchinson,; A. C. Kak, Planning sensing strategies in a robot work cell with multi-sensor capabilities. IEEE Transactions on Robotics and Automation Vol. 5, No. 6, 765-783, 1989.
[78]
S. J. Dickinson,; H. I. Christensen,; J. K. Tsotsos,; G. Olofsson, Active object recognition integrating attention and viewpoint control. Computer Vision and Image Understanding Vol. 67, No. 3, 239-260, 1997.
[79]
D. Fox,; W. Burgard,; F. Dellaert,; S Thrun,; Monte Carlo localization: Efficient position estimation for mobile robots. In: Proceedings of the 16th National Conference on Artificial Intelligence and the 11th Innovative Applications of Artificial Intelligence Conference Innovative Applications of Artificial Intelligence, 343-349, 1999.
[80]
E. Johns, O. Mac Aodha, G. J. Brostow, Becoming the expert - interactive multi-class machine teaching. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2616-2624, 2015.
[81]
N. Silberman, D. Hoiem, P. Kohli, R. Fergus, Indoor segmentation and support inference from RGBD images. In: Proceedings of the European Conference on Computer Vision, 746-760, 2012.
[82]
R. Kouskouridas, K. Charalampous, A. Gasteratos, Sparse pose manifolds. Autonomous Robots Vol. 37, No. 2, 191-207, 2014.
[83]
S. Makris, P. Karagiannis, S. Koukas, A. S. Matthaiakis, Augmented reality system for operator support in human-robot collaborative assembly. CIRP Annals Vol. 65, No. 1, 61-64, 2016.
[84]
K. Wu, R. Ranasinghe, G. Dissanayake, Active recognition and pose estimation of household objects in clutter. In: Proceedings of the IEEE International Conference on Robotics and Automation, 4230-4237, 2015.
[85]
A. Richtsfeld, T. Mörwald, J. Prankl, M. Zillich, M. Vincze, Segmentation of unknown objects in indoor environments. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 4791-4796, 2012.
[86]
H. Bay, A. Ess, T. Tuytelaars, L. van Gool, Speeded-up robust features (SURF). Computer Vision and Image Understanding Vol. 110, No. 3, 346-359, 2008.
[87]
D. G. Lowe, Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision Vol. 60, No. 2, 91-110, 2004.
[88]
K. S. Arun, T. S. Huang, S. D. Blostein, Least-squares fitting of two 3-D point sets. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 9, No. 5, 698-700, 1987.
[89]
A. Doumanoglou, T. K. Kim, X. W. Zhao, S. Malassiotis, Active random forests: An application to autonomous unfolding of clothes. In: Proceedings of the European Conference on Computer Vision, 644-658, 2014.
[90]
L. Breiman, Random forests. Machine Learning Vol. 45, No. 1, 5-32, 2001.
[91]
J. Gall, A. Yao, N. Razavi, L. van Gool, V. Lempitsky, Hough forests for object detection, tracking, and action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 33, No. 11, 2188-2202, 2011.
[92]
A. Tejani, D. H. Tang, R. Kouskouridas, T. K. Kim, Latent-class Hough forests for 3D object detection and pose estimation. In: Computer Vision - ECCV 2014. Lecture Notes in Computer Science, Vol. 8694. D. Fleet, T. Pajdla, B. Schiele, T. Tuytelaars, Eds. Springer Cham, 462-477, 2014.
[93]
A. Coates, A. Ng, H. Lee, An analysis of single-layer networks in unsupervised feature learning. In: Proceedings of the 14th International Conference on Artificial Intelligence and Statistics, Vol. 15, 215-223, 2011.
[94]
S. Kriegel, T. Bodenmüller, M. Suppa, G. Hirzinger, A surface-based next-best-view approach for automated 3D model completion of unknown objects. In: Proceedings of the IEEE International Conference on Robotics and Automation, 4869-4874, 2011.