Abstract and keywords
Abstract:
The monocular visual odometry algorithm comprises several basic steps, and for each of them a number of methods exist. The purpose of this work is a practical study of key point detection methods and optical flow computation in the problem of estimating the ego-motion of an unmanned ground vehicle. To study the detection methods, a panel of images was assembled containing various distortions typical of footage captured by a mobile robot. It was found that, among the methods considered, FAST finds the largest number of points in the shortest time; at the same time, image distortions strongly affect its results, which is detrimental to robot positioning. The Shi-Tomasi method was therefore chosen for key point detection: it also runs quickly, and its results are less sensitive to distortion. For a later stage of the odometry algorithm, a number of video clips were recorded by the mobile robot in a confined space. From these experiments, conclusions were drawn about applying the Lucas-Kanade optical flow method to track the detected points across the video sequence. Based on the errors in the results obtained, it was concluded that monocular odometry cannot serve as the sole positioning method for an unmanned vehicle in confined spaces, but in combination with measurements from auxiliary sensors it can be used to determine the position of a robotic system.

Keywords:
positioning, monocular visual odometry, image key points, detector, FAST, Harris, ORB, Shi-Tomasi, optical flow, Lucas-Kanade, Farneback
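The two building blocks named above, Shi-Tomasi corner scoring and single-window Lucas-Kanade flow, can be sketched in NumPy as follows. This is an illustrative sketch, not code from the paper: the function names and the synthetic Gaussian-blob image pair are assumptions, and a practical pipeline would instead use pyramidal, iterative implementations such as OpenCV's `goodFeaturesToTrack` and `calcOpticalFlowPyrLK`.

```python
import numpy as np

def gradients(I):
    """Central-difference image gradients (Ix along columns, Iy along rows)."""
    Iy, Ix = np.gradient(I.astype(float))
    return Ix, Iy

def shi_tomasi_score(I, x, y, win=7):
    """Shi-Tomasi score: minimum eigenvalue of the structure tensor
    accumulated over a (win x win) window centred at (x, y)."""
    Ix, Iy = gradients(I)
    h = win // 2
    sl = (slice(y - h, y + h + 1), slice(x - h, x + h + 1))
    ix, iy = Ix[sl].ravel(), Iy[sl].ravel()
    a, b, c = ix @ ix, ix @ iy, iy @ iy
    return (a + c) / 2 - np.sqrt(((a - c) / 2) ** 2 + b ** 2)

def lucas_kanade(I0, I1, x, y, win=7):
    """Single-window Lucas-Kanade: least-squares solution of
    [Ix Iy] d = -It for the displacement d = (u, v)."""
    Ix, Iy = gradients(I0)
    It = I1.astype(float) - I0.astype(float)
    h = win // 2
    sl = (slice(y - h, y + h + 1), slice(x - h, x + h + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    d, *_ = np.linalg.lstsq(A, -It[sl].ravel(), rcond=None)
    return d  # (u, v): motion of the point from frame 0 to frame 1

# Synthetic frame pair: a Gaussian blob shifted by 0.5 px along x.
X, Y = np.meshgrid(np.arange(41), np.arange(41))
I0 = np.exp(-((X - 20.0) ** 2 + (Y - 20.0) ** 2) / 50.0)
I1 = np.exp(-((X - 20.5) ** 2 + (Y - 20.0) ** 2) / 50.0)
u, v = lucas_kanade(I0, I1, x=17, y=17)  # point on the blob's flank; u ≈ 0.5, v ≈ 0
```

The Shi-Tomasi score and the Lucas-Kanade normal matrix share the same structure tensor, which is one reason the combination works well: a point with a high minimum eigenvalue is exactly a point where the 2x2 flow system is well conditioned.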
References

1. Calonder, M. BRIEF: Binary Robust Independent Elementary Features / M. Calonder, V. Lepetit, Chr. Strecha, P. Fua // 11th European Conference on Computer Vision (ECCV). – 2010. – pp. 778–792.

2. Checchin, P. Radar scan matching SLAM using the Fourier-Mellin transform / P. Checchin, Fr. Gerossier, Chr. Blanc // Field and Service Robotics / Springer. – 2010. – pp. 151–161.

3. Cole, D.M. Using laser range data for 3D SLAM in outdoor environments / D.M. Cole, P.M. Newman // Robotics and Automation, 2006. ICRA 2006. Proceedings 2006 IEEE International Conference on / IEEE. – 2006. – pp. 1556–1563.

4. Rosten, E. Fusing points and lines for high performance tracking / E. Rosten, T. Drummond // Tenth IEEE International Conference on Computer Vision (ICCV) / IEEE. – 2005.

5. Farneback, G. Two-frame motion estimation based on polynomial expansion / G. Farneback // Lecture Notes in Computer Science. – 2003. – pp. 363–370.

6. Harris, C. A combined corner and edge detector / C. Harris, M. Stephens // In Fourth Alvey Vision Conference, Manchester, UK. – 1988. – pp. 147–151.

7. Hess, W. Real-time loop closure in 2D LIDAR SLAM / W. Hess, D. Kohler, H. Rapp, D. Andor // Robotics and Automation (ICRA), 2016 IEEE International Conference on / IEEE. – 2016. – pp. 1271–1278.

8. Kitt, B.M. Monocular visual odometry using a planar road model to solve scale ambiguity / B.M. Kitt, J. Rehder, A.D. Chambers, M. Schonbein, H. Lategahn, S. Singh // In Proc. European Conference on Mobile Robots, September 2011. – 2011.

9. Lucas, B.D. An Iterative Image Registration Technique with an Application to Stereo Vision / B.D. Lucas, T. Kanade // Proceedings of the 7th International Joint Conference on Artificial Intelligence. – 1981. – pp. 674–679.

10. Rublee, E. ORB: an efficient alternative to SIFT or SURF / E. Rublee, V. Rabaud, K. Konolige, G. Bradski // Computer Vision (ICCV), IEEE International Conference on / IEEE. – 2011. – pp. 2564–2571.

11. Shi, J. Good features to track / J. Shi, C. Tomasi // TR 93-1399, Cornell University. – 1993.

12. Yi, J. IMU-based localization and slip estimation for skid-steered mobile robots / J. Yi, J. Zhang, D. Song, S. Jayasuriya // Intelligent Robots and Systems, 2007. IROS 2007. IEEE/RSJ International Conference on / IEEE. – 2007. – pp. 2845–2850.