<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.3 20210610//EN" "JATS-journalpublishing1-3.dtd">
<article article-type="research-article" dtd-version="1.3" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xml:lang="ru"><front><journal-meta><journal-id journal-id-type="publisher-id">tuzsut</journal-id><journal-title-group><journal-title xml:lang="ru">Труды учебных заведений связи</journal-title><trans-title-group xml:lang="en"><trans-title>Proceedings of Telecommunication Universities</trans-title></trans-title-group></journal-title-group><issn pub-type="ppub">1813-324X</issn><issn pub-type="epub">2712-8830</issn><publisher><publisher-name>СПбГУТ</publisher-name></publisher></journal-meta><article-meta><article-id pub-id-type="doi">10.31854/1813-324X-2024-10-1-97-106</article-id><article-id custom-type="edn" pub-id-type="custom">MHMJRS</article-id><article-id custom-type="elpub" pub-id-type="custom">tuzsut-551</article-id><article-categories><subj-group subj-group-type="heading"><subject>Research Article</subject></subj-group><subj-group subj-group-type="section-heading" xml:lang="ru"><subject>ИНФОРМАЦИОННЫЕ ТЕХНОЛОГИИ И ТЕЛЕКОММУНИКАЦИИ</subject></subj-group><subj-group subj-group-type="section-heading" xml:lang="en"><subject>INFORMATION TECHNOLOGIES AND TELECOMMUNICATION</subject></subj-group></article-categories><title-group><article-title>Открытый набор данных для тестирования Visual SLAM-алгоритмов при различных погодных условиях</article-title><trans-title-group xml:lang="en"><trans-title>Open Dataset for Testing of Visual SLAM Algorithms under Different Weather Conditions</trans-title></trans-title-group></title-group><contrib-group><contrib contrib-type="author" corresp="yes"><contrib-id contrib-id-type="orcid">https://orcid.org/0009-0008-3022-5282</contrib-id><name-alternatives><name name-style="eastern" xml:lang="ru"><surname>Подтихов</surname><given-names>А. 
В.</given-names></name><name name-style="western" xml:lang="en"><surname>Podtikhov</surname><given-names>A.</given-names></name></name-alternatives><bio xml:lang="ru"><p>аспирант лаборатории автономных робототехнических систем Санкт-Петербургского Федерального исследовательского центра Российской академии наук</p></bio><email xlink:type="simple">a.podtikhov@gmail.com</email><xref ref-type="aff" rid="aff-1"/></contrib><contrib contrib-type="author" corresp="yes"><contrib-id contrib-id-type="orcid">https://orcid.org/0000-0003-1851-2699</contrib-id><name-alternatives><name name-style="eastern" xml:lang="ru"><surname>Савельев</surname><given-names>А. И.</given-names></name><name name-style="western" xml:lang="en"><surname>Saveliev</surname><given-names>A.</given-names></name></name-alternatives><bio xml:lang="ru"><p>кандидат технических наук, старший научный сотрудник лаборатории автономных робототехнических систем Санкт-Петербургского Федерального исследовательского центра Российской академии наук</p></bio><email xlink:type="simple">saveliev@iias.spb.su</email><xref ref-type="aff" rid="aff-1"/></contrib></contrib-group><aff-alternatives id="aff-1"><aff xml:lang="ru">Санкт-Петербургский Федеральный исследовательский центр Российской академии наук<country>Россия</country></aff><aff xml:lang="en">Saint-Petersburg Federal Research Center of the Russian Academy of Sciences<country>Russian Federation</country></aff></aff-alternatives><pub-date pub-type="collection"><year>2024</year></pub-date><pub-date pub-type="epub"><day>28</day><month>02</month><year>2024</year></pub-date><volume>10</volume><issue>1</issue><fpage>97</fpage><lpage>106</lpage><permissions><copyright-statement>Copyright &#x00A9; Подтихов А.В., Савельев А.И., 2024</copyright-statement><copyright-year>2024</copyright-year><copyright-holder xml:lang="ru">Подтихов А.В., Савельев А.И.</copyright-holder><copyright-holder xml:lang="en">Podtikhov A., Saveliev A.</copyright-holder><license 
license-type="creative-commons-attribution" xlink:href="https://creativecommons.org/licenses/by/4.0/" xlink:type="simple"><license-p>This work is licensed under a Creative Commons Attribution 4.0 License.</license-p></license></permissions><self-uri xlink:href="https://tuzs.sut.ru/jour/article/view/551">https://tuzs.sut.ru/jour/article/view/551</self-uri><abstract><p>Существующие наборы данных для тестирования SLAM-алгоритмов в открытой местности не подходят для оценки влияния погодных условий на точность локализации. Получить подходящий набор из реального мира трудно из-за длительного периода сбора данных и невозможности исключения динамических факторов среды. Искусственно сгенерированные наборы данных позволяют обойти описанные ограничения, однако на текущий момент исследователи не выделяли тестирование SLAM-алгоритмов при различных погодных условиях как отдельную задачу, несмотря на то, что она является одним из аспектов различия между открытой и закрытой местностями. В данной работе представлен новый открытый набор данных, который состоит из 36 последовательностей движения робота в городской среде или по пересеченной местности, в виде изображений со стереокамеры и истинного положения робота, собранных с частотой 30 Гц. Движение в пределах одной местности происходит по фиксированному маршруту, последовательности отличают только климатические условия, что может позволить корректно оценить влияние погодных явлений на точность локализации.</p></abstract><trans-abstract xml:lang="en"><p>Existing datasets for testing SLAM algorithms in outdoor environments are not suitable for assessing the influence of weather conditions on localization accuracy. Obtaining a suitable dataset from the real world is difficult due to the long data collection period and the inability to exclude dynamic environmental factors. 
Artificially generated datasets make it possible to bypass the described limitations, but to date, researchers have not identified testing SLAM algorithms under different weather conditions as a stand-alone task, despite the fact that it is one of the main aspects of the difference between outdoor and indoor environments. This work presents a new open dataset that consists of 36 sequences of robot movement in an urban environment or rough terrain, in the form of images from a stereo camera and the ground truth position of the robot, collected at a frequency of 30 Hz. Movement within one area occurs along a fixed route; the sequences are distinguished only by weather conditions, which can make it possible to correctly assess the influence of weather phenomena on the accuracy of localization.</p></trans-abstract><kwd-group xml:lang="ru"><kwd>SLAM</kwd><kwd>одновременная локализация и построение карты</kwd><kwd>набор данных</kwd><kwd>AirSim</kwd><kwd>погодные условия</kwd></kwd-group><kwd-group xml:lang="en"><kwd>SLAM</kwd><kwd>simultaneous localization and mapping</kwd><kwd>dataset</kwd><kwd>AirSim</kwd><kwd>weather conditions</kwd></kwd-group></article-meta></front><back><ref-list><title>References</title><ref id="cit1"><label>1</label><citation-alternatives><mixed-citation xml:lang="ru">Olson C.F., Matthies L.H., Schoppers H., Maimone M.W. Robust stereo ego-motion for long distance navigation // Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2000, Hilton Head, USA, 15 June 2000). Cat. No. PR00662. IEEE, 2000. Vol. 2. PP. 453‒458. DOI:10.1109/CVPR.2000.854879</mixed-citation><mixed-citation xml:lang="en">Olson C.F., Matthies L.H., Schoppers H., Maimone M.W. Robust stereo ego-motion for long distance navigation. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2000, 15 June 2000, Hilton Head, USA, Cat. No. PR00662, vol.2. IEEE; 2000. p.453‒458. 
DOI:10.1109/CVPR.2000.854879</mixed-citation></citation-alternatives></ref><ref id="cit2"><label>2</label><citation-alternatives><mixed-citation xml:lang="ru">Schubert D., Goll T., Demmel N., Usenko V., Stückler J., Cremers D. The TUM VI Benchmark for Evaluating Visual-Inertial Odometry // Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS, Madrid, Spain, 01‒05 October 2018). IEEE, 2018. PP. 1680‒1687. DOI:10.1109/IROS.2018.8593419</mixed-citation><mixed-citation xml:lang="en">Schubert D., Goll T., Demmel N., Usenko V., Stückler J., Cremers D. The TUM VI Benchmark for Evaluating Visual-Inertial Odometry. Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, 01‒05 October 2018, Madrid, Spain. IEEE; 2018. p.1680‒1687. DOI:10.1109/IROS.2018.8593419</mixed-citation></citation-alternatives></ref><ref id="cit3"><label>3</label><citation-alternatives><mixed-citation xml:lang="ru">Fischler M.A., Bolles R.C. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography // Communications of the ACM. 1981. Vol. 24. Iss. 6. PP. 381‒395. DOI:10.1145/358669.358692</mixed-citation><mixed-citation xml:lang="en">Fischler M.A., Bolles R.C. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM. 1981;24(6):381‒395. DOI:10.1145/358669.358692</mixed-citation></citation-alternatives></ref><ref id="cit4"><label>4</label><citation-alternatives><mixed-citation xml:lang="ru">Shah S., Dey D., Lovett C., Kapoor A. Airsim: High-Fidelity Visual and Physical Simulation for Autonomous Vehicles // Results of the 11th International Conference on Field and Service Robotics (Zurich, Switzerland, 12‒15 September 2017). Springer Proceedings in Advanced Robotics. Cham: Springer, 2018. Vol. 5. PP. 621‒635. 
DOI:10.1007/978-3-319-67361-5_40</mixed-citation><mixed-citation xml:lang="en">Shah S., Dey D., Lovett C., Kapoor A. Airsim: High-Fidelity Visual and Physical Simulation for Autonomous Vehicles. Results of the 11th International Conference on Field and Service Robotics, 12‒15 September 2017, Zurich, Switzerland. Springer Proceedings in Advanced Robotics, vol.5. Cham: Springer; 2018. p.621‒635. DOI:10.1007/978-3-319-67361-5_40</mixed-citation></citation-alternatives></ref><ref id="cit5"><label>5</label><citation-alternatives><mixed-citation xml:lang="ru">Maddern W., Pascoe G., Newman P. 1 year, 1000 km: The Oxford RobotCar dataset // The International Journal of Robotics Research. 2017. Vol. 36. Iss. 1. PP. 3‒15. DOI:10.1177/0278364916679498</mixed-citation><mixed-citation xml:lang="en">Maddern W., Pascoe G., Newman P. 1 year, 1000 km: The Oxford RobotCar dataset. The International Journal of Robotics Research. 2017;36(1):3‒15. DOI:10.1177/0278364916679498</mixed-citation></citation-alternatives></ref><ref id="cit6"><label>6</label><citation-alternatives><mixed-citation xml:lang="ru">Cordts M., Omran M., Ramos S., Scharwachter T., Enzweiler M., Benenson R., et al. The Cityscapes Dataset. URL: https://markus-enzweiler.de/downloads/publications/cordts15-cvprws.pdf (Accessed 18.01.2024)</mixed-citation><mixed-citation xml:lang="en">Cordts M., Omran M., Ramos S., Scharwachter T., Enzweiler M., Benenson R., et al. The Cityscapes Dataset. URL: https://markus-enzweiler.de/downloads/publications/cordts15-cvprws.pdf [Accessed 18.01.2024]</mixed-citation></citation-alternatives></ref><ref id="cit7"><label>7</label><citation-alternatives><mixed-citation xml:lang="ru">Geiger A., Lenz P., Urtasun R. Are we ready for autonomous driving? The KITTI vision benchmark suite // Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Providence, USA, 16‒21 June 2012). IEEE, 2012. PP. 3354‒3361. 
DOI:10.1109/CVPR.2012.6248074</mixed-citation><mixed-citation xml:lang="en">Geiger A., Lenz P., Urtasun R. Are we ready for autonomous driving? The KITTI vision benchmark suite. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 16‒21 June 2012, Providence, USA. IEEE; 2012. p.3354‒3361. DOI:10.1109/CVPR.2012.6248074</mixed-citation></citation-alternatives></ref><ref id="cit8"><label>8</label><citation-alternatives><mixed-citation xml:lang="ru">Engel J., Usenko V., Cremers D. A Photometrically Calibrated Benchmark for Monocular Visual Odometry // arXiv preprint arXiv:1607.02555. 2016. DOI:10.48550/arXiv.1607.02555</mixed-citation><mixed-citation xml:lang="en">Engel J., Usenko V., Cremers D. A Photometrically Calibrated Benchmark for Monocular Visual Odometry. arXiv preprint arXiv:1607.02555. 2016. DOI:10.48550/arXiv.1607.02555</mixed-citation></citation-alternatives></ref><ref id="cit9"><label>9</label><citation-alternatives><mixed-citation xml:lang="ru">Chebrolu N., Lottes P., Schaefer A., Winterhalter W., Burgard W., Stachniss C. Agricultural robot dataset for plant classification, localization and mapping on sugar beet fields // The International Journal of Robotics Research. 2017. Vol. 36. Iss. 10. PP. 1045‒1052. DOI:10.1177/0278364917720510</mixed-citation><mixed-citation xml:lang="en">Chebrolu N., Lottes P., Schaefer A., Winterhalter W., Burgard W., Stachniss C. Agricultural robot dataset for plant classification, localization and mapping on sugar beet fields. The International Journal of Robotics Research. 2017;36(10):1045‒1052. DOI:10.1177/0278364917720510</mixed-citation></citation-alternatives></ref><ref id="cit10"><label>10</label><citation-alternatives><mixed-citation xml:lang="ru">Pire T., Mujica M., Civera J., Kofman E. The Rosario dataset: Multisensor data for localization and mapping in agricultural environments // The International Journal of Robotics Research. 2019. Vol. 38. Iss. 6. PP. 633‒641. 
DOI:10.1177/0278364919841437</mixed-citation><mixed-citation xml:lang="en">Pire T., Mujica M., Civera J., Kofman E. The Rosario dataset: Multisensor data for localization and mapping in agricultural environments. The International Journal of Robotics Research. 2019;38(6):633‒641. DOI:10.1177/0278364919841437</mixed-citation></citation-alternatives></ref><ref id="cit11"><label>11</label><citation-alternatives><mixed-citation xml:lang="ru">Minoda K., Schilling F., Wüest V., Floreano D., Yairi T. Viode: A Simulated Dataset to Address the Challenges of Visual-Inertial Odometry in Dynamic Environments // IEEE Robotics and Automation Letters. 2021. Vol. 6. Iss. 2. PP. 1343‒1350. DOI:10.1109/LRA.2021.3058073</mixed-citation><mixed-citation xml:lang="en">Minoda K., Schilling F., Wüest V., Floreano D., Yairi T. Viode: A Simulated Dataset to Address the Challenges of Visual-Inertial Odometry in Dynamic Environments. IEEE Robotics and Automation Letters. 2021;6(2):1343‒1350. DOI:10.1109/LRA.2021.3058073</mixed-citation></citation-alternatives></ref><ref id="cit12"><label>12</label><citation-alternatives><mixed-citation xml:lang="ru">Soliman A., Bonardi F., Sidibé D., Bouchafa S. IBISCape: A Simulated Benchmark for multi-modal SLAM Systems Evaluation in Large-scale Dynamic Environments // Journal of Intelligent &amp; Robotic Systems. 2022. Vol. 106. Iss. 3. P. 53. DOI:10.1007/s10846-022-01753-7</mixed-citation><mixed-citation xml:lang="en">Soliman A., Bonardi F., Sidibé D., Bouchafa S. IBISCape: A Simulated Benchmark for multi-modal SLAM Systems Evaluation in Large-scale Dynamic Environments. Journal of Intelligent &amp; Robotic Systems. 2022;106(3):53. DOI:10.1007/s10846-022-01753-7</mixed-citation></citation-alternatives></ref><ref id="cit13"><label>13</label><citation-alternatives><mixed-citation xml:lang="ru">Han Y., Liu Z., Sun S., Li D., Sun J., Hong Z., et al. 
CARLA-Loc: Synthetic SLAM Dataset with Full-stack Sensor Setup in Challenging Weather and Dynamic Environments // arXiv preprint arXiv:2309.08909. 2023. DOI:10.48550/arXiv.2309.08909</mixed-citation><mixed-citation xml:lang="en">Han Y., Liu Z., Sun S., Li D., Sun J., Hong Z., et al. CARLA-Loc: Synthetic SLAM Dataset with Full-stack Sensor Setup in Challenging Weather and Dynamic Environments. arXiv preprint arXiv:2309.08909. 2023. DOI:10.48550/arXiv.2309.08909</mixed-citation></citation-alternatives></ref><ref id="cit14"><label>14</label><citation-alternatives><mixed-citation xml:lang="ru">Dosovitskiy A., Ros G., Codevilla F., Lopez A., Koltun V. CARLA: An Open Urban Driving Simulator // Proceedings of the 1st Annual Conference on Robot Learning (PMLR, 13‒15 November 2017). 2017. Vol. 78. PP. 1‒16.</mixed-citation><mixed-citation xml:lang="en">Dosovitskiy A., Ros G., Codevilla F., Lopez A., Koltun V. CARLA: An Open Urban Driving Simulator. Proceedings of the 1st Annual Conference on Robot Learning, PMLR, 13‒15 November 2017, vol.78. 2017. p.1‒16.</mixed-citation></citation-alternatives></ref><ref id="cit15"><label>15</label><citation-alternatives><mixed-citation xml:lang="ru">Campos C., Elvira R., Rodríguez J.J.G., Montiel J.M.M., Tardós J.D. ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual–Inertial, and Multimap SLAM // IEEE Transactions on Robotics. 2021. Vol. 37. Iss. 6. PP. 1874‒1890. DOI:10.1109/TRO.2021.3075644</mixed-citation><mixed-citation xml:lang="en">Campos C., Elvira R., Rodríguez J.J.G., Montiel J.M.M., Tardós J.D. ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual–Inertial, and Multimap SLAM. IEEE Transactions on Robotics. 2021;37(6):1874‒1890. DOI:10.1109/TRO.2021.3075644</mixed-citation></citation-alternatives></ref></ref-list><fn-group><fn fn-type="conflict"><p>The authors declare that there are no conflicts of interest present.</p></fn></fn-group></back></article>
