Keywords:
visual SLAM
object detection
indoor dynamic environment
optical flow method
Abstract:
To address the reduced robustness and degraded localization accuracy of Simultaneous Localization and Mapping (SLAM) systems in dynamic indoor environments, an indoor dynamic visual SLAM algorithm based on an improved YOLOv8n object detection network is proposed. First, the YOLOv8n network is selected as the baseline: Ghost convolutions replace the original convolutions, and a small-object detection layer is added, reducing the model size and improving detection speed. Second, the improved YOLOv8n detector is combined with the Lucas-Kanade (LK) sparse optical flow method and integrated into the tracking thread of the visual SLAM system to identify dynamic objects in the scene and to screen out and remove dynamic feature points. Finally, only static feature points are used for feature matching and pose estimation. Experimental results show that, on the dynamic sequences of the TUM dataset, the root mean square error of the absolute trajectory (ATE RMSE) is reduced by an average of 96.62% compared with ORB-SLAM2, significantly improving the robustness and localization accuracy of the system. Compared with systems such as DS-SLAM and DynaSLAM, the proposed system also balances detection speed and localization accuracy more effectively.
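The claim that Ghost convolutions shrink the model can be illustrated with a back-of-the-envelope parameter count. The sketch below is not taken from the paper: it compares a standard convolution with a generic Ghost module (a primary convolution producing a fraction of the output channels, plus cheap depthwise operations generating the rest); the function names and the defaults `s=2`, `d=3` are our illustrative assumptions.

```python
def conv_params(c_in, c_out, k):
    """Weight count of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def ghost_params(c_in, c_out, k, s=2, d=3):
    """Weight count of a Ghost module: a primary convolution produces
    c_out // s intrinsic feature maps, then cheap depthwise d x d
    operations generate the remaining (s - 1) * (c_out // s) maps."""
    intrinsic = c_out // s
    primary = c_in * intrinsic * k * k       # ordinary convolution
    cheap = (s - 1) * intrinsic * d * d      # depthwise "cheap" ops
    return primary + cheap

# Example: a 3x3 convolution mapping 128 -> 128 channels.
standard = conv_params(128, 128, 3)   # 147456 weights
ghost = ghost_params(128, 128, 3)     # 73728 + 576 = 74304 weights
print(standard, ghost, ghost / standard)
```

With the common ratio `s=2`, the Ghost module needs roughly half the weights of the convolution it replaces, which is consistent with the reduced model size reported above.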
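The core filtering step, combining detector boxes with LK optical flow to discard dynamic feature points, can be sketched as follows. This is a minimal illustration under our own assumptions (using the median flow magnitude as a proxy for camera motion and a fixed pixel threshold); the actual criterion used in the paper may differ.

```python
import math

def filter_dynamic_points(points, flows, dynamic_boxes, flow_thresh=2.0):
    """Keep only feature points considered static.

    points:        list of (x, y) feature locations in the previous frame
    flows:         list of (dx, dy) LK optical-flow vectors, one per point
    dynamic_boxes: list of (x1, y1, x2, y2) detector boxes for potentially
                   dynamic objects (e.g. people)

    A point is discarded when it lies inside a dynamic box AND its flow
    magnitude deviates from the median flow by more than flow_thresh px.
    """
    mags = sorted(math.hypot(dx, dy) for dx, dy in flows)
    median = mags[len(mags) // 2] if mags else 0.0
    static = []
    for (x, y), (dx, dy) in zip(points, flows):
        in_box = any(x1 <= x <= x2 and y1 <= y <= y2
                     for x1, y1, x2, y2 in dynamic_boxes)
        moving = abs(math.hypot(dx, dy) - median) > flow_thresh
        if not (in_box and moving):
            static.append((x, y))
    return static

# Usage: one box over a person; the fast-moving point inside it is removed.
pts = [(10, 10), (50, 50), (52, 50)]
flow = [(0.5, 0.0), (8.0, 0.0), (0.4, 0.0)]
print(filter_dynamic_points(pts, flow, [(40, 40, 60, 60)]))
```

In a real pipeline the flow vectors would come from `cv2.calcOpticalFlowPyrLK` and the boxes from the improved YOLOv8n detector; only the surviving static points are then passed to feature matching and pose estimation.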
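The evaluation metric, the root mean square error of the absolute trajectory, can be stated compactly. A minimal sketch, assuming the estimated and ground-truth positions are already time-associated and aligned (as the standard TUM evaluation does):

```python
import math

def ate_rmse(est, gt):
    """ATE RMSE over 3D positions: sqrt of the mean squared
    Euclidean distance between estimated and ground-truth points."""
    assert len(est) == len(gt) and est
    se = sum((ex - gx) ** 2 + (ey - gy) ** 2 + (ez - gz) ** 2
             for (ex, ey, ez), (gx, gy, gz) in zip(est, gt))
    return math.sqrt(se / len(est))

def improvement_pct(rmse_new, rmse_old):
    """Relative reduction, as reported in the abstract (e.g. 96.62%)."""
    return (1.0 - rmse_new / rmse_old) * 100.0
```

The 96.62% figure above is the average of `improvement_pct` computed per dynamic TUM sequence against ORB-SLAM2.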