Improved YOLOv11n-based pedestrian detection model in complex scenarios
基金项目:

国家自然科学基金(62441401);国防基础科研计划项目(JCKYS2022DC10)




    Abstract:

To address the decline in pedestrian detection accuracy in complex scenarios, where illumination variations, viewing angles, background interference and small pedestrian targets often lead to false positives and missed detections, a pedestrian detection model named YOLOv11-CREP was proposed based on an improved YOLOv11n. Firstly, CSPDConv, formed by integrating standard convolution (Conv) with space-to-depth convolution (SPDConv), was introduced to reduce information loss and enhance the extraction of critical details. Secondly, a RepNCSPELAN4-GC module was proposed, which incorporated GhostConv to optimize the RepNCSPELAN4 module and reduce its parameter count; the improved RepNCSPELAN4-GC module was then used to replace some of the C3k2 modules in the Neck layer. Next, efficient multi-scale attention (EMAttention) and parallel network attention (ParNetAttention) were fused into a new EMPAttention module to enhance the model's ability to detect small pedestrian targets. Finally, considering the characteristics of small and occluded pedestrian targets, a small-target detection head P2 was added to further improve the model's recognition of small targets. The experiments show that, compared with the original YOLOv11n model, YOLOv11-CREP improves the mean average precision (mAP) by 4.6 percentage points at an IoU threshold of 0.5, reaching 95.3%, and by 9.0 percentage points over the IoU range of 0.5 to 0.95, reaching 70.2%. The proposed model balances high detection performance with real-time requirements, effectively enhancing pedestrian detection in complex scenarios and providing a useful reference for modeling pedestrian detection tasks.
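The "no information loss" claim for SPDConv rests on its space-to-depth step, which rearranges spatial pixels into channels instead of discarding them the way strided convolution or pooling does. The following is a minimal NumPy sketch of that rearrangement only, not the authors' implementation: the function name and the (C, H, W) layout are illustrative assumptions.

```python
import numpy as np

def space_to_depth(x: np.ndarray, scale: int = 2) -> np.ndarray:
    """Losslessly downsample a (C, H, W) feature map to
    (C * scale**2, H // scale, W // scale) by moving each
    scale x scale spatial block into the channel dimension.
    Illustrative sketch of SPDConv's rearrangement step;
    an actual SPDConv would follow this with a non-strided conv.
    """
    c, h, w = x.shape
    assert h % scale == 0 and w % scale == 0, "H and W must be divisible by scale"
    # Split each spatial axis into (blocks, within-block offset).
    x = x.reshape(c, h // scale, scale, w // scale, scale)
    # Bring the two within-block offsets to the front: (s, s, C, H/s, W/s).
    x = x.transpose(2, 4, 0, 1, 3)
    # Merge offsets with channels: every input value survives.
    return x.reshape(c * scale * scale, h // scale, w // scale)

x = np.arange(16, dtype=float).reshape(1, 4, 4)
y = space_to_depth(x, 2)
print(y.shape)  # (4, 2, 2): quarter the spatial size, 4x the channels, same values
```

Because the output is just a permutation of the input values, downstream convolutions still see every pixel; this is what lets the model keep fine detail on small pedestrian targets that stride-2 downsampling would blur away.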

Cite this article:

LIU Wei, SHI Wei, YANG Miao, WANG Jingyang, HUANG Min, YANG Lin. Improved YOLOv11n-based pedestrian detection model in complex scenarios[J]. Journal of Hebei University of Science and Technology, 2026, 47(1): 60-72.

History
  • Received: 2025-05-14
  • Revised: 2025-07-10
  • Published online: 2026-02-09