Hubei Agricultural Sciences ›› 2020, Vol. 59 ›› Issue (5): 28-36. doi: 10.14088/j.cnki.issn0439-8114.2020.05.006

• Resources & Environment •

Research on classification and fusion scale of Sentinel-1A and Sentinel-2A with different cloud cover

LUO Dong, LUO Hong-xia, LIU Guang-peng, LEI Xi, FENG Hua-mei

  1. School of Geographical Sciences / Chongqing Key Laboratory of Karst Environment, Southwest University, Chongqing 400715, China
  • Revised: 2019-12-12  Online: 2020-03-10  Published: 2020-06-05
  • Corresponding author: LUO Hong-xia, associate professor, research focus on integrated GIS and remote sensing applications; e-mail: tam_7236@163.com
  • First author: LUO Dong (1993-), male, from Guangyuan, Sichuan; master's student, research interests in remote sensing and GIS applications; tel: 18875092993; e-mail: Mr.Luodong@outlook.com
  • Funding:
    National Natural Science Foundation of China (41201436)




Abstract: This research aimed to improve the utilization of optical remote sensing imagery in cloudy and foggy areas and to explore the best fusion scale between SAR imagery and optical imagery with different cloud cover. Taking the Yuzhong area of Chongqing as the study area, wavelet transform fusion, multiplicative transform fusion and high-pass filter fusion were carried out on Sentinel-1A polarimetric synthetic aperture radar (SAR) imagery and Sentinel-2A multispectral images with cloud cover of 0, 10%, 20% and 30%. The fused images were then assessed with image quality evaluation metrics, all images were classified with an object-oriented method, and the final classification accuracies were compared using confusion matrices. The results show that in the cloud-free case the wavelet transform gives the best fusion effect: it preserves the brightness and contrast of the original multispectral image to the greatest extent, effectively prevents loss of image information, and clearly improves the interpretability of vegetation, although the fidelity of the fused image is relatively poor; the other two fusion methods perform somewhat worse. Therefore, the wavelet fusion algorithm is recommended first when fusing multi-source, and especially heterogeneous, remote sensing data in cloudy and foggy areas. When cloud cover reaches 10% or more, the information entropy of all three fusion algorithms increases slightly, but the average gradient and standard deviation decrease, making interpretation difficult; the final classification accuracy is consequently slightly lower than that of the Sentinel-2A image alone and much lower than that of the Sentinel-1A image, and basically cannot meet practical requirements. For land cover interpretation under such conditions, SAR imagery is therefore recommended in place of optical imagery.
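The fusion-then-evaluation workflow summarized in the abstract can be sketched in Python/NumPy. This is an illustrative sketch, not the authors' implementation: only the high-pass-filter fusion variant is shown (the wavelet and multiplicative variants are omitted), and the function names (`mean_filter`, `hpf_fusion`, `entropy`, `average_gradient`) and the filter size and weight defaults are assumptions.

```python
import numpy as np

def mean_filter(img, k=5):
    """k x k moving-average low-pass filter with edge padding."""
    pad = k // 2
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def hpf_fusion(optical_band, sar_band, k=5, weight=0.5):
    """High-pass-filter fusion: inject the SAR band's high-frequency
    detail (SAR minus its low-pass version) into the optical band."""
    high = sar_band.astype(np.float64) - mean_filter(sar_band, k)
    return optical_band.astype(np.float64) + weight * high

def entropy(img, levels=256):
    """Shannon information entropy of a quantized band, in bits."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

def average_gradient(img):
    """Mean local gradient magnitude; larger values mean sharper detail."""
    img = img.astype(np.float64)
    gx = np.diff(img, axis=1)[:-1, :]  # horizontal differences
    gy = np.diff(img, axis=0)[:, :-1]  # vertical differences
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2)))
```

Standard deviation (`np.std`) completes the three evaluation indices named in the abstract (information entropy, average gradient, standard deviation).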

Key words: image fusion, synthetic aperture radar (SAR), land cover classification, object-oriented classification, cloudy and foggy areas
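Classification results in the abstract are compared via confusion matrices. A minimal sketch of the overall accuracy and kappa coefficient typically derived from such a matrix (illustrative code under assumed conventions, not taken from the paper; rows are reference classes, columns are predictions):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows = reference (true) classes, columns = predicted classes."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def overall_accuracy(cm):
    """Fraction of samples on the diagonal (correctly classified)."""
    return np.trace(cm) / cm.sum()

def kappa(cm):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = cm.sum()
    po = np.trace(cm) / n
    pe = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / n ** 2
    return (po - pe) / (1 - pe)
```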
