WO2019015344A1 - Image saliency object detection method based on center-dark channel priori information - Google Patents

Image saliency object detection method based on center-dark channel priori information Download PDF

Info

Publication number
WO2019015344A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
region
saliency
depth
dark channel
Prior art date
Application number
PCT/CN2018/078935
Other languages
French (fr)
Chinese (zh)
Inventor
李革
朱春彪
王文敏
王荣刚
高文
黄铁军
Original Assignee
北京大学深圳研究生院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京大学深圳研究生院
Publication of WO2019015344A1

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462: Salient features, e.g. scale invariant feature transforms [SIFT]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/23: Clustering techniques
    • G06F18/232: Non-hierarchical techniques
    • G06F18/2321: Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213: Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/40: Analysis of texture
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/56: Extraction of image or video features relating to colour
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; Image sequence
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10028: Range image; Depth image; 3D point clouds
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20021: Dividing image into blocks, subimages or windows

Definitions

  • The present invention relates to the field of image processing technologies, and in particular to a method for detecting salient objects in an image using center-dark channel prior information.
  • When facing a complex scene, the human eye's attention quickly concentrates on a few prominent visual objects, which are processed preferentially; this process is known as visual saliency.
  • Saliency detection exploits this biological mechanism of human vision, using mathematical computation to simulate how the eye processes an image and thereby extract its salient objects. Because the saliency region allows the computational resources for image analysis and synthesis to be allocated preferentially, detecting the saliency region of an image by computation is highly valuable.
  • The extracted saliency maps can be widely used in many computer vision applications, including segmentation of target objects of interest, detection and recognition of target objects, image compression and coding, image retrieval, and content-aware image editing.
  • Existing saliency detection frameworks are mainly divided into bottom-up and top-down saliency detection methods.
  • Most current methods are bottom-up: they are data-driven and independent of any specific task, whereas top-down methods are consciousness-driven and tied to specific tasks.
  • Bottom-up saliency detection methods mostly use low-level feature information such as color features, distance features, and heuristic saliency features. Although these methods have their own advantages, they are neither accurate nor robust enough on challenging data sets in specific scenarios.
  • Existing methods have adopted depth information to improve the accuracy of salient object detection. Although depth information can increase accuracy, detection still suffers when a salient object has low depth contrast with its background.
  • In summary, existing image salient object detection methods lack accuracy and robustness and are prone to false and missed detections, making a precise saliency detection result hard to obtain. This not only causes misdetection of the salient object itself but also introduces error into any application that uses the detection result.
  • The present invention proposes a new image salient object detection method based on center-dark channel prior information, which solves the problems of low accuracy and insufficient robustness in existing saliency detection.
  • The saliency region in the image is displayed more accurately, providing precise and useful information for later applications such as target recognition and classification.
  • A method for detecting salient objects in an image based on center-dark channel prior information: color, depth, and distance information are used to locate and detect the saliency region of the image, producing a preliminary detection result for the salient objects in the image; the center-dark channel prior information proposed by the invention is then used to optimize the final saliency detection result. The implementation includes the following steps:
  • Since salient objects are usually located at the center of the image, the center and depth weight DW(d_k) of each sub-region k of the depth map I_d are calculated;
  • The BSCA algorithm (a saliency detection method based on cellular automata, described in Qin Y, Lu H, Xu Y, et al. Saliency detection via Cellular Automata. IEEE Conference on Computer Vision and Pattern Recognition, 2015: 110-119) is used to obtain the center prior information S_csp of the image;
  • The invention provides an image salient object detection algorithm based on center-dark channel prior information: a preliminary saliency result is first computed from the image's color, spatial, and depth information; the center-dark channel prior information of the image is then obtained; finally, the preliminary saliency map is fused with the center-dark channel prior information to obtain the final saliency detection map.
  • Experimental results show that the present invention detects more effectively than other methods.
  • The invention detects salient objects more accurately and more robustly. Compared with the prior art, using center-dark channel prior information for saliency detection increases the accuracy of salient object detection and also enhances its robustness.
  • The invention is applicable to more complex scenarios and has a wider range of use; for example, the method can be applied to the field of small target detection and tracking.
  • FIG. 1 is a flow chart of the present invention.
  • FIG. 2 compares, for an embodiment of the present invention, the detection result images obtained with existing methods, the detection result image obtained with the method of the present invention, and the manually annotated ground-truth image;
  • the first column is the input image;
  • the second column is the manually annotated ground-truth image;
  • the third through ninth columns are detection result images obtained by other existing methods;
  • the tenth column is the detection result image of the present invention.
  • Figure 3 shows the application of the invention in the field of small target detection and tracking
  • The invention provides an image salient object detection algorithm based on center-dark channel prior information, which detects salient objects more accurately and more robustly.
  • The present invention first computes a preliminary saliency result from image color, spatial, and depth information; it then obtains the center-dark channel prior information of the image; finally, the preliminary saliency map is fused with the center-dark channel prior information to obtain the final saliency detection map.
  • FIG. 1 is a flow chart of the salient object detection method provided by the present invention, which includes the following steps:
  • Step 2: Use the K-means algorithm to divide the image into K regions, and compute the color saliency value of each sub-region by formula (1):
  • D_o(r_k, r_i) denotes the distance between the coordinate positions of regions k and i;
  • σ is a parameter controlling the range of W_d(r_k).
  • Step 3: In the same way as the color saliency values, compute the depth saliency value of the depth map by formula (3):
  • D_d(r_k, r_i) is the Euclidean distance between regions k and i in depth space.
  • Step 4: Since salient objects are usually located at the center of the image, compute the center and depth weight W_cd(r_k) of region k by formula (4):
  • G(·) denotes Gaussian normalization;
  • ||·|| denotes the Euclidean distance;
  • P_k is the position coordinate of region k;
  • P_o is the coordinate center of the image;
  • N_k is the number of pixels in region k.
  • DW(d_k) is the depth weight, defined as follows:
  • max{d} denotes the maximum depth of the depth map;
  • d_k denotes the depth value of region k;
  • μ is a parameter related to the computed depth map, defined as follows:
  • min{d} denotes the minimum depth of the depth map.
  • Step 5: Use formula (7) to obtain the preliminary saliency detection result S_1(r_k):
  • Step 6: Obtain the center-dark channel prior information of the image.
  • The center prior information S_csp of the image is obtained using the algorithm described in Qin Y, Lu H, Xu Y, et al. Saliency detection via Cellular Automata. IEEE Conference on Computer Vision and Pattern Recognition, 2015: 110-119;
  • Step 7: Use formula (9) to fuse the preliminary saliency detection result with the center-dark channel prior information, obtaining the final saliency detection result:
  • FIG. 2 compares the detection result images obtained with existing methods, the detection result image obtained with the method of the present invention, and the manually annotated ground-truth image: the first column is the input image, the second column is the manually annotated ground-truth image, the third through ninth columns are detection result images obtained by other existing methods, and the tenth column is the detection result image of the present invention.
  • The present invention is applied to the field of small target detection and tracking: the first row is the input video frame sequence, the second row is the center-dark channel prior information of that frame sequence, the third row is the video frame sequence detected by the present algorithm, and the fourth row is the manually annotated ground-truth video frame sequence. The salient object detection algorithm based on center-dark channel prior information provided by the present invention is therefore also applicable to small target detection and tracking.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

Disclosed in the present invention is an image saliency object detection method based on center-dark channel priori information. The method uses color, depth, and distance information to locate and detect the saliency region of an image, obtaining a preliminary detection result for the salient objects in the image, and then uses the center-dark channel priori information proposed in the present invention to optimize the final saliency detection result. The present invention detects salient objects more accurately and more robustly: using center-dark channel priori information for saliency detection increases the accuracy of salient object detection and also enhances its robustness. The present invention solves the problems of low accuracy and low robustness in existing saliency detection and displays the saliency region of an image more accurately, providing accurate and useful information for subsequent applications such as target recognition and classification. The present invention is applicable to many complex situations and has a wide application range.

Description

Image saliency object detection method based on center-dark channel prior information
Technical Field
The present invention relates to the field of image processing technologies, and in particular to a method for detecting salient objects in an image using center-dark channel prior information.
Background Art
When facing a complex scene, the human eye's attention quickly concentrates on a few prominent visual objects, which are processed preferentially; this process is known as visual saliency. Saliency detection exploits this biological mechanism of human vision, using mathematical computation to simulate how the eye processes an image and thereby extract its salient objects. Because the saliency region allows the computational resources for image analysis and synthesis to be allocated preferentially, detecting the saliency region of an image by computation is highly valuable. The extracted saliency maps can be widely used in many computer vision applications, including segmentation of target objects of interest, detection and recognition of target objects, image compression and coding, image retrieval, and content-aware image editing.
Generally speaking, existing saliency detection frameworks are mainly divided into bottom-up and top-down saliency detection methods. Most current methods are bottom-up: they are data-driven and independent of any specific task, whereas top-down methods are consciousness-driven and tied to specific tasks.
Among existing methods, bottom-up saliency detection mostly uses low-level feature information such as color features, distance features, and heuristic saliency features. Although these methods have their own advantages, they are neither accurate nor robust enough on challenging data sets in specific scenarios. To address this problem, and with the advent of 3D image acquisition technology, existing methods have adopted depth information to improve the accuracy of salient object detection. Although depth information can increase accuracy, detection still suffers when a salient object has low depth contrast with its background.
In summary, existing image salient object detection methods lack accuracy and robustness and are prone to false and missed detections, making a precise saliency detection result hard to obtain. This not only causes misdetection of the salient object itself but also introduces error into any application that uses the detection result.
Summary of the Invention
To overcome the above deficiencies of the prior art, the present invention proposes a new image salient object detection method based on center-dark channel prior information, which solves the problems of low accuracy and insufficient robustness in existing saliency detection, displays the saliency region of the image more accurately, and provides precise and useful information for later applications such as target recognition and classification.
The technical solution provided by the invention is as follows:
A method for detecting salient objects in an image based on center-dark channel prior information: color, depth, and distance information are used to locate and detect the saliency region of the image, producing a preliminary detection result for the salient objects in the image; the center-dark channel prior information proposed by the invention is then used to optimize the final saliency detection result. The implementation includes the following steps:
1) Input an image I_o to be detected and its depth map I_d obtained with a Kinect device;
2) Use the K-means algorithm to divide the image I_o into K regions and compute the color saliency value of each region of I_o;
3) In the same way as the color saliency values, compute the depth saliency value of each region in the depth map I_d;
4) Since salient objects are usually located at the center of the image, compute the center and depth weight DW(d_k) of each sub-region k of the depth map I_d;
5) Perform preliminary saliency detection: using the color saliency value of each region in the image to be detected, the depth saliency value of each region in the depth map, and the center and depth weights of the regions, compute the preliminary saliency detection result S_1 by Gaussian normalization;
6) Obtain the center-dark channel prior information of the image, as follows:
First, the center prior information S_csp of the image is obtained using the BSCA algorithm (a saliency detection method based on cellular automata) described in Qin Y, Lu H, Xu Y, et al. Saliency detection via Cellular Automata. IEEE Conference on Computer Vision and Pattern Recognition, 2015: 110-119;
Then, the dark channel prior information S_dcp of the image is obtained using the algorithm (a dark-channel-based image dehazing method) described in Kaiming He, Jian Sun, and Xiaoou Tang. Single image haze removal using dark channel prior. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009: 1956-1963;
Finally, the center-dark channel prior information S_cdcp of the image is obtained by formula (8):
S_cdcp = S_csp · S_dcp    (8)
7) Fuse the preliminary saliency detection result from step 5) with the center-dark channel prior information from step 6) using formula (9) to obtain the final saliency detection result:
[Formula (9) appears only as an image (PCTCN2018078935-appb-000001) in the original.]
where S is the final saliency detection result.
Compared with the prior art, the beneficial effects of the present invention are:
The invention provides an image salient object detection algorithm based on center-dark channel prior information. A preliminary saliency result is first computed from the image's color, spatial, and depth information; the center-dark channel prior information of the image is then obtained; finally, the preliminary saliency map is fused with the center-dark channel prior information to obtain the final saliency detection map. Experimental results show that the present invention detects more effectively than other methods.
The invention detects salient objects more accurately and more robustly. Compared with the prior art, using center-dark channel prior information for saliency detection increases the accuracy of salient object detection and also enhances its robustness. The invention is applicable to more complex scenarios and has a wider range of use; for example, the method can be applied to the field of small target detection and tracking.
Brief Description of the Drawings
FIG. 1 is a flow chart of the method provided by the present invention.
FIG. 2 compares, for an embodiment of the present invention, the detection result images obtained with existing methods, the detection result image obtained with the method of the present invention, and the manually annotated ground-truth image;
the first column is the input image, the second column is the manually annotated ground-truth image, the third through ninth columns are detection result images obtained by other existing methods, and the tenth column is the detection result image of the present invention.
FIG. 3 shows the application of the invention to the field of small target detection and tracking;
the first row is the input video frame sequence, the second row is the center-dark channel prior information of that frame sequence, the third row is the video frame sequence detected by the present algorithm, and the fourth row is the manually annotated ground-truth video frame sequence.
Detailed Description
The invention is further described below through embodiments with reference to the accompanying drawings, without limiting the scope of the invention in any way.
The invention provides an image salient object detection algorithm based on center-dark channel prior information, which detects salient objects more accurately and more robustly. The present invention first computes a preliminary saliency result from image color, spatial, and depth information; it then obtains the center-dark channel prior information of the image; finally, the preliminary saliency map is fused with the center-dark channel prior information to obtain the final saliency detection map. FIG. 1 is a flow chart of the salient object detection method provided by the present invention, which includes the following steps:
Step 1: Input an image I_o to be detected and its depth map I_d obtained with a Kinect device;
Step 2: Use the K-means algorithm to divide the image into K regions, and compute the color saliency value of each sub-region by formula (1):
[Formula (1), the color saliency S_c(r_k), appears only as an image (PCTCN2018078935-appb-000002) in the original.]
where r_k and r_i denote regions k and i respectively, D_c(r_k, r_i) is the Euclidean distance between regions k and i in L*a*b color space, P_i is the proportion of the image area occupied by region i, and W_d(r_k) is defined as follows:
[Formula (2), defining W_d(r_k), appears only as an image (PCTCN2018078935-appb-000003) in the original.]
where D_o(r_k, r_i) denotes the distance between the coordinate positions of regions k and i, and σ is a parameter controlling the range of W_d(r_k).
Step 3: In the same way as the color saliency values, compute the depth saliency value of the depth map by formula (3):
[Formula (3), the depth saliency S_d(r_k), appears only as an image (PCTCN2018078935-appb-000004) in the original.]
where D_d(r_k, r_i) is the Euclidean distance between regions k and i in depth space.
Step 4: Since salient objects are usually located at the center of the image, compute the center and depth weight W_cd(r_k) of region k by formula (4):
[Formula (4), defining W_cd(r_k), appears only as an image (PCTCN2018078935-appb-000005) in the original.]
where G(·) denotes Gaussian normalization, ||·|| denotes the Euclidean distance, P_k is the position coordinate of region k, P_o is the coordinate center of the image, and N_k is the number of pixels in region k. DW(d_k) is the depth weight, defined as follows:
DW(d_k) = (max{d} - d_k)^μ    (5)
where max{d} denotes the maximum depth of the depth map, d_k denotes the depth value of region k, and μ is a parameter related to the computed depth map, defined as follows:
[Formula (6), defining μ, appears only as an image (PCTCN2018078935-appb-000006) in the original.]
where min{d} denotes the minimum depth of the depth map.
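Formulas (4) and (6) are likewise only images in the source. The sketch below assumes μ = 1/(max{d} - min{d}), a choice common in depth-weighted saliency work, and combines a Gaussian of the distance to the image center with DW(d_k) from formula (5); both choices, and the normalization stand-in for G(·), are assumptions:

```python
import numpy as np

def center_depth_weight(labels, depth, sigma_c=0.5):
    # labels: (H, W) region ids 0..K-1; depth: (H, W) depth map
    h, w = labels.shape
    K = int(labels.max()) + 1
    ys, xs = np.mgrid[0:h, 0:w]
    center = np.array([h / 2.0, w / 2.0])                 # P_o
    d_max, d_min = depth.max(), depth.min()
    mu = 1.0 / (d_max - d_min + 1e-9)                     # assumed form of (6)
    diag = np.hypot(h, w)
    wts = np.zeros(K)
    for k in range(K):
        mask = labels == k
        p_k = np.array([ys[mask].mean(), xs[mask].mean()])      # P_k
        dist = np.linalg.norm(p_k - center) / diag              # ||P_k - P_o||, scaled
        dw = max(d_max - depth[mask].mean(), 0.0) ** mu         # DW(d_k), (5)
        wts[k] = np.exp(-dist ** 2 / sigma_c ** 2) * dw         # assumed combination (4)
    return wts / (wts.max() + 1e-9)                       # stand-in for G(.)
```

Regions that are both near the image center and close to the camera receive the largest weight.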
Step 5: Use formula (7) to obtain the preliminary saliency detection result S_1(r_k):
S_1(r_k) = G(S_c(r_k) × W_cd(r_k) + S_d(r_k) × W_cd(r_k))    (7)
Step 6: Obtain the center-dark channel prior information of the image.
First, the center prior information S_csp of the image is obtained using the algorithm described in Qin Y, Lu H, Xu Y, et al. Saliency detection via Cellular Automata. IEEE Conference on Computer Vision and Pattern Recognition, 2015: 110-119;
Then, the dark channel prior information S_dcp of the image is obtained using the algorithm described in Kaiming He, Jian Sun, and Xiaoou Tang. Single image haze removal using dark channel prior. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009: 1956-1963;
Finally, the center-dark channel prior information S_cdcp of the image is obtained by formula (8):
S_cdcp = S_csp · S_dcp    (8)
Step 7: Use formula (9) to fuse the preliminary saliency detection result with the center-dark channel prior information, obtaining the final saliency detection result:
[Formula (9) appears only as an image (PCTCN2018078935-appb-000007) in the original.]
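Formula (9) survives only as an image, so the fusion rule cannot be read off the text; a pointwise product of S_1 with S_cdcp followed by rescaling is one plausible sketch, labeled here as an assumption rather than the patent's formula:

```python
import numpy as np

def final_saliency(s1, s_cdcp):
    # s1: preliminary saliency map; s_cdcp: center-dark channel prior map
    s = s1 * s_cdcp                           # assumed fusion rule for formula (9)
    return (s - s.min()) / (s.max() - s.min() + 1e-9)
```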
In a specific implementation of the present invention, the detection result images obtained by applying existing methods and the method of the present invention to input images, together with the manually annotated ground-truth images, are compared in FIG. 2: the first column is the input image, the second column is the manually annotated ground-truth image, the third through ninth columns are detection result images obtained by other existing methods, and the tenth column is the detection result image of the present invention.
As shown in FIG. 3, the invention is applied to the field of small target detection and tracking: the first row is the input video frame sequence, the second row is the center-dark channel prior information of that frame sequence, the third row is the video frame sequence detected by the present algorithm, and the fourth row is the manually annotated ground-truth video frame sequence. The salient object detection algorithm based on center-dark channel prior information provided by the present invention is therefore also applicable to small target detection and tracking.
It should be noted that the embodiments are disclosed to aid further understanding of the invention, but those skilled in the art will understand that various substitutions and modifications are possible without departing from the spirit and scope of the invention and the appended claims. The invention should therefore not be limited to what the embodiments disclose; the scope of protection is defined by the claims.

Claims (8)

  1. A method for detecting salient objects in an image based on center-dark channel prior information, which locates and detects the salient regions of the image using color, depth, and distance information to obtain a preliminary detection result of the salient objects in the image, and then optimizes this result using the center-dark channel prior information to obtain the final saliency detection result, the method comprising the following steps:
    1) inputting an image to be detected I_o and obtaining a depth map I_d of the image;
    2) dividing the image I_o into K regions and calculating the color saliency value of each region;
    3) dividing the depth map I_d into K regions and calculating the depth saliency value of each region in the depth map;
    4) calculating the center and depth weight DW(d_k) of each sub-region k of the image I_d;
    5) performing preliminary saliency detection: using the color saliency value of each region of the image to be detected I_o, the depth saliency value of each region of the depth map I_d, and the center and depth weight DW(d_k) of each region, calculating a preliminary saliency detection result S_1 by the Gaussian normalization method;
    6) obtaining the center-dark channel prior information of the image, comprising the following process:
    first, obtaining the center prior information S_csp of the image;
    then, obtaining the dark channel prior information S_dcp of the image;
    finally, obtaining the center-dark channel prior information S_cdcp of the image by formula (8):
    S_cdcp = S_csp × S_dcp    (8)
    7) fusing the preliminary saliency detection result obtained in step 5) with the center-dark channel prior information obtained in step 6) using formula (9) to obtain the final saliency detection result:
    Figure PCTCN2018078935-appb-100001
    where S is the final saliency detection result.
  2. The method for detecting salient objects in an image according to claim 1, wherein in step 1) the depth map I_d of the image is obtained using a Kinect device.
  3. The method for detecting salient objects in an image according to claim 1, wherein in step 2) the image is divided into K regions by the K-means algorithm, and the color saliency value S_c(r_k) of each sub-region is calculated by formula (1):
    Figure PCTCN2018078935-appb-100002
    where r_k and r_i denote regions k and i respectively, D_c(r_k, r_i) denotes the Euclidean distance between region k and region i in the L*a*b color space, P_i denotes the proportion of the image area occupied by region i, and W_d(r_k) is defined as in formula (2):
    Figure PCTCN2018078935-appb-100003
    where D_o(r_k, r_i) denotes the distance between the coordinate positions of region k and region i, and σ is a parameter controlling the range of W_d(r_k).
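Since formulas (1) and (2) appear only as images in the filing, the sketch below reconstructs a region-contrast computation from the terms named in the claim (D_c, P_i, W_d, D_o, σ); the exponential form of W_d is an assumption, not the patent's exact definition:

```python
import numpy as np

def color_saliency(lab_means, centers, proportions, sigma=0.4):
    """Region color saliency: contrast D_c in L*a*b space, weighted by
    area proportion P_i and a spatial weight W_d whose range is
    controlled by sigma. W_d's exponential form is assumed here."""
    K = len(lab_means)
    S = np.zeros(K)
    for k in range(K):
        for i in range(K):
            if i == k:
                continue
            D_c = np.linalg.norm(lab_means[k] - lab_means[i])  # color distance
            D_o = np.linalg.norm(centers[k] - centers[i])      # spatial distance
            W_d = np.exp(-D_o / sigma)                         # assumed form of Eq. (2)
            S[k] += proportions[i] * W_d * D_c
    return S
```

Regions that differ strongly in color from large, nearby regions receive the highest scores, which matches the role formula (1) plays in the claim.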
  4. The method for detecting salient objects in an image according to claim 3, wherein in step 3) the depth map I_d is divided into multiple regions by the same method as in step 2), and the depth saliency value S_d(r_k) of the depth map is calculated by formula (3):
    Figure PCTCN2018078935-appb-100004
    where D_d(r_k, r_i) is the Euclidean distance between region k and region i in the depth space.
  5. The method for detecting salient objects in an image according to claim 1, wherein in step 4) the center and depth weight W_cd(r_k) of region k is calculated by formula (4):
    Figure PCTCN2018078935-appb-100005
    where G(·) denotes Gaussian normalization, ||·|| denotes the Euclidean distance operation, P_k is the position coordinate of region k, P_o is the coordinate center of the image, and N_k is the number of pixels in region k; DW(d_k) is the depth weight, defined as in formula (5):
    DW(d_k) = (max{d} − d_k)^μ    (5)
    where max{d} denotes the maximum depth of the depth map, d_k denotes the depth value of region k, and μ is a parameter related to the calculated depth map, defined as in formula (6):
    Figure PCTCN2018078935-appb-100006
    where min{d} denotes the minimum depth of the depth map.
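A sketch of the depth weight of formulas (5) and (6); formula (6) appears only as an image in the filing, so the exponent μ = 1/(max{d} − min{d}) used below is an assumed stand-in consistent with the stated dependence on max{d} and min{d}:

```python
import numpy as np

def depth_weight(region_depths, mu=None):
    """Eq. (5): DW(d_k) = (max{d} - d_k)^mu. When mu is not supplied,
    an assumed image-dependent exponent 1/(max{d} - min{d}) is used in
    place of Eq. (6), which is not legible in this excerpt."""
    d = np.asarray(region_depths, dtype=float)
    dmax, dmin = d.max(), d.min()
    if mu is None:
        mu = 1.0 / (dmax - dmin)  # assumption standing in for Eq. (6)
    return (dmax - d) ** mu
```

Whatever the exact exponent, the effect of formula (5) is that regions closer to the camera (smaller d_k) receive larger weights.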
  6. The method for detecting salient objects in an image according to claim 1, wherein in step 5) the preliminary saliency detection result S_1(r_k) is calculated by formula (7):
    S_1(r_k) = G(S_c(r_k) × W_cd(r_k) + S_d(r_k) × W_cd(r_k))    (7)
    where G(·) denotes Gaussian normalization, S_c(r_k) is the color saliency value of each sub-region, W_cd(r_k) is the center and depth weight of region k, and S_d(r_k) is the depth saliency value of the depth map.
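Formula (7) can be sketched as follows; min-max normalization is used here as a stand-in for the Gaussian normalization G(·), whose exact definition is not given in this excerpt:

```python
import numpy as np

def normalize(x):
    # Stand-in for G(.): rescale to [0, 1]; the patent names Gaussian
    # normalization but does not define it in this excerpt.
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min() + 1e-12)

def preliminary_saliency(S_c, S_d, W_cd):
    # Eq. (7): both the color and depth cues are modulated by the
    # center-depth weight, summed, then normalized.
    return normalize(S_c * W_cd + S_d * W_cd)
```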
  7. The method for detecting salient objects in an image according to claim 1, wherein in step 6) the center prior information S_csp of the image is obtained by a saliency detection method based on cellular automata.
  8. The method for detecting salient objects in an image according to claim 1, wherein in step 6) the dark channel prior information S_dcp of the image is obtained by a dark-channel-based image dehazing method.
PCT/CN2018/078935 2017-07-21 2018-03-14 Image saliency object detection method based on center-dark channel priori information WO2019015344A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710600386.3 2017-07-21
CN201710600386.3A CN107292318B (en) 2017-07-21 2017-07-21 Image significance object detection method based on center dark channel prior information

Publications (1)

Publication Number Publication Date
WO2019015344A1 true WO2019015344A1 (en) 2019-01-24

Family

ID=60101984

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/078935 WO2019015344A1 (en) 2017-07-21 2018-03-14 Image saliency object detection method based on center-dark channel priori information

Country Status (2)

Country Link
CN (1) CN107292318B (en)
WO (1) WO2019015344A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110458178A (en) * 2019-08-12 2019-11-15 浙江科技学院 The multi-modal RGB-D conspicuousness object detection method spliced more
CN111524090A (en) * 2020-01-13 2020-08-11 镇江优瞳智能科技有限公司 Depth prediction image-based RGB-D significance detection method
CN112651406A (en) * 2020-12-18 2021-04-13 浙江大学 Depth perception and multi-mode automatic fusion RGB-D significance target detection method
CN114842308A (en) * 2022-03-16 2022-08-02 电子科技大学 Method for establishing target prejudgment optimization model based on full-feature fusion

Families Citing this family (5)

Publication number Priority date Publication date Assignee Title
CN107292318B (en) * 2017-07-21 2019-08-09 北京大学深圳研究生院 Image significance object detection method based on center dark channel prior information
CN107886533B (en) * 2017-10-26 2021-05-04 深圳大学 Method, device and equipment for detecting visual saliency of three-dimensional image and storage medium
CN109410171B (en) * 2018-09-14 2022-02-18 安徽三联学院 Target significance detection method for rainy image
CN112529896A (en) * 2020-12-24 2021-03-19 山东师范大学 Infrared small target detection method and system based on dark channel prior
CN112861880B (en) * 2021-03-05 2021-12-07 江苏实达迪美数据处理有限公司 Weak supervision RGBD image saliency detection method and system based on image classification

Citations (4)

Publication number Priority date Publication date Assignee Title
CN104050674A (en) * 2014-06-27 2014-09-17 中国科学院自动化研究所 Salient region detection method and device
CN105404888A (en) * 2015-11-16 2016-03-16 浙江大学 Saliency object detection method integrated with color and depth information
CN105898278A (en) * 2016-05-26 2016-08-24 杭州电子科技大学 Stereoscopic video saliency detection method based on binocular multidimensional perception characteristic
CN107292318A (en) * 2017-07-21 2017-10-24 北京大学深圳研究生院 Image significance object detection method based on center dark channel prior information

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
JP6330385B2 (en) * 2014-03-13 2018-05-30 オムロン株式会社 Image processing apparatus, image processing method, and program
CN104574375B (en) * 2014-12-23 2017-05-03 浙江大学 Image significance detection method combining color and depth information

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN104050674A (en) * 2014-06-27 2014-09-17 中国科学院自动化研究所 Salient region detection method and device
CN105404888A (en) * 2015-11-16 2016-03-16 浙江大学 Saliency object detection method integrated with color and depth information
CN105898278A (en) * 2016-05-26 2016-08-24 杭州电子科技大学 Stereoscopic video saliency detection method based on binocular multidimensional perception characteristic
CN107292318A (en) * 2017-07-21 2017-10-24 北京大学深圳研究生院 Image significance object detection method based on center dark channel prior information

Non-Patent Citations (1)

Title
ZHU, QINGSONG ET AL.: "An Adaptive and Effective Single Image Dehazing Algorithm Based on Dark Channel Prior", Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO), Shenzhen, China, 12–14 December 2013, pages 1796–1800, DOI: 10.1109/ROBIO.2013.6739728 *

Cited By (6)

Publication number Priority date Publication date Assignee Title
CN110458178A (en) * 2019-08-12 2019-11-15 浙江科技学院 The multi-modal RGB-D conspicuousness object detection method spliced more
CN110458178B (en) * 2019-08-12 2023-09-22 浙江科技学院 Multi-mode multi-spliced RGB-D significance target detection method
CN111524090A (en) * 2020-01-13 2020-08-11 镇江优瞳智能科技有限公司 Depth prediction image-based RGB-D significance detection method
CN112651406A (en) * 2020-12-18 2021-04-13 浙江大学 Depth perception and multi-mode automatic fusion RGB-D significance target detection method
CN112651406B (en) * 2020-12-18 2022-08-09 浙江大学 Depth perception and multi-mode automatic fusion RGB-D significance target detection method
CN114842308A (en) * 2022-03-16 2022-08-02 电子科技大学 Method for establishing target prejudgment optimization model based on full-feature fusion

Also Published As

Publication number Publication date
CN107292318A (en) 2017-10-24
CN107292318B (en) 2019-08-09


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18835489

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18835489

Country of ref document: EP

Kind code of ref document: A1