CN107085848A - Method for detecting the saliency of an RGB-D (red, green, blue plus depth) image - Google Patents


Info

Publication number
CN107085848A
CN107085848A (application CN201710263003.8A)
Authority
CN
China
Prior art keywords
depth
salient
rgb
rgb color
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710263003.8A
Other languages
Chinese (zh)
Inventor
邵婷
刘政怡
郭星
李炜
吴建国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui University
Original Assignee
Anhui University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui University filed Critical Anhui University
Priority to CN201710263003.8A priority Critical patent/CN107085848A/en
Publication of CN107085848A publication Critical patent/CN107085848A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20004 Adaptive image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for detecting the saliency of an RGB-D image, belonging to the technical field of computer vision, and comprising the following steps: performing saliency detection on the depth map of the RGB-D image with a saliency detection algorithm to obtain a depth saliency map S_d; using the depth saliency map S_d as a feature to strengthen the manifold ranking of the RGB color image in the RGB-D image, obtaining a saliency map S_c of the RGB color image; using the saliency map S_c of the RGB color image to guide the manifold ranking of the depth map, obtaining a saliency map S_d' of the depth map; and fusing the saliency map S_c of the RGB color image with the saliency map S_d' of the depth map to obtain the saliency map S of the RGB-D image, thereby detecting the saliency of the RGB-D image. The depth saliency map obtained by preprocessing replaces the original coarse depth map as the depth feature and is fused with the other features when computing the saliency of the RGB-D image, which improves the accuracy of RGB-D saliency detection.

Description

Method for detecting the saliency of RGB-D images
Technical field
The present invention relates to the technical field of computer vision, and more particularly to a method for computing the saliency of RGB-D images.
Background technology
At present, saliency detection in computer vision is attracting increasing attention. Saliency detection can be applied to many visual tasks, such as image classification, object recognition, image segmentation and object relocation. Extending two-dimensional RGB color images to three-dimensional RGB-D images adds depth information, which therefore plays an important role in computing image saliency.
In 2014, Cheng et al., in "Depth enhanced saliency detection method", contrasted visual cues in color space and depth space separately, extended the two-dimensional center bias to three dimensions, and fused the three features to obtain the final saliency value. In recent years, salient-object detection methods based on depth correlation, fused priors and optimization have achieved excellent results. In 2015, Ren et al., in "Exploiting global priors for RGB-D saliency detection", used surface-orientation priors and background priors to detect salient objects, and optimized the saliency maps with the PageRank and MRF algorithms. In 2015, Xue et al., in "RGB-D saliency detection via mutual guided manifold ranking", assumed the four corners of the RGB color image to be background, performed saliency detection on the RGB color image with the common features of color and depth, used the result as foreground seed nodes to apply manifold ranking to the depth map, and finally fused the two to complete RGB-D saliency detection. In 2016, Guo et al., in "Salient object detection for RGB-D image via saliency evolution", fused color contrast with depth contrast and obtained the final saliency map by iterative diffusion.
However, the above existing methods directly apply the original depth map as a feature to RGB-D saliency detection. Because the original depth map is coarse, it is difficult for it to accurately express the degree of depth difference between superpixels, so the accuracy of the final RGB-D saliency result suffers.
Summary of the invention
The object of the present invention is to provide a method for detecting the saliency of RGB-D images, so as to improve the accuracy of RGB-D saliency computation.
To achieve the above object, the present invention adopts the following technical solution. A method for detecting the saliency of an RGB-D image is provided, comprising:
performing saliency detection on the depth map in the RGB-D image with a saliency detection algorithm, obtaining a depth saliency map S_d;
using the depth saliency map S_d as a feature to strengthen the manifold ranking of the RGB color image in the RGB-D image, obtaining a saliency map S_c of the RGB color image;
using the saliency map S_c of the RGB color image to guide the manifold ranking of the depth map, obtaining a saliency map S_d' of the depth map;
fusing the saliency map S_c of the RGB color image with the saliency map S_d' of the depth map, obtaining the saliency map S of the RGB-D image, so as to detect the saliency of the RGB-D image.
Further, using the depth saliency map S_d as a feature to strengthen the manifold ranking of the RGB color image in the RGB-D image and obtain the saliency map S_c of the RGB color image specifically includes:
constructing an undirected graph G_c = (V_c, E_c) of the RGB color image, where V_c is the vertex set, namely the set of superpixels obtained by segmenting the RGB color image with a segmentation algorithm, and E_c is the edge set weighted by the affinity matrix W_c = [w_ij^c], in which w_ij^c denotes the connection weight of the edge between two vertices i and j;
defining a feature vector f_c = [c, D, n]^T, where c is the mean of a superpixel of the segmented RGB color image in the CIELAB color space, D is the mean, in the depth saliency map S_d, of the corresponding superpixel formed by mapping that superpixel directly onto the depth map of the RGB-D image, and n is the mean surface normal of the superpixel; the weight w_ij^c is defined in terms of these features, with constants s_cc, s_cD and s_cn controlling the weight of each feature;
using the four corner regions of the RGB color image as background seeds and applying the manifold ranking algorithm to the correlations between the superpixel nodes of the segmented RGB color image, obtaining the saliency map S_c of the RGB color image.
Further, using the saliency map S_c of the RGB color image to guide the manifold ranking of the depth map and obtain the saliency map S_d' of the depth map specifically includes:
constructing an undirected graph G_d = (V_d, E_d) of the depth map, where V_d is the set of vertices corresponding to the superpixels formed by mapping the superpixels of the segmented RGB color image directly onto the depth map of the RGB-D image, and E_d is the edge set weighted by the affinity matrix W_d = [w_ij^d], in which w_ij^d denotes the connection weight of the edge between two vertices i and j;
defining a feature vector f_d = [D, n]^T, where D is the mean, in the depth saliency map S_d, of the superpixel formed by mapping a superpixel of the segmented RGB color image directly onto the depth map, and n is the mean surface normal of the superpixel; the weight w_ij^d is defined in terms of these features, with constants s_dD and s_dn controlling the weight of each feature;
binarizing the saliency map S_c of the RGB color image with an automatic thresholding algorithm, obtaining the salient region of the RGB color image;
dividing the salient region of the RGB color image arbitrarily into t equal parts and, for each part, applying the manifold ranking algorithm to the correlations between the superpixel nodes mapped onto the depth map, obtaining the saliency map S_d' of the depth map, where t is a constant.
Further, fusing the saliency map S_c of the RGB color image with the saliency map S_d' of the depth map to obtain the saliency map S of the RGB-D image specifically includes:
fusing the saliency map S_c with the saliency map S_d' according to the formula
S = αS_c + S_d',
where α is a constant control parameter.
Further, the saliency map S_c of the RGB color image is computed as
S_c = (1/j_c) ⊗_i S_c^i,
where S_c^i is the saliency map produced using the superpixels of the i-th corner region of the RGB color image as background seeds, j_c is a normalization parameter, and ⊗ denotes multiplicative fusion.
Further, the saliency map S_d' of the depth map is computed as
S_d' = (1/j_d) ⊕_i S_d^i,
where S_d^i is the saliency map produced using the superpixels of the i-th equal part of the salient region of the RGB color image as foreground seeds, j_d is a normalization parameter, and ⊕ denotes additive fusion.
Compared with the prior art, the present invention has the following technical effects. An RGB-D image comprises an RGB color image and a depth map. This embodiment performs saliency detection on the depth map of the RGB-D image to obtain the depth saliency map S_d, uses S_d to guide the manifold ranking of the RGB color image to obtain its saliency map S_c, and in turn uses S_c to guide the manifold ranking of the depth map to obtain its saliency map S_d'. S_c and S_d' are then fused. By replacing the original depth map with the depth saliency map S_d as the depth feature for measuring depth difference, depth information is used more accurately and the accuracy of RGB-D saliency detection is improved.
Brief description of the drawings
The embodiments of the present invention are described in detail below with reference to the accompanying drawings:
Fig. 1 is a flow chart of the RGB-D saliency detection method of the present invention;
Fig. 2 compares the PR curves, on the dataset RGBD1000, of the saliency detection results of the method provided by the present invention and of the MGMR method using the original depth map;
Fig. 3 compares, as bar charts, the precision, recall and F-measure on the dataset RGBD1000 of the saliency detection results of the method provided by the present invention and of the MGMR method using the original depth map;
Fig. 4 compares the PR curves, on the dataset NJU2000DS, of the saliency detection results of the method provided by the present invention and of the MGMR method using the original depth map;
Fig. 5 compares, as bar charts, the precision, recall and F-measure on the dataset NJU2000DS of the saliency detection results of the method provided by the present invention and of the MGMR method using the original depth map.
Detailed description of the embodiments
To further illustrate the features of the present invention, please refer to the following detailed description and the accompanying drawings. The drawings are for reference and discussion only and are not intended to limit the scope of protection of the present invention.
As shown in Fig. 1, this embodiment discloses a method for detecting the saliency of an RGB-D image, comprising the following steps S1 to S4.
S1: performing saliency detection on the depth map in the RGB-D image with a saliency detection algorithm, obtaining the depth saliency map S_d.
Specifically, the saliency detection algorithm in this embodiment includes, but is not limited to, the LBE method proposed in "Local Background Enclosure for RGB-D Salient Object Detection", which is used to process the depth map in the RGB-D image.
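As a rough illustration of this preprocessing step (a simplified global depth-contrast stand-in, not the LBE algorithm from the cited paper), a depth saliency map over precomputed superpixel labels might be sketched as:

```python
import numpy as np

def depth_saliency(depth, labels):
    """Score each superpixel by its global mean-depth contrast, so regions
    that stand out in depth receive high saliency. A toy stand-in for the
    preprocessing step, not the LBE algorithm from the cited paper."""
    ids = np.unique(labels)
    means = np.array([depth[labels == i].mean() for i in ids])
    # global contrast: sum of absolute differences of superpixel mean depths
    contrast = np.abs(means[:, None] - means[None, :]).sum(axis=1)
    contrast = (contrast - contrast.min()) / (np.ptp(contrast) + 1e-12)
    sal = np.zeros(depth.shape, dtype=float)
    for i, cid in enumerate(ids):
        sal[labels == cid] = contrast[i]
    return sal
```

Any per-pixel depth-saliency method (such as LBE) could be substituted here; the pipeline only requires some depth saliency map S_d as input to the later steps.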
S2: using the depth saliency map S_d as a feature to strengthen the manifold ranking of the RGB color image in the RGB-D image, obtaining the saliency map S_c of the RGB color image.
S3: using the saliency map S_c of the RGB color image to guide the manifold ranking of the depth map, obtaining the saliency map S_d' of the depth map.
S4: fusing the saliency map S_c of the RGB color image with the saliency map S_d' of the depth map, obtaining the saliency map S of the RGB-D image, so as to detect the saliency of the RGB-D image.
Further, step S2 ("using the depth saliency map S_d as a feature to strengthen the manifold ranking of the RGB color image in the RGB-D image, obtaining the saliency map S_c of the RGB color image") specifically includes the following sub-steps:
constructing an undirected graph G_c = (V_c, E_c) of the RGB color image, where V_c is the vertex set, namely the set of superpixels obtained by segmenting the RGB color image with a segmentation algorithm (Simple Linear Iterative Clustering, SLIC), and E_c is the edge set weighted by the affinity matrix W_c = [w_ij^c], in which w_ij^c denotes the connection weight of the edge between two vertices i and j;
defining a feature vector f_c = [c, D, n]^T, where c is the mean of a superpixel of the segmented RGB color image in the CIELAB color space, D is the mean, in the depth saliency map S_d, of the corresponding superpixel formed by mapping that superpixel directly onto the depth map of the RGB-D image, n is the mean surface normal of the superpixel, and T denotes transposition.
The weight w_ij^c is defined in terms of these features, with constants s_cc, s_cD and s_cn controlling the weight of each feature.
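The exact weight formula appears in the original only as an image; a common Gaussian affinity of the form w_ij = exp(-||f_i - f_j||² / σ²), which manifold-ranking saliency methods typically use, can be sketched as follows (the single σ here is a simplifying stand-in for the per-feature constants s_cc, s_cD, s_cn):

```python
import numpy as np

def affinity(features, sigma=0.1):
    """Gaussian edge weights w_ij = exp(-||f_i - f_j||^2 / sigma^2) between
    superpixel feature vectors (rows of `features`)."""
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / sigma ** 2)
    np.fill_diagonal(w, 0.0)  # an undirected graph with no self-loops
    return w
```

In practice the rows of `features` would be the vectors f_c = [c, D, n] computed per superpixel, and the dense matrix could be masked to spatial neighbors only.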
Using the four corner regions of the RGB color image as background seeds, the manifold ranking algorithm is applied to the correlations between the superpixel nodes of the segmented RGB color image, obtaining the saliency map S_c of the RGB color image.
Specifically, the saliency map S_c of the RGB color image is computed as follows. First the graph model of the RGB color image, the undirected graph G_c = (V_c, E_c), is constructed; then ranking is performed from the background seeds of the graph model: the superpixels of the four corner regions of the RGB color image are assumed to be background seeds, the manifold ranking algorithm is applied to the correlations between the superpixel nodes, and the resulting maps are multiplied and normalized, giving the saliency map S_c of the RGB color image:
S_c = (1/j_c) ⊗_i S_c^i,
where S_c^i is the saliency map produced using the superpixels of the i-th corner region of the RGB color image as background seeds, j_c is a normalization parameter, and ⊗ denotes multiplicative fusion.
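The manifold ranking itself is standard (the closed form f* = (D - αW)⁻¹y of Zhou et al., widely used in ranking-based saliency work); a minimal sketch, assuming the affinity matrix and the seed indices are already available:

```python
import numpy as np

def manifold_rank(w, seeds, alpha=0.99):
    """Closed-form manifold ranking f* = (D - alpha*W)^(-1) y, where D is the
    degree matrix of the affinity matrix W and y marks the seed nodes."""
    d = np.diag(w.sum(axis=1))
    y = np.zeros(len(w))
    y[list(seeds)] = 1.0
    f = np.linalg.solve(d - alpha * w, y)
    return f / (f.max() + 1e-12)  # normalize scores to [0, 1]
```

Ranking from each corner's background seeds gives one map per corner; multiplying the four maps and normalizing implements the fusion described above (with background-seeded scores typically inverted so that high values mean salient).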
Further, step S3 ("using the saliency map S_c of the RGB color image to guide the manifold ranking of the depth map, obtaining the saliency map S_d' of the depth map") specifically includes the following sub-steps:
constructing an undirected graph G_d = (V_d, E_d) of the depth map, where V_d is the set of vertices corresponding to the superpixels formed by mapping the superpixels of the segmented RGB color image directly onto the depth map of the RGB-D image, and E_d is the edge set weighted by the affinity matrix W_d = [w_ij^d], in which w_ij^d denotes the connection weight of the edge between two vertices i and j;
defining a feature vector f_d = [D, n]^T, where D is the mean, in the depth saliency map S_d, of the superpixel formed by mapping a superpixel of the segmented RGB color image directly onto the depth map, and n is the mean surface normal of the superpixel.
The weight w_ij^d is defined in terms of these features, with constants s_dD and s_dn controlling the weight of each feature.
Using an automatic thresholding algorithm (Otsu's method, which maximizes the between-class variance), the saliency map S_c of the RGB color image is binarized, obtaining the salient region of the RGB color image.
The salient region of the RGB color image is divided arbitrarily into t equal parts, and for each part the manifold ranking algorithm is applied to the correlations between the superpixel nodes mapped onto the depth map, obtaining the saliency map S_d' of the depth map, where t is a constant.
Specifically, t may take the value 4 in this embodiment. The saliency map S_d' of the depth map is computed as follows. First the graph model of the depth map, the undirected graph G_d = (V_d, E_d), is constructed; then the saliency map S_c of the RGB color image guides the ranking of the depth map: the salient region of the RGB color image is divided arbitrarily into four equal parts, the superpixels in each part are used as foreground seeds to produce a corresponding saliency map, and the maps produced by the parts are added and normalized, giving the saliency map S_d' of the depth map:
S_d' = (1/j_d) ⊕_i S_d^i,
where S_d^i is the saliency map produced using the superpixels of the i-th equal part of the salient region of the RGB color image as foreground seeds, j_d is a normalization parameter, and ⊕ denotes additive fusion.
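The binarize-and-split step can be sketched as follows, with a minimal Otsu threshold over per-superpixel scores; splitting by index into t groups is one arbitrary equal split of the kind the text allows:

```python
import numpy as np

def otsu_threshold(values, bins=64):
    """Minimal Otsu threshold: maximize the between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for k in range(1, bins):
        w0, w1 = p[:k].sum(), p[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:k] * centers[:k]).sum() / w0
        m1 = (p[k:] * centers[k:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[k - 1]
    return best_t

def foreground_seed_groups(scores, t=4):
    """Superpixels above the Otsu threshold, split into t roughly equal groups
    to serve as the t sets of foreground seeds."""
    salient = np.flatnonzero(scores > otsu_threshold(scores))
    return np.array_split(salient, t)
```

Each group of indices would then seed one manifold-ranking pass on the depth-map graph, and the t resulting maps would be added and normalized as described.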
Further, step S4 ("fusing the saliency map S_c of the RGB color image with the saliency map S_d' of the depth map, obtaining the saliency map S of the RGB-D image, so as to detect the saliency of the RGB-D image") specifically includes:
fusing the saliency map S_c with the saliency map S_d' according to the formula
S = αS_c + S_d',
where α is a constant control parameter.
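The final fusion is a simple weighted sum; a minimal sketch (the value of the control constant α is not specified in the patent, and 0.5 below is only a placeholder):

```python
import numpy as np

def fuse(sc, sd_prime, alpha=0.5):
    """S = alpha * S_c + S_d', renormalized to [0, 1]; alpha = 0.5 is only a
    placeholder for the unspecified control constant."""
    s = alpha * np.asarray(sc, dtype=float) + np.asarray(sd_prime, dtype=float)
    return (s - s.min()) / (np.ptp(s) + 1e-12)
```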
It should be noted that the inventive idea provided by this embodiment is to replace the original coarse depth map with the preprocessed depth saliency map as the depth feature for measuring depth difference, to combine the depth saliency map S_d with the color features into a composite feature, and to construct the affinity matrix used in computing the salient objects, which effectively improves the accuracy of saliency detection. As shown in Figs. 2-5, saliency detection on the datasets RGBD-1000 and NJU2000DS demonstrates that using the depth saliency map S_d as a feature in this embodiment strengthens RGB-D saliency detection and effectively improves its accuracy.
Specifically, on the datasets RGBD-1000 and NJU2000DS, the saliency detection method provided by this embodiment is denoted OURS, and the method of Xue et al. in "RGB-D saliency detection via mutual guided manifold ranking", which uses the original depth map as a feature, is denoted MGMR; each performs saliency detection on the RGB-D images. As can be seen from Figs. 2 and 4, the precision-recall curve obtained with the saliency detection method of this embodiment lies above that obtained with the MGMR method. As can be seen from Figs. 3 and 5, the precision, recall and F-measure obtained with the saliency detection method disclosed by this embodiment are all higher than those obtained with the MGMR method. Therefore, the saliency detection method disclosed by this embodiment effectively improves the accuracy of saliency detection.
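The precision/recall/F-measure comparison described above can be reproduced at a single threshold as follows (β² = 0.3 is the usual convention in the saliency literature; the patent does not state the value it uses):

```python
import numpy as np

def pr_f(sal, gt, thresh=0.5, beta2=0.3):
    """Precision, recall and F-measure of a thresholded saliency map against a
    boolean ground-truth mask; beta2 = 0.3 follows the saliency literature."""
    pred = sal >= thresh
    tp = np.logical_and(pred, gt).sum()
    prec = tp / max(pred.sum(), 1)
    rec = tp / max(gt.sum(), 1)
    f = (1 + beta2) * prec * rec / max(beta2 * prec + rec, 1e-12)
    return prec, rec, f
```

Sweeping `thresh` over [0, 1] and plotting (recall, precision) pairs yields the PR curves of Figs. 2 and 4.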
The foregoing is only a preferred embodiment of the present invention and is not intended to limit the invention. Any modification, equivalent substitution, improvement, etc. made within the spirit and principles of the present invention shall be included in the scope of protection of the present invention.

Claims (6)

1. A method for detecting the saliency of an RGB-D image, characterized by comprising:
performing saliency detection on the depth map in the RGB-D image with a saliency detection algorithm, obtaining a depth saliency map S_d;
using the depth saliency map S_d as a feature to strengthen the manifold ranking of the RGB color image in the RGB-D image, obtaining a saliency map S_c of the RGB color image;
using the saliency map S_c of the RGB color image to guide the manifold ranking of the depth map, obtaining a saliency map S_d' of the depth map;
fusing the saliency map S_c of the RGB color image with the saliency map S_d' of the depth map, obtaining the saliency map S of the RGB-D image, so as to detect the saliency of the RGB-D image.
2. The method according to claim 1, characterized in that using the depth saliency map S_d as a feature to strengthen the manifold ranking of the RGB color image in the RGB-D image and obtain the saliency map S_c of the RGB color image specifically includes:
constructing an undirected graph G_c = (V_c, E_c) of the RGB color image, where V_c is the vertex set, namely the set of superpixels obtained by segmenting the RGB color image with a segmentation algorithm, and E_c is the edge set weighted by the affinity matrix W_c = [w_ij^c], in which w_ij^c denotes the connection weight of the edge between two vertices i and j;
defining a feature vector f_c = [c, D, n]^T, where c is the mean of a superpixel of the segmented RGB color image in the CIELAB color space, D is the mean, in the depth saliency map S_d, of the corresponding superpixel formed by mapping that superpixel directly onto the depth map of the RGB-D image, and n is the mean surface normal of the superpixel, the weight w_ij^c being defined in terms of these features with constants s_cc, s_cD and s_cn controlling the weight of each feature;
using the four corner regions of the RGB color image as background seeds and applying the manifold ranking algorithm to the correlations between the superpixel nodes of the segmented RGB color image, obtaining the saliency map S_c of the RGB color image.
3. The method according to claim 1, characterized in that using the saliency map S_c of the RGB color image to guide the manifold ranking of the depth map and obtain the saliency map S_d' of the depth map specifically includes:
constructing an undirected graph G_d = (V_d, E_d) of the depth map, where V_d is the set of vertices corresponding to the superpixels formed by mapping the superpixels of the segmented RGB color image directly onto the depth map of the RGB-D image, and E_d is the edge set weighted by the affinity matrix W_d = [w_ij^d], in which w_ij^d denotes the connection weight of the edge between two vertices i and j;
defining a feature vector f_d = [D, n]^T, where D is the mean, in the depth saliency map S_d, of the superpixel formed by mapping a superpixel of the segmented RGB color image directly onto the depth map, and n is the mean surface normal of the superpixel, the weight w_ij^d being defined in terms of these features with constants s_dD and s_dn controlling the weight of each feature;
binarizing the saliency map S_c of the RGB color image with an automatic thresholding algorithm, obtaining the salient region of the RGB color image;
dividing the salient region of the RGB color image arbitrarily into t equal parts and, for each part, applying the manifold ranking algorithm to the correlations between the superpixel nodes mapped onto the depth map, obtaining the saliency map S_d' of the depth map, where t is a constant.
4. The method according to claim 1, characterized in that fusing the saliency map S_c of the RGB color image with the saliency map S_d' of the depth map to obtain the saliency map S of the RGB-D image specifically includes:
fusing the saliency map S_c with the saliency map S_d' according to the formula
S = αS_c + S_d',
where α is a constant control parameter.
5. The method according to claim 2, characterized in that the saliency map S_c of the RGB color image is computed as
S_c = (1/j_c) ⊗_i S_c^i,
where S_c^i is the saliency map produced using the superpixels of the i-th corner region of the RGB color image as background seeds, j_c is a normalization parameter, and ⊗ denotes multiplicative fusion.
6. The method according to claim 3, characterized in that the saliency map S_d' of the depth map is computed as
S_d' = (1/j_d) ⊕_i S_d^i,
where S_d^i is the saliency map produced using the superpixels of the i-th equal part of the salient region of the RGB color image as foreground seeds, j_d is a normalization parameter, and ⊕ denotes additive fusion.
CN201710263003.8A 2017-04-20 2017-04-20 Method for detecting significance of RGB-D (Red, Green and blue-D) image Pending CN107085848A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710263003.8A CN107085848A (en) 2017-04-20 2017-04-20 Method for detecting significance of RGB-D (Red, Green and blue-D) image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710263003.8A CN107085848A (en) 2017-04-20 2017-04-20 Method for detecting significance of RGB-D (Red, Green and blue-D) image

Publications (1)

Publication Number Publication Date
CN107085848A true CN107085848A (en) 2017-08-22

Family

ID=59612872

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710263003.8A Pending CN107085848A (en) 2017-04-20 2017-04-20 Method for detecting significance of RGB-D (Red, Green and blue-D) image

Country Status (1)

Country Link
CN (1) CN107085848A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107909078A * 2017-10-11 2018-04-13 天津大学 Inter-image saliency detection method
CN108345892A * 2018-01-03 2018-07-31 深圳大学 Stereo-image saliency detection method, apparatus, device and storage medium
CN108447060A * 2018-01-29 2018-08-24 上海数迹智能科技有限公司 Foreground/background separation method based on RGB-D images and separation device therefor
CN108805841A * 2018-06-12 2018-11-13 西安交通大学 Depth map recovery and view synthesis optimization method based on color-image guidance
CN109598268A * 2018-11-23 2019-04-09 安徽大学 RGB-D salient object detection method based on a single-stream deep network
CN109712105A * 2018-12-24 2019-05-03 浙江大学 Image salient object detection method combining color and depth information
CN110189294A * 2019-04-15 2019-08-30 杭州电子科技大学 RGB-D image saliency detection method based on depth reliability analysis
CN114627116A * 2022-05-13 2022-06-14 启东新朋莱纺织科技有限公司 Fabric defect identification method and system based on artificial intelligence
CN117237343A * 2023-11-13 2023-12-15 安徽大学 Semi-supervised RGB-D image mirror detection method, storage medium and computer equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104966286A * 2015-06-04 2015-10-07 电子科技大学 3D video saliency detection method
CN105447873A * 2015-12-07 2016-03-30 天津大学 RGB-D salient object detection method based on random forest learning
CN105513070A * 2015-12-07 2016-04-20 天津大学 RGB-D salient object detection method based on foreground and background optimization
CN105869173A * 2016-04-19 2016-08-17 天津大学 Stereoscopic vision saliency detection method
CN105894502A * 2016-03-30 2016-08-24 浙江大学 RGB-D image saliency detection method based on hypergraph model
CN106373162A * 2015-07-22 2017-02-01 南京大学 Salient object detection method based on saliency fusion and propagation

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104966286A * 2015-06-04 2015-10-07 电子科技大学 3D video saliency detection method
CN106373162A * 2015-07-22 2017-02-01 南京大学 Salient object detection method based on saliency fusion and propagation
CN105447873A * 2015-12-07 2016-03-30 天津大学 RGB-D salient object detection method based on random forest learning
CN105513070A * 2015-12-07 2016-04-20 天津大学 RGB-D salient object detection method based on foreground and background optimization
CN105894502A * 2016-03-30 2016-08-24 浙江大学 RGB-D image saliency detection method based on hypergraph model
CN105869173A * 2016-04-19 2016-08-17 天津大学 Stereoscopic vision saliency detection method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
DAVID FENG et al.: "Local Background Enclosure for RGB-D Salient Object Detection", 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) *
HAOYANG XUE et al.: "RGB-D Saliency Detection via Mutual Guided Manifold Ranking", 2015 IEEE International Conference on Image Processing (ICIP) *
ZHANG Qing et al.: "Improved salient object detection algorithm based on graph-based manifold ranking", Computer Engineering and Applications *
HUANG Zichao et al.: "RGB-D saliency detection with feature fusion and S-D probability correction", Journal of Image and Graphics *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107909078A (en) * 2017-10-11 2018-04-13 天津大学 Inter-graph saliency detection method
CN107909078B (en) * 2017-10-11 2021-04-16 天津大学 Inter-graph saliency detection method
CN108345892A (en) * 2018-01-03 2018-07-31 深圳大学 Stereoscopic image saliency detection method, apparatus, device and storage medium
CN108447060A (en) * 2018-01-29 2018-08-24 上海数迹智能科技有限公司 Foreground and background separation method based on RGB-D images and foreground and background separation device thereof
CN108447060B (en) * 2018-01-29 2021-07-09 上海数迹智能科技有限公司 Foreground and background separation method based on RGB-D images and foreground and background separation device thereof
CN108805841B (en) * 2018-06-12 2021-01-19 西安交通大学 Depth map recovery and viewpoint synthesis optimization method based on color map guidance
CN108805841A (en) * 2018-06-12 2018-11-13 西安交通大学 Depth map recovery and viewpoint synthesis optimization method based on color map guidance
CN109598268A (en) * 2018-11-23 2019-04-09 安徽大学 RGB-D salient object detection method based on a single-stream deep network
CN109598268B (en) * 2018-11-23 2021-08-17 安徽大学 RGB-D salient object detection method based on a single-stream deep network
CN109712105A (en) * 2018-12-24 2019-05-03 浙江大学 Image salient object detection method combining color and depth information
CN110189294A (en) * 2019-04-15 2019-08-30 杭州电子科技大学 RGB-D image saliency detection method based on depth confidence analysis
CN110189294B (en) * 2019-04-15 2021-05-07 杭州电子科技大学 RGB-D image saliency detection method based on depth confidence analysis
CN114627116A (en) * 2022-05-13 2022-06-14 启东新朋莱纺织科技有限公司 Fabric defect identification method and system based on artificial intelligence
CN114627116B (en) * 2022-05-13 2022-07-15 启东新朋莱纺织科技有限公司 Fabric defect identification method and system based on artificial intelligence
CN117237343A (en) * 2023-11-13 2023-12-15 安徽大学 Semi-supervised RGB-D image mirror detection method, storage medium and computer equipment
CN117237343B (en) * 2023-11-13 2024-01-30 安徽大学 Semi-supervised RGB-D image mirror detection method, storage medium and computer equipment

Similar Documents

Publication Publication Date Title
CN107085848A (en) Method for detecting significance of RGB-D (Red, Green and blue-D) image
Pavithra et al. An efficient framework for image retrieval using color, texture and edge features
CN109522908B (en) Image significance detection method based on region label fusion
CN104835175B (en) Object detection method in a nuclear environment based on visual attention mechanism
Manno-Kovacs et al. Orientation-selective building detection in aerial images
Recky et al. Windows detection using k-means in cie-lab color space
CN104850850B (en) Binocular stereo vision image feature extraction method combining shape and color
CN108537239B (en) Method for detecting image saliency target
CN110232387B (en) Different-source image matching method based on KAZE-HOG algorithm
Wang et al. Multifocus image fusion using convolutional neural networks in the discrete wavelet transform domain
TW201005673A (en) Example-based two-dimensional to three-dimensional image conversion method, computer readable medium therefor, and system
CN106228544A (en) Saliency detection method based on sparse representation and label propagation
CN113392856B (en) Image forgery detection device and method
CN107067037B (en) Method for positioning image foreground using the LLC criterion
Cheng et al. Efficient sea–land segmentation using seeds learning and edge directed graph cut
CN108182705A (en) Three-dimensional coordinate localization method based on machine vision
CN108388901B (en) Co-salient object detection method based on spatial-semantic channels
CN106845458A (en) Fast traffic sign detection method based on kernel extreme learning machine
CN102629325B (en) Image feature extraction method and device, and image copy detection method and system
Yu et al. Mean shift based clustering of neutrosophic domain for unsupervised constructions detection
Song et al. Depth-aware saliency detection using discriminative saliency fusion
CN110210561B (en) Neural network training method, target detection method and device, and storage medium
Martins et al. On the completeness of feature-driven maximally stable extremal regions
US10796435B2 (en) Image processing method and image processing apparatus
CN107093183A (en) Gluing path extraction method based on the Sobel edge detection technique

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170822