CN111161291A - Contour detection method based on target depth of field information - Google Patents
- Publication number
- CN111161291A CN111161291A CN201911412629.6A CN201911412629A CN111161291A CN 111161291 A CN111161291 A CN 111161291A CN 201911412629 A CN201911412629 A CN 201911412629A CN 111161291 A CN111161291 A CN 111161291A
- Authority
- CN
- China
- Prior art keywords
- field
- pixel point
- depth
- response value
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Abstract
The invention provides a contour detection method based on target depth of field information, comprising the following steps: A. collecting a gray level image and a depth of field image; B. respectively calculating the gray classical receptive field optimal response value of the gray image and the depth of field classical receptive field optimal response value of the depth of field image; C. respectively calculating the gray contour response value of the gray image and the depth of field contour response value of the depth of field image; D. calculating the final contour response value of each pixel point; E. calculating the final contour value of each pixel point. The method overcomes defects of the prior art and achieves comprehensive computation and a high contour recognition rate.
Description
Technical Field
The invention relates to the field of image processing, in particular to a contour detection method based on target depth of field information.
Background
Object contour information is important for the visual system to perceive and recognize objects, so contour detection is a fundamental problem in many computer vision tasks. The human visual system (HVS) can extract contour features quickly and accurately from complex scenes. Neurophysiological studies show that certain cortical visual cells, namely binocular cells, are sensitive to depth of field information; such cells are called depth- (or disparity-) sensitive cells. Depth of field information lets us obtain a vivid and accurate sense of relative depth in the surrounding world. Human vision is a complex system with a strong capacity for integration: through the visual system it combines shape, color, depth, and other visual information in parallel and in sequence. Incorporating depth of field information into the contour detection process is therefore a promising direction for contour detection research.
Disclosure of Invention
The invention aims to provide a contour detection method based on target depth of field information, which overcomes the defects of the prior art and has the characteristics of comprehensive operation and high contour identification rate.
The technical scheme of the invention is as follows:
a contour detection method based on target depth information comprises the following steps:
A. collecting a gray level image and a depth of field image;
B. respectively calculating the optimal response value of the gray classical receptive field and the optimal response value of the depth classical receptive field of the gray image and the depth image;
C. respectively calculating a gray level contour response value and a depth of field contour response value of the gray level image and the depth of field image;
D. calculating the final contour response value of each pixel point;
E. and calculating the final contour value of each pixel point.
Preferably, the steps are as follows:
A. acquiring an image to be detected, carrying out gray level processing to obtain a gray level image, and acquiring a depth of field image corresponding to the gray level image;
B. presetting a two-dimensional Gaussian first-order partial derivative function containing a plurality of direction parameters;
for each pixel point of the gray level image, filtering the gray level value of the pixel point with the two-dimensional Gaussian first-order partial derivative function to obtain a gray level classical receptive field initial response value of the pixel point in each direction parameter; taking the maximum of the initial response values over all direction parameters as the gray level classical receptive field optimal response value of the pixel point;
for each pixel point of the depth-of-field image, filtering the depth-of-field value of each pixel point by adopting a two-dimensional Gaussian first-order partial derivative function to obtain a depth-of-field classical receptive field initial response value of each pixel point in each direction parameter; respectively taking the maximum value of the depth of field classical receptive field initial response value of each direction parameter of each pixel point as the depth of field classical receptive field optimal response value of the pixel point;
C. presetting a normalized Gaussian difference function and non-classical receptive field antagonistic strength;
for each pixel point of the gray level image, filtering the optimal response value of the gray level classical receptive field by adopting a normalized Gaussian difference function to obtain the response value of the gray level non-classical receptive field of each pixel point; for each pixel point of the gray image, subtracting the product of the gray non-classical receptive field response value and the non-classical receptive field antagonistic strength from the gray classical receptive field optimal response value of the pixel point to obtain the gray contour response value of each pixel point;
for each pixel point of the depth-of-field image, filtering the optimal response value of the depth-of-field classical receptive field by adopting a normalized Gaussian difference function to obtain the response value of the depth-of-field non-classical receptive field of each pixel point; for each pixel point of the depth-of-field image, subtracting the product of the depth-of-field non-classical receptive field response value and the non-classical receptive field antagonistic strength from the depth-of-field classical receptive field optimal response value of the pixel point to obtain the depth-of-field contour response value of each pixel point;
D. presetting a connection coefficient of the gray level image and the depth of field image, and calculating a final contour response value of each pixel point through the gray level contour response value, the depth of field contour response value and the connection coefficient to obtain the final contour response value of each pixel point;
E. and for each pixel point, carrying out non-maximum suppression and double-threshold processing on the final contour response value of each pixel point to obtain the final contour value of each pixel point.
Preferably, the step B specifically comprises:
the two-dimensional Gaussian first-order partial derivative function is
DG(x,y;σ,θi) = ∂g(x̃,ỹ;σ)/∂x̃, with g(x̃,ỹ;σ) = (1/(2πσ²))·exp(−(x̃² + γ²ỹ²)/(2σ²)), x̃ = x·cosθi + y·sinθi, ỹ = −x·sinθi + y·cosθi (1);
wherein σ is the Gaussian function standard deviation and γ is a constant representing the ratio of the long axis to the short axis of the elliptical receptive field;
the direction parameters are θi = (i−1)π/Nθ, i = 1, 2, …, Nθ (2);
wherein Nθ is the number of direction parameters;
the gray classical receptive field initial response value of each pixel point in each direction parameter is as follows:
eIM(x,y;σ,θi)=I(x,y)*DG(x,y;σ,θi) (3);
wherein I(x,y) is the gray value of each pixel point and * denotes the convolution operation;
the optimal response value of the gray classical receptive field of each pixel point is as follows:
EIM(x,y;σ)=max{eIM(x,y;σ,θi)|i=1,2,…Nθ} (4);
the depth of field classical receptive field initial response value of each pixel point in each direction parameter is as follows:
eDE(x,y;σ,θi)=D(x,y)*DG(x,y;σ,θi) (5);
wherein D (x, y) is the depth of field value of each pixel point;
the optimal response value of the depth of field classical receptive field of each pixel point is as follows:
EDE(x,y;σ)=max{eDE(x,y;σ,θi)|i=1,2,…Nθ} (6)。
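As an illustration, the oriented filtering of step B (equations (1)-(6)) can be sketched in Python; the kernel truncation, the parameter values σ = 1.5, γ = 0.5, Nθ = 8, and the use of the absolute response magnitude are assumptions made for the sketch, not values fixed by the invention:

```python
import numpy as np
from scipy.ndimage import convolve

def dg_kernel(sigma, theta, gamma=0.5, size=None):
    """First-order partial derivative of an elliptical 2-D Gaussian
    along direction theta (eq. (1)); gamma is the assumed ratio of
    the receptive field's short axis to its long axis."""
    if size is None:
        size = int(6 * sigma) | 1          # odd width covering ~3 sigma
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return -xr / sigma**2 * g              # d/dx~ of the Gaussian

def optimal_response(img, sigma=1.5, n_theta=8):
    """E(x,y;sigma): maximum over N_theta orientations (eqs. (3)-(6));
    absolute value is used as the response magnitude (an assumption)."""
    thetas = [i * np.pi / n_theta for i in range(n_theta)]
    stack = np.stack([np.abs(convolve(img, dg_kernel(sigma, t))) for t in thetas])
    return stack.max(axis=0)
```

The same `optimal_response` routine applies unchanged to the depth of field image D(x,y), per equations (5)-(6).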
preferably, the step C specifically includes:
the normalized Gaussian difference function is
Wd(x,y;σ) = N(DoG(x,y;σ)) / ||N(DoG(x,y;σ))||1 (7);
wherein DoG(x,y;σ) is a difference of two two-dimensional Gaussian functions (the outer Gaussian having the larger standard deviation), ||·||1 is the L1 norm, and N(x) = max(0, x);
the gray level non-classical receptive field response value of each pixel point is InhIM(x,y;σ);
InhIM(x,y;σ)=EIM(x,y;σ)*Wd(x,y;σ) (8);
Gray contour response value F of each pixel pointIM(x,y)=N(EIM(x,y;σ)-αInhIM(x,y;σ)) (9);
The depth of field non-classical receptive field response value of each pixel point is InhDE(x,y;σ);
InhDE(x,y;σ)=EDE(x,y;σ)*Wd(x,y;σ) (10);
Depth of field contour response value F of each pixel pointDE(x,y)=N(EDE(x,y;σ)-αInhDE(x,y;σ)) (11);
Wherein α is the non-classical receptive field antagonistic strength.
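A minimal sketch of the surround inhibition of step C (equations (7)-(11)); the outer-to-inner Gaussian scale ratio of 4:1 and the kernel truncation are assumptions borrowed from classical non-classical receptive field inhibition models, as the patent text does not legibly specify the DoG parameters:

```python
import numpy as np
from scipy.ndimage import convolve

def dog_weight(sigma, size=None):
    """Normalized difference-of-Gaussians surround weight W_d (eq. (7));
    the 4:1 outer/inner scale ratio is an assumption for this sketch."""
    if size is None:
        size = int(24 * sigma) | 1
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    d2 = x**2 + y**2
    g = lambda s: np.exp(-d2 / (2 * s**2)) / (2 * np.pi * s**2)
    dog = np.maximum(g(4 * sigma) - g(sigma), 0)   # N(x) = max(0, x)
    return dog / dog.sum()                         # L1 normalization

def contour_response(E, sigma=1.5, alpha=1.0):
    """F = N(E - alpha * (E * W_d)), eqs. (8)-(11); alpha is the
    non-classical receptive field antagonistic strength."""
    inh = convolve(E, dog_weight(sigma))           # Inh(x,y;sigma)
    return np.maximum(E - alpha * inh, 0.0)
```

On a uniform (texture-like) response map the surround inhibition cancels the response almost entirely, which is the intended texture-suppression behaviour.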
Preferably, the step D specifically includes:
the final contour response value of each pixel point is R(x,y) = β·FIM(x,y) + (1−β)·FDE(x,y) (12);
β is the connection coefficient between the grayscale image and the depth image.
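Equation (12) is a straightforward linear fusion and can be sketched as follows; the value β = 0.7 is purely illustrative, not a value given by the patent:

```python
import numpy as np

def fuse(F_im, F_de, beta=0.7):
    """Eq. (12): linear combination of the grayscale contour response
    F_im and the depth of field contour response F_de; beta is the
    connection coefficient between the two images."""
    return beta * F_im + (1.0 - beta) * F_de
```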
By combining the depth of field image with the natural image for contour detection, the invention makes the detection model a closer simulation of biological vision; using the depth of field image in the contour computation avoids the situation in which edge pixels are hard to detect when objects are similar in brightness and color, improving the applicability of the contour detection model; and by optimizing the fusion ratio of the natural gray level image and the depth of field image, background texture information can be reduced and detection accuracy improved.
Drawings
Fig. 1 is a block flow diagram of a contour detection method based on depth information of an object according to the present invention;
fig. 2 is a comparison graph of the detection effect of the method of example 1 and the detection effect of the contour detection model of document 1.
Detailed Description
The present invention will be described in detail with reference to the accompanying drawings and examples.
Example 1
As shown in fig. 1-2, the contour detection method based on the depth of field information of the target provided in this embodiment includes the following steps:
A. acquiring an image to be detected, carrying out gray level processing to obtain a gray level image, and acquiring a depth of field image corresponding to the gray level image;
B. presetting a two-dimensional Gaussian first-order partial derivative function containing a plurality of direction parameters;
for each pixel point of the gray level image, filtering the gray level value of the pixel point with the two-dimensional Gaussian first-order partial derivative function to obtain a gray level classical receptive field initial response value of the pixel point in each direction parameter; taking the maximum of the initial response values over all direction parameters as the gray level classical receptive field optimal response value of the pixel point;
for each pixel point of the depth-of-field image, filtering the depth-of-field value of each pixel point by adopting a two-dimensional Gaussian first-order partial derivative function to obtain a depth-of-field classical receptive field initial response value of each pixel point in each direction parameter; respectively taking the maximum value of the depth of field classical receptive field initial response value of each direction parameter of each pixel point as the depth of field classical receptive field optimal response value of the pixel point;
the step B is specifically as follows:
the two-dimensional Gaussian first-order partial derivative function is
DG(x,y;σ,θi) = ∂g(x̃,ỹ;σ)/∂x̃, with g(x̃,ỹ;σ) = (1/(2πσ²))·exp(−(x̃² + γ²ỹ²)/(2σ²)), x̃ = x·cosθi + y·sinθi, ỹ = −x·sinθi + y·cosθi (1);
wherein σ is the Gaussian function standard deviation and γ is a constant representing the ratio of the long axis to the short axis of the elliptical receptive field;
the direction parameters are θi = (i−1)π/Nθ, i = 1, 2, …, Nθ (2);
wherein Nθ is the number of direction parameters;
the gray classical receptive field initial response value of each pixel point in each direction parameter is as follows:
eIM(x,y;σ,θi)=I(x,y)*DG(x,y;σ,θi) (3);
wherein I(x,y) is the gray value of each pixel point and * denotes the convolution operation;
the optimal response value of the gray classical receptive field of each pixel point is as follows:
EIM(x,y;σ)=max{eIM(x,y;σ,θi)|i=1,2,…Nθ} (4);
the depth of field classical receptive field initial response value of each pixel point in each direction parameter is as follows:
eDE(x,y;σ,θi)=D(x,y)*DG(x,y;σ,θi) (5);
wherein D (x, y) is the depth of field value of each pixel point;
the optimal response value of the depth of field classical receptive field of each pixel point is as follows:
EDE(x,y;σ)=max{eDE(x,y;σ,θi)|i=1,2,…Nθ} (6);
C. presetting a normalized Gaussian difference function and non-classical receptive field antagonistic strength;
for each pixel point of the gray level image, filtering the optimal response value of the gray level classical receptive field by adopting a normalized Gaussian difference function to obtain the response value of the gray level non-classical receptive field of each pixel point; for each pixel point of the gray image, subtracting the product of the gray non-classical receptive field response value and the non-classical receptive field antagonistic strength from the gray classical receptive field optimal response value of the pixel point to obtain the gray contour response value of each pixel point;
for each pixel point of the depth-of-field image, filtering the optimal response value of the depth-of-field classical receptive field by adopting a normalized Gaussian difference function to obtain the response value of the depth-of-field non-classical receptive field of each pixel point; for each pixel point of the depth-of-field image, subtracting the product of the depth-of-field non-classical receptive field response value and the non-classical receptive field antagonistic strength from the depth-of-field classical receptive field optimal response value of the pixel point to obtain the depth-of-field contour response value of each pixel point;
the step C is specifically as follows:
the normalized Gaussian difference function is
Wd(x,y;σ) = N(DoG(x,y;σ)) / ||N(DoG(x,y;σ))||1 (7);
wherein DoG(x,y;σ) is a difference of two two-dimensional Gaussian functions (the outer Gaussian having the larger standard deviation), ||·||1 is the L1 norm, and N(x) = max(0, x);
the gray level non-classical receptive field response value of each pixel point is InhIM(x,y;σ);
InhIM(x,y;σ)=EIM(x,y;σ)*Wd(x,y;σ) (8);
Gray contour response value F of each pixel pointIM(x,y)=N(EIM(x,y;σ)-αInhIM(x,y;σ)) (9);
The depth of field non-classical receptive field response value of each pixel point is InhDE(x,y;σ);
InhDE(x,y;σ)=EDE(x,y;σ)*Wd(x,y;σ) (10);
Depth of field contour response value F of each pixel pointDE(x,y)=N(EDE(x,y;σ)-αInhDE(x,y;σ)) (11);
Wherein α is the non-classical receptive field antagonistic strength;
D. presetting a connection coefficient of the gray level image and the depth of field image, and calculating a final contour response value of each pixel point through the gray level contour response value, the depth of field contour response value and the connection coefficient to obtain the final contour response value of each pixel point;
the step D is specifically as follows:
the final contour response value of each pixel point is R(x,y) = β·FIM(x,y) + (1−β)·FDE(x,y) (12);
β is the connection coefficient of the gray scale image and the depth image;
E. and for each pixel point, carrying out non-maximum suppression and double-threshold processing on the final contour response value of each pixel point to obtain the final contour value of each pixel point.
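Step E can be sketched in Python as follows; the coarse two-axis non-maximum suppression and the quantile-based choice of the high threshold th are simplifying assumptions for the sketch (the embodiment adopts the suppression and double-threshold processing of document 1, whose exact scheme is not reproduced in this text):

```python
import numpy as np
from scipy.ndimage import label

def nms_and_hysteresis(R, p=0.1):
    """Thin the final contour response R by non-maximum suppression,
    then link contours with double thresholds t_h and t_l = 0.5 t_h,
    where t_h is taken as the (1 - p) quantile of the thinned response."""
    H, W = R.shape
    thin = np.zeros_like(R)
    gy, gx = np.gradient(R)
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            # compare along the dominant gradient axis (coarse 2-axis NMS)
            if abs(gx[y, x]) >= abs(gy[y, x]):
                neighbors = (R[y, x - 1], R[y, x + 1])
            else:
                neighbors = (R[y - 1, x], R[y + 1, x])
            if R[y, x] >= max(neighbors):
                thin[y, x] = R[y, x]
    t_h = np.quantile(thin[thin > 0], 1 - p) if (thin > 0).any() else 0.0
    t_l = 0.5 * t_h
    strong, weak = thin >= t_h, thin >= t_l
    lab, _ = label(weak)                  # connected components of weak edges
    keep = np.isin(lab, np.unique(lab[strong & (lab > 0)]))
    return (keep & weak).astype(np.uint8)  # final binary contour map
```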
The effectiveness of the contour detection method of this embodiment is compared below with the contour detection model of document 1, where document 1 is:
document 1: Yang K F, Li C Y, Li Y J. Multifeature-based surround inhibition improves contour detection in natural images [J]. IEEE Transactions on Image Processing, 2014, 23(12): 5020-5032;
to ensure a fair comparison, this embodiment uses the same non-maximum suppression and double-threshold processing as document 1 for the final contour integration, wherein the two thresholds th and tl satisfy tl = 0.5·th, with th calculated from a threshold quantile p;
wherein the performance evaluation index F adopts the criterion given in document 1: F = 2PR/(P + R), where P represents precision and R represents recall; the value of F lies in [0, 1], and the closer it is to 1, the better the contour detection; in addition, a tolerance is defined: any contour pixel detected within a 5 × 5 neighbourhood of a ground-truth contour pixel is counted as a correct detection.
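Assuming the evaluation index takes the standard F-measure form (the harmonic mean of P and R, consistent with F lying in [0, 1]), it can be computed as:

```python
def f_measure(precision, recall):
    """F = 2PR / (P + R): harmonic mean of precision P and recall R.
    The exact criterion of document 1 is assumed to be this standard
    F-measure; returns 0.0 when both inputs are zero."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```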
Four natural images are randomly selected from the NYUD dataset together with their corresponding depth of field images, and detection is carried out with the scheme of embodiment 1 and the scheme of document 1 respectively; the corresponding ground-truth contours and the optimal contours detected by each method are shown in FIG. 2; the number at the upper right corner of each optimal contour map is the value of the performance evaluation index F, and table 1 lists the parameter values selected for embodiment 1 and comparison document 1;
table 1 example 1 parameter set table
As can be seen from fig. 2, the contour detection result of the embodiment 1 is superior to that of the document 1.
Claims (5)
1. A contour detection method based on target depth information is characterized by comprising the following steps:
A. collecting a gray level image and a depth of field image;
B. respectively calculating the optimal response value of the gray classical receptive field and the optimal response value of the depth classical receptive field of the gray image and the depth image;
C. respectively calculating a gray level contour response value and a depth of field contour response value of the gray level image and the depth of field image;
D. calculating the final contour response value of each pixel point;
E. and calculating the final contour value of each pixel point.
2. The contour detection method based on the depth of field information of the object as claimed in claim 1, wherein:
the method comprises the following steps:
A. acquiring an image to be detected, carrying out gray level processing to obtain a gray level image, and acquiring a depth of field image corresponding to the gray level image;
B. presetting a two-dimensional Gaussian first-order partial derivative function containing a plurality of direction parameters;
for each pixel point of the gray level image, filtering the gray level value of the pixel point with the two-dimensional Gaussian first-order partial derivative function to obtain a gray level classical receptive field initial response value of the pixel point in each direction parameter; taking the maximum of the initial response values over all direction parameters as the gray level classical receptive field optimal response value of the pixel point;
for each pixel point of the depth-of-field image, filtering the depth-of-field value of each pixel point by adopting a two-dimensional Gaussian first-order partial derivative function to obtain a depth-of-field classical receptive field initial response value of each pixel point in each direction parameter; respectively taking the maximum value of the depth of field classical receptive field initial response value of each direction parameter of each pixel point as the depth of field classical receptive field optimal response value of the pixel point;
C. presetting a normalized Gaussian difference function and non-classical receptive field antagonistic strength;
for each pixel point of the gray level image, filtering the optimal response value of the gray level classical receptive field by adopting a normalized Gaussian difference function to obtain the response value of the gray level non-classical receptive field of each pixel point; for each pixel point of the gray image, subtracting the product of the gray non-classical receptive field response value and the non-classical receptive field antagonistic strength from the gray classical receptive field optimal response value of the pixel point to obtain the gray contour response value of each pixel point;
for each pixel point of the depth-of-field image, filtering the optimal response value of the depth-of-field classical receptive field by adopting a normalized Gaussian difference function to obtain the response value of the depth-of-field non-classical receptive field of each pixel point; for each pixel point of the depth-of-field image, subtracting the product of the depth-of-field non-classical receptive field response value and the non-classical receptive field antagonistic strength from the depth-of-field classical receptive field optimal response value of the pixel point to obtain the depth-of-field contour response value of each pixel point;
D. presetting a connection coefficient of the gray level image and the depth of field image, and calculating a final contour response value of each pixel point through the gray level contour response value, the depth of field contour response value and the connection coefficient to obtain the final contour response value of each pixel point;
E. and for each pixel point, carrying out non-maximum suppression and double-threshold processing on the final contour response value of each pixel point to obtain the final contour value of each pixel point.
3. The contour detection method based on the depth of field information of the object as claimed in claim 2, wherein:
the step B is specifically as follows:
the two-dimensional Gaussian first-order partial derivative function is
DG(x,y;σ,θi) = ∂g(x̃,ỹ;σ)/∂x̃, with g(x̃,ỹ;σ) = (1/(2πσ²))·exp(−(x̃² + γ²ỹ²)/(2σ²)), x̃ = x·cosθi + y·sinθi, ỹ = −x·sinθi + y·cosθi (1);
wherein σ is the Gaussian function standard deviation and γ is a constant representing the ratio of the long axis to the short axis of the elliptical receptive field;
the direction parameters are θi = (i−1)π/Nθ, i = 1, 2, …, Nθ (2);
wherein Nθ is the number of direction parameters;
the gray classical receptive field initial response value of each pixel point in each direction parameter is as follows:
eIM(x,y;σ,θi)=I(x,y)*DG(x,y;σ,θi) (3);
wherein I(x,y) is the gray value of each pixel point and * denotes the convolution operation;
the optimal response value of the gray classical receptive field of each pixel point is as follows:
EIM(x,y;σ)=max{eIM(x,y;σ,θi)|i=1,2,…Nθ} (4);
the depth of field classical receptive field initial response value of each pixel point in each direction parameter is as follows:
eDE(x,y;σ,θi)=D(x,y)*DG(x,y;σ,θi) (5);
wherein D (x, y) is the depth of field value of each pixel point;
the optimal response value of the depth of field classical receptive field of each pixel point is as follows:
EDE(x,y;σ)=max{eDE(x,y;σ,θi)|i=1,2,…Nθ} (6)。
4. the contour detection method based on the depth of field information of the object as claimed in claim 3, wherein:
the step C is specifically as follows:
the normalized Gaussian difference function is
Wd(x,y;σ) = N(DoG(x,y;σ)) / ||N(DoG(x,y;σ))||1 (7);
wherein DoG(x,y;σ) is a difference of two two-dimensional Gaussian functions (the outer Gaussian having the larger standard deviation), ||·||1 is the L1 norm, and N(x) = max(0, x);
the gray level non-classical receptive field response value of each pixel point is InhIM(x,y;σ);
InhIM(x,y;σ)=EIM(x,y;σ)*Wd(x,y;σ) (8);
Gray contour response value F of each pixel pointIM(x,y)=N(EIM(x,y;σ)-αInhIM(x,y;σ)) (9);
The depth of field non-classical receptive field response value of each pixel point is InhDE(x,y;σ);
InhDE(x,y;σ)=EDE(x,y;σ)*Wd(x,y;σ) (10);
Depth of field contour response value F of each pixel pointDE(x,y)=N(EDE(x,y;σ)-αInhDE(x,y;σ)) (11);
Wherein α is the non-classical receptive field antagonistic strength.
5. The contour detection method based on the depth of field information of the object as claimed in claim 4, wherein:
the step D is specifically as follows:
the final contour response value of each pixel point is R(x,y) = β·FIM(x,y) + (1−β)·FDE(x,y) (12);
β is the connection coefficient between the grayscale image and the depth image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911412629.6A CN111161291A (en) | 2019-12-31 | 2019-12-31 | Contour detection method based on target depth of field information |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111161291A true CN111161291A (en) | 2020-05-15 |
Family
ID=70559965
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911412629.6A Pending CN111161291A (en) | 2019-12-31 | 2019-12-31 | Contour detection method based on target depth of field information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111161291A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111476810A (en) * | 2020-06-28 | 2020-07-31 | 北京美摄网络科技有限公司 | Image edge detection method and device, electronic equipment and storage medium |
CN113344997A (en) * | 2021-06-11 | 2021-09-03 | 山西方天圣华数字科技有限公司 | Method and system for rapidly acquiring high-definition foreground image only containing target object |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102288613A (en) * | 2011-05-11 | 2011-12-21 | 北京科技大学 | Surface defect detecting method for fusing grey and depth information |
JP2012151776A (en) * | 2011-01-21 | 2012-08-09 | Hitachi Consumer Electronics Co Ltd | Video processing apparatus and video display device using the same |
CN106327464A (en) * | 2015-06-18 | 2017-01-11 | 南京理工大学 | Edge detection method |
CN107067407A (en) * | 2017-04-11 | 2017-08-18 | 广西科技大学 | Profile testing method based on non-classical receptive field and linear non-linear modulation |
CN107578418A (en) * | 2017-09-08 | 2018-01-12 | 华中科技大学 | A kind of indoor scene profile testing method of confluent colours and depth information |
CN108010046A (en) * | 2017-12-14 | 2018-05-08 | 广西科技大学 | Based on the bionical profile testing method for improving classical receptive field |
CN108764186A (en) * | 2018-06-01 | 2018-11-06 | 合肥工业大学 | Personage based on rotation deep learning blocks profile testing method |
CN109146901A (en) * | 2018-08-03 | 2019-01-04 | 广西科技大学 | Profile testing method based on color antagonism receptive field |
CN109949324A (en) * | 2019-02-01 | 2019-06-28 | 广西科技大学 | Profile testing method based on the non-linear subunit response of non-classical receptive field |
Non-Patent Citations (4)
Title |
---|
COSMIN GRIGORESCU et al.: "Contour detection based on nonclassical receptive field inhibition", IEEE Transactions on Image Processing *
HAOSONG YUE et al.: "Combining color and depth data for edge detection", 2013 IEEE International Conference on Robotics and Biomimetics (ROBIO) *
JIE XIAO, CHAO CAI: "Contour detection combined with depth information", MIPPR 2015: Pattern Recognition and Computer Vision *
KAI-FU YANG et al.: "Multifeature-based surround inhibition improves contour detection in natural images", IEEE Transactions on Image Processing *
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20200515 |