CN103530878A - Edge extraction method based on fusion strategy - Google Patents
- Publication number: CN103530878A (application CN201310475874.8A)
- Authority: CN (China)
- Legal status: Granted
- Classification: Image Analysis
Abstract
The invention discloses an edge extraction method based on a fusion strategy, belonging to the field of computer applications such as image processing, pattern recognition, and vision computing. The method comprises the following steps: input a grayscale image; integrate the extraction results of three typical edge extraction algorithms to obtain a voting weight reflecting how likely each pixel is to belong to an edge; analyze the difference between the maximum and minimum luminance differences of each pixel and its neighborhood to obtain a difference weight describing the degree of luminance mutation; compute the deleted-neighborhood variance distribution and, based on the characteristic that an edge point has comparatively high luminance dispersion relative to the image-wide average over 4-neighborhoods, obtain an edge distribution weight for every pixel; fuse the three weight matrices to make the edge decision; and output the edge image. The disclosed method improves the accuracy of edge extraction, reasonably reduces the influence of noise, and provides preparatory information for subsequent processing such as further image analysis and feature point localization.
Description
Technical field
The present invention relates to the technical fields of digital image processing and computer vision, and specifically to an image processing method for edge extraction based on a fusion strategy.
Background art
An edge is a place in an image where information such as gray level or structure changes abruptly to varying degrees; it is the end of one region and the beginning of another. Edges are an important basis for image segmentation and for image analyses such as texture feature extraction and shape feature extraction, and also for research fields such as computer vision and pattern recognition.
In recent years, many edge extraction methods have been proposed. Classical methods use the gradient information of the image, extracting edges by first-order filtering, second-order filtering, or zero-crossing detection; examples include the Sobel, Prewitt, Roberts, LoG, and Canny operators. These operators have long held a dominant position in image edge extraction, but because images differ, different operators yield somewhat different edge detection results on different images. There are also methods that apply statistics and machine learning to edge extraction, such as fuzzy edge detection, logistic-regression-based detection, edge detection based on Markov random fields, and multi-scale morphology. Still, no single method's edge detection results suit all images well, so a more robust edge extraction method needs to be proposed.
Summary of the invention
The object of the present invention is to provide an edge extraction method based on a fusion strategy that overcomes the shortcomings of existing single edge extraction techniques: insufficient robustness, incomplete extracted edge information, susceptibility to noise interference, and unsatisfactory accuracy.
The present invention is realized as follows: based on a data continuity criterion for grayscale images, edge extraction is performed on the input grayscale image. The proposed edge extraction method based on a fusion strategy is characterized by comprising the following steps:
Step 1: input a grayscale image.
Step 2: compute the voting weight matrix, which measures how likely each pixel in the image is to be an edge:
Here, "at the corresponding pixel" means statistics over the eight-neighborhood of the pixel under examination, tallied separately for each of the three operator detection results. The pixel under examination, i.e. the center of the eight-neighborhood, is given a weight of 1; its four 4-neighborhood pixels are given weights of 0.5; the remaining four corner pixels of the eight-neighborhood are given weights of 0.25. A neighborhood position that is an edge contributes the value 1, and the decision is made by weighting. If the resulting weight sum is greater than or equal to 2, the pixel detected by that operator (the center of the eight-neighborhood) is considered an edge point and the weight sum is reassigned to 1; if the weight sum is less than 2, the pixel is not considered an edge point and the weight sum is reassigned to 0. Tallying the weight sums of every pixel in turn yields a weighted voting matrix with value range [0, 3].
Step 3: compute the difference weight matrix, which measures how likely each pixel in the image is to be an edge.
Step 31: input the grayscale image;
The logistic regression model here is:
Y(x) = 1 / (1 + exp(a*x + b))
where x is the maximum or minimum luminance difference of a pixel, Y(x) is the corresponding normalized maximum or minimum luminance difference, and a and b are luminance-difference parameters: for each pixel's maximum luminance difference, a = -0.1 and b = 2; for each pixel's minimum luminance difference, a = -0.3 and b = 5.
Step 4: compute the edge distribution weight matrix, which measures how likely each pixel in the image is to be an edge.
Step 5: fuse the three weight matrices obtained above.
Here, the sum of all elements of the fused weight matrix is divided by the sum of the elements of the edge distribution weight matrix and multiplied by the coefficient 0.7 to obtain a reference value, which serves as the threshold.
Step 6: output the edge image.
The beneficial effects of the invention are: superior edge extraction, little loss of edge information, high accuracy, and a degree of noise tolerance. The invention applies the idea of a fusion strategy, making full use of the proven effectiveness of classical edge extraction algorithms and of the intuitive description given by the local structural characteristics of image edges. It can adapt to the different local edges of different images to the greatest extent, approach the best extraction result among the different edge algorithms, and improve the final edge extraction result and its accuracy.
Brief description of the drawings
Fig. 1 compares the experimental results of the present invention with three classical edge detection operators: Sobel, Canny, and LoG.
Fig. 2 is a flowchart of the proposed edge extraction method based on a fusion strategy.
Fig. 3 is a flowchart of the voting weight module of the present invention.
Fig. 4 is a flowchart of the difference weight module of the present invention.
Fig. 5 is a flowchart of the edge distribution weight module of the present invention.
Fig. 6 is a flowchart of the weight fusion decision module of the present invention.
Specific embodiments
To make the object, technical solution, and advantages of the present invention clearer, the invention is described in more detail below with reference to specific embodiments and the accompanying drawings. As shown in Fig. 1, Figs. 1-1 and 2-1 are two different test images; Figs. 1-2 and 2-2 show the edge detection results of the present method; Figs. 1-3 and 2-3 show the Sobel operator results; Figs. 1-4 and 2-4 the Canny operator results; and Figs. 1-5 and 2-5 the LoG operator results. The comparison shows that the edges extracted by the present method are relatively complete with little redundant information; edge information is effectively highlighted and the edge detection is effective.
Fig. 2 is a flowchart of the proposed edge extraction method based on a fusion strategy; the method specifically comprises the following steps:
Step 1: input a grayscale image.
The grayscale image is a single-channel image in which each pixel value ranges from 0 to 255. It may be converted from a color image, or any single channel of a color image may be used directly.
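The conversion described above can be sketched as follows. This is a minimal NumPy sketch: the function name `to_gray` and the BT.601 luma weights are illustrative choices, not mandated by the patent, which only requires a single-channel image in [0, 255].

```python
import numpy as np

def to_gray(rgb):
    """Reduce an H x W x 3 colour image to one channel in [0, 255].

    The patent only requires a single-channel input; taking any one
    channel of the colour image (e.g. rgb[..., 0]) would equally
    satisfy step 1. The BT.601 luma weights used here are one common
    choice, assumed for illustration.
    """
    rgb = np.asarray(rgb, dtype=float)
    luma = rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114
    return np.clip(np.rint(luma), 0, 255).astype(np.uint8)
```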
Step 2: compute the voting weight matrix, which measures how likely each pixel in the image is to be an edge.
The voting weight module 102 processes the image that was read in and finally obtains the voting weight matrix. Its data source is the single-channel image data obtained in step 1. Three typical edge extraction methods each perform edge extraction; a voting algorithm then casts weighted votes over the three resulting edge images, tallying at each pixel and its eight neighborhood pixels, which yields the required voting weight matrix.
Step 3: compute the difference weight matrix, which measures how likely each pixel in the image is to be an edge.
The difference weight module 103 processes the image that was read in and obtains the difference weight matrix. Its data source is the single-channel image data obtained in step 1. The luminance difference between each pixel and each of its four 4-neighborhood pixels is computed, and the maximum and minimum of the four differences are found for each pixel. The maximum and minimum differences are then substituted into their respective logistic regression models to obtain normalized values, and the two normalized values are subtracted. The matrix formed from each pixel's normalized maximum-minus-minimum neighborhood luminance difference is the required difference weight matrix.
Step 4: compute the edge distribution weight matrix, which measures how likely each pixel in the image is to be an edge.
The edge distribution weight module 104 processes the image that was read in and finally obtains the edge distribution weight matrix. Its data source is the single-channel image data obtained in step 1. For each pixel, the variance of the four 4-neighborhood pixels with the center removed is computed; then the mean of all pixels' variances is computed and multiplied by the coefficient 0.8 to serve as a threshold for binarizing the deleted-neighborhood variance matrix. In detail: the deleted-neighborhood variance of each pixel is compared with the threshold; if the variance at a position is greater than or equal to the reference value, that pixel is assigned the value 1; if it is less, that pixel is assigned the value 0. The resulting matrix of 0s and 1s is the binary edge distribution weight matrix.
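The deleted-neighborhood variance computation above can be sketched in NumPy. The function name and the edge-replication border handling are assumptions; the patent does not specify how borders are treated.

```python
import numpy as np

def edge_distribution_weight(gray):
    """Binary edge distribution weight matrix (module 104, sketch).

    For every pixel, take the variance of its four 4-neighbourhood
    pixels (centre removed), then binarise against 0.8x the mean of
    all those variances. Borders are handled by edge replication,
    an assumption not stated in the patent.
    """
    p = np.pad(np.asarray(gray, dtype=float), 1, mode="edge")
    # Stack the up / down / left / right neighbours of every pixel.
    nb = np.stack([p[:-2, 1:-1], p[2:, 1:-1], p[1:-1, :-2], p[1:-1, 2:]])
    var = nb.var(axis=0)          # deleted-neighbourhood variance
    threshold = 0.8 * var.mean()  # coefficient 0.8 from the patent
    return (var >= threshold).astype(float)
```

On a vertical step edge, only the two columns adjacent to the step have non-zero neighbourhood variance, so only they survive the thresholding.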
Step 5: fuse the three weight matrices obtained above.
The weight fusion decision module 105 fuses the three weight matrices and finally produces the fused matrix. The module applies the Hadamard product, multiplying the voting weight matrix, difference weight matrix, and edge distribution weight matrix element by element to obtain the fused matrix. The sums of all elements of the fused matrix and of the edge distribution weight matrix are each computed; the former is divided by the latter and multiplied by the coefficient 0.7 to obtain a reference value, which serves as the decision threshold for edge points. A point-by-point comparison then binarizes the fused matrix.
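The fusion decision just described can be sketched as follows. The function name is illustrative, and the guard against an all-zero edge distribution matrix is an added assumption not discussed in the patent.

```python
import numpy as np

def fuse_and_decide(vote_w, diff_w, dist_w):
    """Fusion decision (module 105, sketch).

    Hadamard (element-wise) product of the three weight matrices,
    then binarisation against 0.7 * sum(fused) / sum(dist_w).
    """
    fused = vote_w * diff_w * dist_w       # Hadamard product
    denom = dist_w.sum()
    if denom == 0:                         # guard: no candidate edge points
        return np.zeros_like(fused, dtype=np.uint8)
    threshold = 0.7 * fused.sum() / denom  # coefficient 0.7 from the patent
    return (fused >= threshold).astype(np.uint8)
```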
Step 6: output the edge image.
The edge image output module 106 displays the binarized fused matrix on a display device as a binary image, which is the final edge extraction result.
The main modules involved in the above steps are described in detail below.
1. Voting weight module 102
To make the extracted edges robust, the extraction results of existing typical edge extraction methods are tallied by voting, and the weight of each pixel being an edge point is computed from the vote count. To guarantee a certain error-correcting capability against the deviations of the representative edge extraction algorithms, the weight computation proposed here tallies not only each algorithm's edge judgment at the point itself but also superimposes the weighted edge judgments of its neighborhood points, with weights assigned according to distance from the pixel under examination. This fusion yields an edge-judgment voting weight matrix that integrates the representative extraction results and serves as the basis for the subsequent fusion decision. As shown in Fig. 3, the process of obtaining the voting weight matrix comprises the following steps:
Here, "at the corresponding pixel" means statistics over the eight-neighborhood of the pixel under examination, tallied separately for each of the three operator detection results. The pixel under examination, i.e. the center of the eight-neighborhood, is given a weight of 1; its four 4-neighborhood pixels are given weights of 0.5; the remaining four corner pixels of the eight-neighborhood are given weights of 0.25. A neighborhood position that is an edge contributes the value 1, and the decision is made by weighting. If the resulting weight sum is greater than or equal to 2, the pixel detected by that operator (the center of the eight-neighborhood) is considered an edge point and the weight sum is reassigned to 1; if the weight sum is less than 2, the pixel is not considered an edge point and the weight sum is reassigned to 0. Tallying the weight sums of every pixel in turn yields a weighted voting matrix with value range [0, 3]. An example: suppose A is a local patch of the original image; detecting it with the Sobel, Canny, and LoG operators yields the matrices B_Sobel, B_Canny, and B_LoG respectively, as follows:
Now consider the judgment for pixel A(3,3). For the Sobel result, the weight sum = (0*0.25*4) + (0*0.5*2 + 1*0.5*2) + 1*1 = 2. Since this sum is greater than or equal to 2, the assigned value is 1, so A(3,3) is an edge point in the Sobel result.
Likewise, the Canny weight sum = (0*0.25*3 + 1*0.25*1) + (0*0.5*2 + 1*0.5*2) + 0*1 = 1.25, which is less than 2, so the assigned value is 0 and A(3,3) is not an edge point in the Canny result.
Similarly, in the LoG result the weight sum is 1.75, which is less than 2, so the assigned value is 0 and A(3,3) is not an edge point in the LoG result.
Finally, adding the three assigned values gives 1 + 0 + 0 = 1, which is the final voting weight.
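The per-operator vote and the final [0, 3] voting weight can be sketched as follows, assuming the three operators' binary edge maps are already available. Zero padding at the image borders and the function names are assumptions the patent leaves open; the raising of zero votes to 0.5 follows the note below on avoiding masked judgments.

```python
import numpy as np

def operator_vote(edge_map):
    """Per-operator weighted vote over each pixel's 8-neighbourhood (sketch).

    Weights: centre 1, 4-neighbours 0.5 each, corners 0.25 each;
    a weight sum >= 2 marks the centre as an edge point (value 1).
    """
    e = np.pad(np.asarray(edge_map, dtype=float), 1)  # zero padding assumed
    centre = e[1:-1, 1:-1]
    n4 = e[:-2, 1:-1] + e[2:, 1:-1] + e[1:-1, :-2] + e[1:-1, 2:]
    corners = e[:-2, :-2] + e[:-2, 2:] + e[2:, :-2] + e[2:, 2:]
    weight_sum = 1.0 * centre + 0.5 * n4 + 0.25 * corners
    return (weight_sum >= 2).astype(float)

def voting_weight_matrix(edge_maps):
    """Sum the per-operator votes (value range [0, 3]); zeros are raised
    to 0.5 so the later Hadamard fusion cannot mask other evidence."""
    w = sum(operator_vote(m) for m in edge_maps)
    w[w == 0] = 0.5
    return w
```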
Here, elements of the voting weight matrix whose value is 0 are reassigned 0.5. This prevents the subsequent fusion process from completely masking the edge judgments based on local structure, which would impair the multi-angle edge decision and lose possible edge points.
2. Difference weight module 103
Based on the structural relationship between an image edge point and its neighborhood pixels, the invention uses the difference between the maximum and minimum luminance changes from a point to its neighbors to express the weight of that point being an edge. Since this representation is susceptible to noise, the invention computes the difference weight matrix from normalized luminance differences, keeping the weight values in the interval [0, 1]. This effectively balances the matrix's effect in the subsequent weight fusion: on the one hand, it retains the validity of judging edges by the luminance jumps that the differences describe; on the other hand, it avoids amplifying the influence of noise through overly large weights. As shown in Fig. 4, obtaining the difference weight matrix further comprises the following steps:
Step 31: input the grayscale image.
The maximum and minimum luminance differences are treated with different parameters, determined experimentally. The normalized maximum and minimum luminance differences of each pixel are obtained with the regression model described below.
The logistic regression model here is:
Y(x) = 1 / (1 + exp(a*x + b))
where x is the maximum or minimum luminance difference of a pixel, Y(x) is the corresponding normalized maximum or minimum luminance difference, and a and b are luminance-difference parameters: for each pixel's maximum luminance difference, a = -0.1 and b = 2; for each pixel's minimum luminance difference, a = -0.3 and b = 5.
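The difference-weight computation just described can be sketched as follows. Absolute 4-neighbour differences and edge-replicated borders are assumptions the patent leaves open; the logistic parameters are the ones stated above.

```python
import numpy as np

def difference_weight(gray):
    """Difference weight matrix (module 103, sketch).

    For each pixel: max and min of the four 4-neighbour luminance
    differences, each normalised by its own logistic model
    Y(x) = 1 / (1 + exp(a*x + b)), then subtracted.
    """
    g = np.asarray(gray, dtype=float)
    p = np.pad(g, 1, mode="edge")  # border handling assumed
    diffs = np.stack([np.abs(g - p[:-2, 1:-1]), np.abs(g - p[2:, 1:-1]),
                      np.abs(g - p[1:-1, :-2]), np.abs(g - p[1:-1, 2:])])
    d_max, d_min = diffs.max(axis=0), diffs.min(axis=0)

    def logistic(x, a, b):
        return 1.0 / (1.0 + np.exp(a * x + b))

    # Parameters from the patent: (a, b) = (-0.1, 2) for the maximum
    # difference and (-0.3, 5) for the minimum difference.
    return logistic(d_max, -0.1, 2.0) - logistic(d_min, -0.3, 5.0)
```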
3. Edge distribution weight module 104
As shown in Fig. 5, obtaining the edge distribution weight matrix comprises the following steps:
4. Weight fusion decision module 105
The fusion decision makes maximal use of edge description information from different angles, obtains more complete edge detection results, and improves the correctness of edge extraction. As shown in Fig. 6, the weight-matrix fusion decision comprises the following steps:
Here, the sum of all elements of the fused weight matrix is divided by the sum of the elements of the edge distribution weight matrix; since each element of the edge distribution weight matrix indicates, via the neighborhood variance analysis, whether a pixel is a candidate edge point, this sum counts the candidate edge points. The quotient is then multiplied by the coefficient 0.7 to obtain the reference value used as the threshold.
The specific embodiments described above further explain the object, technical solution, and beneficial effects of the present invention. It should be understood that the foregoing are only specific embodiments and do not limit the invention; any modification, equivalent replacement, or improvement made within the spirit and principles of the invention shall fall within its scope of protection.
Claims (1)
1. An edge extraction method based on a fusion strategy, characterized by comprising the following steps:
Step 1: input a grayscale image.
Step 2: compute the voting weight matrix, which measures how likely each pixel in the image is to be an edge:
Step 21: input the grayscale image;
Step 22: perform edge detection with the Sobel operator;
Step 23: perform edge detection with the Canny operator;
Step 24: perform edge detection with the LoG operator;
Step 25: tally the three operators' detection results with weighted voting at each pixel.
Here, "at each pixel" means statistics over the eight-neighborhood of the pixel under examination, tallied separately for each of the three operator detection results. The pixel under examination, i.e. the center of the eight-neighborhood, is given a weight of 1; its four 4-neighborhood pixels are given weights of 0.5; the remaining four corner pixels of the eight-neighborhood are given weights of 0.25. A neighborhood position that is an edge contributes the value 1, and the decision is made by weighting. If the resulting weight sum is greater than or equal to 2, the center pixel is considered an edge point for that operator and the sum is reassigned to 1; if the sum is less than 2, it is not an edge point and the sum is reassigned to 0. Tallying each pixel in turn yields a weighted voting matrix with value range [0, 3].
Step 3: compute the difference weight matrix, which measures how likely each pixel in the image is to be an edge:
Step 31: input the grayscale image;
Step 32: compute the luminance differences between each pixel and its four 4-neighborhood points;
Step 33: find the maximum and minimum of each pixel's four differences;
Step 34: normalize each pixel's maximum and minimum difference by substituting them into the logistic regression models.
The logistic regression model here is:
Y(x) = 1 / (1 + exp(a*x + b))
where x is the maximum or minimum luminance difference of a pixel, Y(x) is the corresponding normalized maximum or minimum luminance difference, and a and b are luminance-difference parameters: for each pixel's maximum luminance difference, a = -0.1 and b = 2; for each pixel's minimum luminance difference, a = -0.3 and b = 5.
Step 4: compute the edge distribution weight matrix, which measures how likely each pixel in the image is to be an edge:
Step 41: input the grayscale image;
Step 42: compute the luminance variance of the four 4-neighborhood pixels of each pixel, with the center removed;
Step 43: average the variances of all pixels and multiply by the coefficient 0.8 to obtain a threshold;
Step 44: build a zero-initialized matrix of the image size as the initial edge distribution weight matrix;
Step 45: traverse all pixels of the image; if a pixel's deleted-neighborhood variance is greater than or equal to the threshold, assign 1 to the corresponding position of the edge distribution weight matrix; otherwise keep the initial value 0. This establishes the 0-1 weight matrix describing the edge distribution.
Step 5: fuse the three weight matrices obtained above:
Step 51: read the voting weight matrix data;
Step 52: read the difference weight matrix data;
Step 53: read the edge distribution weight matrix data;
Step 54: fuse the three weight matrices with the Hadamard product to obtain the fused weight matrix;
Step 55: compute the reference value, i.e. the threshold.
Here, the sum of all elements of the fused weight matrix is divided by the sum of the elements of the edge distribution weight matrix and multiplied by the coefficient 0.7 to obtain the reference value used as the threshold.
Step 56: binarize the fused weight matrix: if a value of the fused matrix is greater than or equal to the reference value, assign 1 to the corresponding position; if it is less than the reference value, assign 0;
Step 57: output the fused matrix.
Step 6: output the edge image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310475874.8A CN103530878B (en) | 2013-10-12 | 2013-10-12 | A kind of edge extracting method based on convergence strategy |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103530878A true CN103530878A (en) | 2014-01-22 |
CN103530878B CN103530878B (en) | 2016-01-13 |
Family
ID=49932857
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310475874.8A Expired - Fee Related CN103530878B (en) | 2013-10-12 | 2013-10-12 | A kind of edge extracting method based on convergence strategy |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103530878B (en) |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105139384A (en) * | 2015-08-11 | 2015-12-09 | 北京天诚盛业科技有限公司 | Defective capsule detection method and apparatus |
CN106504263A (en) * | 2016-11-04 | 2017-03-15 | 辽宁工程技术大学 | A kind of quick continuous boundary extracting method of image |
CN107909555A (en) * | 2017-11-27 | 2018-04-13 | 北京大恒图像视觉有限公司 | A kind of gridding noise elimination method for keeping acutance |
CN108513044A (en) * | 2018-04-16 | 2018-09-07 | 深圳市华星光电技术有限公司 | Picture smooth treatment method, electronic device and computer readable storage medium |
CN108734158A (en) * | 2017-04-14 | 2018-11-02 | 成都唐源电气股份有限公司 | A kind of real-time train number identification method and device |
CN108830870A (en) * | 2018-05-21 | 2018-11-16 | 千寻位置网络有限公司 | Satellite image high-precision field boundary extracting method based on Multi-scale model study |
CN109934836A (en) * | 2017-12-15 | 2019-06-25 | 中国科学院深圳先进技术研究院 | A kind of detection method of image sharpening |
CN110288558A (en) * | 2019-06-26 | 2019-09-27 | 纳米视觉(成都)科技有限公司 | A kind of super depth image fusion method and terminal |
CN110298858A (en) * | 2019-07-01 | 2019-10-01 | 北京奇艺世纪科技有限公司 | A kind of image cropping method and device |
CN111179291A (en) * | 2019-12-27 | 2020-05-19 | 凌云光技术集团有限责任公司 | Edge pixel point extraction method and device based on neighborhood relationship |
CN111300987A (en) * | 2020-02-27 | 2020-06-19 | 深圳怡化电脑股份有限公司 | Ink jet interval time determining method, device, computer equipment and storage medium |
CN111445491A (en) * | 2020-03-24 | 2020-07-24 | 山东智翼航空科技有限公司 | Three-neighborhood maximum difference value edge detection narrow lane guidance algorithm for micro unmanned aerial vehicle |
CN111986096A (en) * | 2019-05-22 | 2020-11-24 | 上海哔哩哔哩科技有限公司 | Cartoon generation method and cartoon generation device based on edge extraction |
US11126808B1 (en) | 2019-05-30 | 2021-09-21 | Owens-Brockway Glass Container Inc. | Methods for dot code image processing on a glass container |
CN114419081A (en) * | 2022-03-28 | 2022-04-29 | 南昌工程学院 | Image semantic segmentation method and system and readable storage medium |
CN115578627A (en) * | 2022-09-21 | 2023-01-06 | 凌度(广东)智能科技发展有限公司 | Monocular image boundary identification method and device, medium and curtain wall robot |
CN116827899A (en) * | 2023-08-30 | 2023-09-29 | 湖南于一科技有限公司 | Object adding method and device based on Internet tool APP |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101286233A (en) * | 2008-05-19 | 2008-10-15 | 重庆邮电大学 | Fuzzy edge detection method based on object cloud |
CN101968885A (en) * | 2010-09-25 | 2011-02-09 | 西北工业大学 | Method for detecting remote sensing image change based on edge and grayscale |
US20120308153A1 (en) * | 2011-06-03 | 2012-12-06 | Sunghyun Hwang | Device and method of removing noise in edge area |
JP2013165352A (en) * | 2012-02-09 | 2013-08-22 | Canon Inc | Imaging apparatus, control method of the same and program |
-
2013
- 2013-10-12 CN CN201310475874.8A patent/CN103530878B/en not_active Expired - Fee Related
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101286233A (en) * | 2008-05-19 | 2008-10-15 | 重庆邮电大学 | Fuzzy edge detection method based on object cloud |
CN101968885A (en) * | 2010-09-25 | 2011-02-09 | 西北工业大学 | Method for detecting remote sensing image change based on edge and grayscale |
US20120308153A1 (en) * | 2011-06-03 | 2012-12-06 | Sunghyun Hwang | Device and method of removing noise in edge area |
JP2013165352A (en) * | 2012-02-09 | 2013-08-22 | Canon Inc | Imaging apparatus, control method of the same and program |
Non-Patent Citations (2)
Title |
---|
JIA XIBIN 等: "A novel edge detection in medical images by fusing of multi-model from different spatial structure clues", 《2ND INTERNATIONAL CONFERENCE ON BIOMEDICAL ENGINEERING AND BIOTECHNOLOGY(ICBEB)》 * |
张引 等: "复杂背景下文本提取的色彩边缘检测算子设计", 《软件学报》 * |
Cited By (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105139384B (en) * | 2015-08-11 | 2017-12-26 | 北京天诚盛业科技有限公司 | The method and apparatus of defect capsule detection |
CN105139384A (en) * | 2015-08-11 | 2015-12-09 | 北京天诚盛业科技有限公司 | Defective capsule detection method and apparatus |
CN106504263A (en) * | 2016-11-04 | 2017-03-15 | 辽宁工程技术大学 | A kind of quick continuous boundary extracting method of image |
CN106504263B (en) * | 2016-11-04 | 2019-07-12 | 辽宁工程技术大学 | A kind of quick continuous boundary extracting method of image |
CN108734158B (en) * | 2017-04-14 | 2020-05-19 | 成都唐源电气股份有限公司 | Real-time train number identification method and device |
CN108734158A (en) * | 2017-04-14 | 2018-11-02 | 成都唐源电气股份有限公司 | A kind of real-time train number identification method and device |
CN107909555A (en) * | 2017-11-27 | 2018-04-13 | 北京大恒图像视觉有限公司 | A kind of gridding noise elimination method for keeping acutance |
CN107909555B (en) * | 2017-11-27 | 2020-06-02 | 北京大恒图像视觉有限公司 | Sharpness-keeping grid noise elimination method |
CN109934836A (en) * | 2017-12-15 | 2019-06-25 | 中国科学院深圳先进技术研究院 | A kind of detection method of image sharpening |
CN108513044A (en) * | 2018-04-16 | 2018-09-07 | 深圳市华星光电技术有限公司 | Picture smooth treatment method, electronic device and computer readable storage medium |
CN108513044B (en) * | 2018-04-16 | 2020-11-13 | 深圳市华星光电技术有限公司 | Image smoothing method, electronic device and computer readable storage medium |
CN108830870A (en) * | 2018-05-21 | 2018-11-16 | 千寻位置网络有限公司 | Satellite image high-precision field boundary extracting method based on Multi-scale model study |
CN108830870B (en) * | 2018-05-21 | 2021-12-28 | 千寻位置网络有限公司 | Satellite image high-precision farmland boundary extraction method based on multi-scale structure learning |
CN111986096B (en) * | 2019-05-22 | 2024-02-13 | 上海哔哩哔哩科技有限公司 | Cartoon generation method and cartoon generation device based on edge extraction |
CN111986096A (en) * | 2019-05-22 | 2020-11-24 | 上海哔哩哔哩科技有限公司 | Cartoon generation method and cartoon generation device based on edge extraction |
US11126808B1 (en) | 2019-05-30 | 2021-09-21 | Owens-Brockway Glass Container Inc. | Methods for dot code image processing on a glass container |
CN110288558A (en) * | 2019-06-26 | 2019-09-27 | 纳米视觉(成都)科技有限公司 | A kind of super depth image fusion method and terminal |
CN110298858A (en) * | 2019-07-01 | 2019-10-01 | 北京奇艺世纪科技有限公司 | A kind of image cropping method and device |
CN110298858B (en) * | 2019-07-01 | 2021-06-22 | 北京奇艺世纪科技有限公司 | Image clipping method and device |
CN111179291A (en) * | 2019-12-27 | 2020-05-19 | 凌云光技术集团有限责任公司 | Edge pixel point extraction method and device based on neighborhood relationship |
CN111179291B (en) * | 2019-12-27 | 2023-10-03 | 凌云光技术股份有限公司 | Edge pixel point extraction method and device based on neighborhood relation |
CN111300987A (en) * | 2020-02-27 | 2020-06-19 | 深圳怡化电脑股份有限公司 | Inkjet interval time determination method and device, computer equipment and storage medium |
CN111445491B (en) * | 2020-03-24 | 2023-09-15 | 山东智翼航空科技有限公司 | Three-neighborhood maximum difference edge detection narrow channel guiding method for miniature unmanned aerial vehicle |
CN111445491A (en) * | 2020-03-24 | 2020-07-24 | 山东智翼航空科技有限公司 | Three-neighborhood maximum-difference edge detection narrow-channel guidance algorithm for micro unmanned aerial vehicles |
CN114419081B (en) * | 2022-03-28 | 2022-06-21 | 南昌工程学院 | Image semantic segmentation method and system and readable storage medium |
CN114419081A (en) * | 2022-03-28 | 2022-04-29 | 南昌工程学院 | Image semantic segmentation method and system and readable storage medium |
CN115578627A (en) * | 2022-09-21 | 2023-01-06 | 凌度(广东)智能科技发展有限公司 | Monocular image boundary identification method and device, medium and curtain wall robot |
CN115578627B (en) * | 2022-09-21 | 2023-05-09 | 凌度(广东)智能科技发展有限公司 | Monocular image boundary recognition method, monocular image boundary recognition device, medium and curtain wall robot |
CN116827899A (en) * | 2023-08-30 | 2023-09-29 | 湖南于一科技有限公司 | Object adding method and device based on Internet tool APP |
CN116827899B (en) * | 2023-08-30 | 2023-12-01 | 湖南于一科技有限公司 | Object adding method and device based on Internet tool APP |
Also Published As
Publication number | Publication date |
---|---|
CN103530878B (en) | 2016-01-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103530878B (en) | Edge extraction method based on fusion strategy | |
CN108364017B (en) | Image quality classification method, system and terminal device | |
Aquino et al. | A new methodology for estimating the grapevine-berry number per cluster using image analysis | |
CN103886589B (en) | Object-oriented automated high-precision edge extraction method | |
CN116205919B (en) | Hardware part production quality detection method and system based on artificial intelligence | |
CN107220649A (en) | Plain-color fabric defect detection and classification method | |
CN106709450A (en) | Recognition method and system for fingerprint images | |
Jirachaweng et al. | Residual orientation modeling for fingerprint enhancement and singular point detection | |
CN110298790A (en) | Method and device for super-resolution reconstruction of images | |
CN108230321A (en) | Defect inspection method and device | |
CN102201120B (en) | Multifeature-based target object contour detection method | |
CN106228528B (en) | Multi-focus image fusion method based on decision map and sparse representation | |
CN102879401A (en) | Method for automatically detecting and classifying textile flaws based on pattern recognition and image processing | |
CN109636824A (en) | Multi-target counting method based on image recognition technology | |
CN103955922A (en) | Method for detecting flaws of printed fabric based on Gabor filter | |
Selvakumar et al. | The performance analysis of edge detection algorithms for image processing | |
CN106340000A (en) | Bone age assessment method | |
CN110223266A (en) | Railway wheelset tread damage fault diagnosis method based on deep convolutional neural networks | |
CN111091109A (en) | Method, system and equipment for predicting age and gender based on face image | |
CN109191430A (en) | Plain-color fabric defect detection method based on Laws texture combined with one-class SVM | |
CN111415339B (en) | Image defect detection method for complex texture industrial product | |
CN105447859A (en) | Field wheat aphid counting method | |
CN107240086B (en) | Fabric defect detection method based on the integral image method | |
CN105787912A (en) | Classification-based step-edge sub-pixel localization method | |
CN115171218A (en) | Material sample feeding abnormal behavior recognition system based on image recognition technology |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 2016-01-13; Termination date: 2019-10-12 |