CN110956640A - Heterogeneous image edge point detection and registration method - Google Patents
Heterogeneous image edge point detection and registration method
- Publication number
- CN110956640A CN110956640A CN201911262672.9A CN201911262672A CN110956640A CN 110956640 A CN110956640 A CN 110956640A CN 201911262672 A CN201911262672 A CN 201911262672A CN 110956640 A CN110956640 A CN 110956640A
- Authority
- CN
- China
- Prior art keywords
- edge
- points
- point
- image
- gradient
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
- G06T7/337—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10048—Infrared image
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y04—INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
- Y04S—SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
- Y04S10/00—Systems supporting electrical power generation, transmission or distribution
- Y04S10/50—Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention belongs to the technical field of image processing and discloses a heterogeneous image edge point detection and registration method. The method calculates the gray gradient value and gradient direction of an infrared image and a visible light image; compares the gradient value of each pixel with the gradient contrast factors of the correlated pixels in its neighborhood, screening out edge candidate points by a neighborhood detection method to form an edge candidate point matrix; accurately extracts strong edge points through edge correlation detection, obtaining the real edge point sets of the infrared image and the visible light image; and performs bilateral neighbor matching to obtain the final edge registration point set.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a heterogeneous image edge point detection and registration method.
Background
Grasping the operating state and operating environment of distribution network equipment and discovering hidden operating dangers in time is a key problem in power equipment operation and maintenance management. In image-based detection, foreign research on power equipment image detection and analysis mainly works on improving the efficiency and accuracy of transmission line flight inspection and image monitoring, while domestic research on image-based equipment state detection and automatic analysis remains largely at the stage of theoretical method analysis and feasibility studies.
At the present stage, the emergence of intelligent platforms and infrared cameras based on mobile phone peripherals, together with the spread of partial discharge detection technology, provides the preconditions for low-cost detection of distribution network equipment. Fusing infrared and visible light detection to make distribution network state detection intelligent has therefore become the development trend of state detection work in recent years.
In practical fusion detection, whether in binocular stereo vision or in image fusion, automatic image registration is indispensable. For example, in the medical field, multiple scans of the cranium or the eyeball are registered and fused to obtain the clearest diagnostic image; in geological survey, remote sensing images captured by unmanned aerial vehicles are registered and stitched. When measuring the spatial position of electrical equipment, the infrared and visible light images must be registered to obtain measurement results with small error.
Unlike homologous image registration, infrared and visible light images differ greatly in color and resolution, and infrared images have unclear textures and relatively heavy noise, so registering infrared against visible light images is difficult and of low accuracy. The invention therefore provides a heterogeneous image edge point detection and registration method that improves registration precision and supports state detection of power distribution equipment.
Disclosure of Invention
The invention provides a heterogeneous image edge point detection and registration method, which solves problems such as difficult edge detection and low registration precision caused by the large difference in texture definition between an infrared image and a visible light gray-scale image.
The invention can be realized by the following technical scheme:
a method for detecting and registering edge points of a heterogeneous image comprises the following steps:
Step one: for the infrared image and the visible light image to be registered, calculate the gray gradient value and gradient direction of the images.

Compute the horizontal and vertical gray gradients of the two images. Let I be the image pixel value matrix and let Gu and Gv denote the gradients along the u and v axes; the transverse and longitudinal gray gradient matrices of the image are

Gu(u, v) = I(u + 1, v) − I(u, v), Gv(u, v) = I(u, v + 1) − I(u, v) (1)

The gradient value G and direction θ of a pixel are then

G = √(Gu² + Gv²), θ = tan⁻¹(Gv / Gu) (2)
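The gradient computation can be sketched in NumPy. The text does not fix a difference scheme for Gu and Gv, so simple forward differences are assumed here, and arctan2 is used in place of tan⁻¹(Gv/Gu) to avoid division by zero:

```python
import numpy as np

def gray_gradient(img):
    """Per-pixel gradient magnitude G and direction theta of a grayscale
    image, using forward differences (one plausible reading of Eqs. (1)-(2);
    the difference scheme is an assumption)."""
    I = img.astype(np.float64)
    Gu = np.zeros_like(I)  # horizontal (u-axis) gradient
    Gv = np.zeros_like(I)  # vertical (v-axis) gradient
    Gu[:, :-1] = I[:, 1:] - I[:, :-1]
    Gv[:-1, :] = I[1:, :] - I[:-1, :]
    G = np.sqrt(Gu**2 + Gv**2)
    theta = np.arctan2(Gv, Gu)  # arctan2 avoids division by zero when Gu = 0
    return G, theta
```
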
Step two: perform preliminary screening by a neighborhood detection method, comparing the gradient value of each pixel to be detected with the gradient contrast factors of the correlated pixels, screening out edge candidate points to form edge candidate point matrices.

Examine the 3×3 neighborhood of every pixel one by one, compare the pixel's gradient value with those of the correlated pixels along the gradient direction within the neighborhood, and extract edge candidate points.
Let the gradient value of the pixel to be detected be Gp; the values of the correlated pixels along the gradient direction θp within the neighborhood are G1, G2, G3 and G4, as shown in fig. 2.
Compare the gradient value of the pixel to be detected with those of the correlated pixels, and let Gp+ and Gp− be the positive and negative contrast factors:

Gp− = G3·tanθp + G4·(1 − tanθp)
Gp+ = G2·tanθp + G1·(1 − tanθp) (3)

When Gp > Gp+ or Gp > Gp−, take pixel Gp as an edge candidate point, and combine its gradient value Gp with the contrast factors into the edge candidate point matrix E = (Gp, Gp+, Gp−).
Traverse the whole image in this way and extract all edge candidate point matrices.
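A minimal sketch of the neighborhood screening of step two. The exact placement of the correlated pixels G1 to G4 depends on Fig. 2, which is not recoverable from the text, so one plausible layout around the gradient direction is assumed:

```python
import numpy as np

def edge_candidates(G, theta):
    """Screen 3x3 neighborhoods for edge candidates per Eq. (3).
    The interpolated contrast factors Gp+ / Gp- are built from four
    neighbors straddling the gradient direction; taking G1/G2 on the
    positive side and G3/G4 on the negative side is an assumption."""
    H, W = G.shape
    candidates = []  # entries (row, col, Gp, Gp_plus, Gp_minus)
    t = np.abs(np.tan(theta))
    for r in range(1, H - 1):
        for c in range(1, W - 1):
            w = min(t[r, c], 1.0)   # clamp interpolation weight to [0, 1]
            # assumed neighbor layout on the gradient line through (r, c)
            G1, G2 = G[r, c + 1], G[r - 1, c + 1]   # positive side
            G4, G3 = G[r, c - 1], G[r + 1, c - 1]   # negative side
            Gp_plus = G2 * w + G1 * (1 - w)
            Gp_minus = G3 * w + G4 * (1 - w)
            if G[r, c] > Gp_plus or G[r, c] > Gp_minus:
                candidates.append((r, c, G[r, c], Gp_plus, Gp_minus))
    return candidates
```
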
Step three: on the basis of the edge candidate points, accurately extract strong edge points through edge correlation detection to serve as real edge points.
Let the edge point candidate matrix set of the infrared image be Eh = (Eh1, Eh2, Eh3, ..., Ehn) and that of the visible light image be El = (El1, El2, El3, ..., Eln).
Define the edge correlation between corresponding edge point candidate matrices as ECt, where t = 1, 2, ..., n:

ECt = Σ_{i,j} (Eht(i, j) − Ēht)(Elt(i, j) − Ēlt) / √[ Σ_{i,j} (Eht(i, j) − Ēht)² · Σ_{i,j} (Elt(i, j) − Ēlt)² ] (4)

where Ēht and Ēlt are the mean values of the matrices Eht and Elt respectively, and the sums run over the M × N matrix entries, M × N being the matrix dimension. ECt takes values in [−1, 1] and effectively judges the degree of similarity of two edge images: the more similar the edge images, the larger ECt; when the two edge images coincide completely, ECt is 1; when the similarity is low, ECt is close to 0. A threshold C is therefore set: when ECt > C, the edge point is extracted as a real edge point; in the invention, C = 0.85.
Finally, the sets of real edge points of the infrared image and the visible light image are Fh = (Fh1, Fh2, Fh3, ..., Fhm) and Fl = (Fl1, Fl2, Fl3, ..., Flm), where m < n.
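Since the ECt formula itself was lost in extraction, it is reconstructed here as standard normalized cross-correlation, which matches its stated properties (range [−1, 1], value 1 for identical edge images):

```python
import numpy as np

def edge_correlation(E_h, E_l):
    """Normalized cross-correlation between two equally sized edge
    matrices -- a reconstruction of the ECt of step three, assumed from
    its stated properties rather than taken from the original formula."""
    A = np.asarray(E_h, dtype=np.float64)
    B = np.asarray(E_l, dtype=np.float64)
    a = A - A.mean()
    b = B - B.mean()
    denom = np.sqrt((a**2).sum() * (b**2).sum())
    if denom == 0:
        return 0.0  # a flat matrix carries no edge structure to correlate
    return float((a * b).sum() / denom)
```

Candidate matrices whose correlation exceeds the threshold C = 0.85 would then be kept as real edge points.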
Step four: perform edge point registration between the infrared image and the visible light image with a nearest-neighbor bilateral matching algorithm to obtain the registration point set.
On the basis of the third step, firstly, the visible light image is taken as a reference, and the infrared image is registered to the visible light image, namely infrared → visible light.
Let the real edge point set corresponding to the infrared image Ih to be registered be Fh and that corresponding to the visible light image Il be Fl. For the i-th element Fhi of Fh, define the distance from Fhi to the set Fl as D(Fhi, Fl).

For Fhi, search D(Fhi, Fl) for the element Fli corresponding to the maximum value as its matching point. To ensure the suitability of the algorithm, let Fhi-max and Fhi-smax denote the maximum and second maximum of D(Fhi, Fl); only when the two satisfy Fhi-smax / Fhi-max < R is the Fli corresponding to the maximum Fhi-max selected as the matching point, otherwise the match is regarded as failed. To ensure registration accuracy, R is taken as 0.9.
When the match succeeds, put Fhi-max and Fli into the one-sided matching set M(Ih, Il) as a matching point pair; then traverse the whole real edge point set Fh of the infrared image and put all matching points into the matching set M(Ih, Il).
One-sided matching can produce many-to-one situations, i.e. multiple elements of Fh may be matched to the same element of Fl. To solve this problem, the infrared image is taken as the reference and the visible light image is registered to it, i.e. visible light → infrared, obtaining the matching set M(Il, Ih).

Finally, the elements common to M(Ih, Il) and M(Il, Ih) are retained as the points of the final successful match.
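The bilateral matching of step four can be sketched as follows, assuming the ratio test takes the form Fhi-smax / Fhi-max < R and that sim[i][j] holds a similarity value (e.g. an edge correlation) between infrared edge point i and visible edge point j:

```python
def bilateral_match(sim, R=0.9):
    """Bilateral nearest-neighbor matching on a similarity matrix.
    A forward match must pass the ratio test smax/max < R, and only
    pairs confirmed in both directions survive -- a sketch of the
    infrared->visible / visible->infrared consistency check."""
    def one_sided(S):
        matches = {}
        for i, row in enumerate(S):
            order = sorted(range(len(row)), key=lambda j: row[j], reverse=True)
            best = row[order[0]]
            second = row[order[1]] if len(row) > 1 else 0.0
            if best > 0 and second / best < R:   # ratio test (assumed form)
                matches[i] = order[0]
        return matches
    fwd = one_sided(sim)                             # infrared -> visible
    bwd = one_sided([list(c) for c in zip(*sim)])    # visible -> infrared
    # keep only mutually consistent pairs
    return {(i, j) for i, j in fwd.items() if bwd.get(j) == i}
```
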
The beneficial technical effects of the invention are as follows:
Because the infrared image loses detail texture severely, its most salient corner points are distributed along object contours, while the visible light image is clearer and contains distinct contour and texture information. Edge point detection therefore makes the corresponding infrared and visible light matching points more accurate and the matching effect more robust.
The method accurately extracts strong edge points through neighborhood detection and edge correlation detection, reducing registration errors caused by the differences between the images and alleviating the large gradient differences that arise because the detail texture of the visible light image is clearer than that of the infrared image; meanwhile, the nearest-neighbor bilateral matching algorithm yields the most accurate matching point set, improving registration precision.
The whole calculation process of the method is simple and fast, its accuracy is high, and it can be popularized in power equipment monitoring.
Drawings
FIG. 1 is a schematic overall flow diagram of the present invention;
FIG. 2 is a schematic diagram illustrating edge candidate extraction according to the present invention;
FIG. 3 is a schematic diagram of edge point detection and registration according to the present invention;
fig. 4 is another schematic diagram of edge point detection and registration according to the present invention.
Detailed Description
The following detailed description of the preferred embodiments will be made with reference to the accompanying drawings.
As shown in fig. 1, the present invention provides a heterogeneous image edge point detection and registration method. First, for the infrared image and the visible light image to be registered, the gray gradient value and gradient direction of the images are calculated. Preliminary screening is then carried out by a neighborhood detection method to screen out edge candidate points. On the basis of the edge candidate points, strong edge points are accurately extracted through edge correlation detection as real edge points. Finally, edge point registration between the infrared image and the visible light image is performed with a nearest-neighbor bilateral matching algorithm to obtain the registration point set.
Firstly, preprocess the image, deciding whether a filtering operation is needed according to the image quality. To redistribute the gray values of the image, apply a linear transformation to the gray-scale image and stretch the gray values to 0–255; the specific formula is

I′ = 255 · (I − I_min) / (I_max − I_min)

where I_min and I_max are the minimum and maximum gray values of the image.
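The gray-value stretching described above can be sketched as follows; the exact linear transform was lost in extraction, so standard min-max normalization to 0-255 is assumed:

```python
import numpy as np

def stretch_gray(img):
    """Linearly stretch the gray values of an image to the full 0-255
    range (min-max normalization, assumed form of the lost formula)."""
    I = img.astype(np.float64)
    lo, hi = I.min(), I.max()
    if hi == lo:
        # a constant image has no contrast to stretch
        return np.zeros_like(I, dtype=np.uint8)
    return np.round(255.0 * (I - lo) / (hi - lo)).astype(np.uint8)
```
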
secondly, the resolution of the image is adjusted, and the resolution of the highest one of the two infrared and visible images to be registered is taken as a reference resolution, and the other image is scaled in an equal proportion, for example, the two infrared and visible images to be registered of 200 × 400 and 400 × 800, 400 × 800 needs to be scaled to the proportion of 200 × 400, so that the SIFT descriptor extraction can be carried out by adopting the template with the same size in the following process, and preparation is made for realizing the scale invariance in the following process.
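A small helper for the resolution step, assuming proportional image sizes as in the 200×400 / 400×800 example (the function name and interface are illustrative, not from the text):

```python
def reference_scale(size_a, size_b):
    """Pick the lower-resolution image size as the common reference and
    return the factor by which the larger image must be scaled down,
    following the 200x400 / 400x800 example; proportional (same aspect
    ratio) sizes are assumed."""
    (wa, ha), (wb, hb) = size_a, size_b
    if wa * ha <= wb * hb:
        ref, factor = size_a, wa / wb
    else:
        ref, factor = size_b, wb / wa
    return ref, factor
```
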
Then, for the infrared image and the visible light image to be registered, the gray gradient value and gradient direction of the images are calculated.

Compute the horizontal and vertical gray gradients of the two images. Let I be the image pixel value matrix and let Gu and Gv denote the gradients along the u and v axes; the transverse and longitudinal gray gradient matrices of the image are

Gu(u, v) = I(u + 1, v) − I(u, v), Gv(u, v) = I(u, v + 1) − I(u, v)

The gradient value G and direction θ of a pixel are then

G = √(Gu² + Gv²), θ = tan⁻¹(Gv / Gu)
Further, preliminary screening is carried out by the neighborhood detection method to screen out edge candidate points.

Examine the 3×3 neighborhood of every pixel one by one, compare the pixel's gradient value with those of the correlated pixels along the gradient direction within the neighborhood, and extract edge candidate points.
Let the gradient value of the pixel to be detected be Gp; the values of the correlated pixels along the gradient direction θp within the neighborhood are G1, G2, G3 and G4, as shown in fig. 2.
Compare the gradient value of the pixel to be detected with those of the correlated pixels, and let Gp+ and Gp− be the positive and negative contrast factors:

Gp− = G3·tanθp + G4·(1 − tanθp)
Gp+ = G2·tanθp + G1·(1 − tanθp)

When Gp > Gp+ or Gp > Gp−, take pixel Gp as an edge candidate point, and combine its gradient value Gp with the contrast factors into the edge candidate point matrix E = (Gp, Gp+, Gp−).
Traverse the whole image in this way and extract all edge candidate point matrices.
Further, on the basis of the edge candidate points, strong edge points are accurately extracted through edge correlation detection as real edge points.
Let the edge point candidate matrix set of the infrared image be Eh = (Eh1, Eh2, Eh3, ..., Ehn) and that of the visible light image be El = (El1, El2, El3, ..., Eln).
Calculate the edge correlation ECt between corresponding edge point candidate matrices, where t = 1, 2, ..., n:

ECt = Σ_{i,j} (Eht(i, j) − Ēht)(Elt(i, j) − Ēlt) / √[ Σ_{i,j} (Eht(i, j) − Ēht)² · Σ_{i,j} (Elt(i, j) − Ēlt)² ]

where Ēht and Ēlt are the mean values of the matrices Eht and Elt, and the sums run over the M × N matrix entries. ECt takes values in [−1, 1] and effectively judges the degree of similarity of two edge images: the more similar the edge images, the larger ECt; when the two edge images coincide completely, ECt is 1; when the similarity is low, ECt is close to 0. A threshold C is therefore set: when ECt > C, the edge point is extracted as a real edge point; in the invention, C = 0.85.
Finally, the sets of real edge points of the infrared image and the visible light image are Fh = (Fh1, Fh2, Fh3, ..., Fhm) and Fl = (Fl1, Fl2, Fl3, ..., Flm), where m < n.
And finally, performing edge point registration on the infrared image and the visible light image by using a neighbor matching and bilateral algorithm to obtain a registration point set.
First, the infrared image is registered to the visible image, i.e. infrared → visible light, with the visible image as a reference.
Let the real edge point set corresponding to the infrared image Ih to be registered be Fh and that corresponding to the visible light image Il be Fl. For the i-th element Fhi of Fh, define the distance from Fhi to the set Fl as D(Fhi, Fl).

For Fhi, search D(Fhi, Fl) for the element Fli corresponding to the maximum value as its matching point. To ensure the suitability of the algorithm, let Fhi-max and Fhi-smax denote the maximum and second maximum of D(Fhi, Fl); only when the two satisfy Fhi-smax / Fhi-max < R is the Fli corresponding to the maximum Fhi-max selected as the matching point, otherwise the match is regarded as failed. To ensure registration accuracy, R is taken as 0.9.
When the match succeeds, put Fhi-max and Fli into the one-sided matching set M(Ih, Il) as a matching point pair; then traverse the whole real edge point set Fh of the infrared image and put all matching points into the matching set M(Ih, Il).
One-sided matching can produce many-to-one situations, i.e. multiple elements of Fh may be matched to the same element of Fl. To solve this problem, the infrared image is taken as the reference and the visible light image is registered to it, i.e. visible light → infrared, obtaining the matching set M(Il, Ih). Finally, the elements common to M(Ih, Il) and M(Il, Ih) are retained as the points of the final successful match.
The results of edge detection and registration of the infrared and visible light images of the power distribution equipment are shown in fig. 3 and fig. 4 (infrared image on the left, visible light image on the right). The root mean square error (RMSE) after registration is used as the evaluation index:

RMSE = √[ (1/n) Σ_i ( (x_i − x′_i)² + (y_i − y′_i)² ) ]

where (x_i, y_i) are the coordinates of a registration point obtained by image registration and (x′_i, y′_i) are the coordinates of the theoretical registration point after applying the theoretical perspective transformation matrix. The index objectively reflects the matching precision of the registration algorithm: the smaller its value, the higher the registration precision. For the image in fig. 3, the RMSE of edge point registration reaches 6.89 with 87.2% correct registration points, and the registration points are distributed comprehensively over the image; in fig. 4 the RMSE is 8.41 with 77.5% correct registration points. The method therefore has high accuracy for heterogeneous image edge detection and registration and is feasible.
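The RMSE evaluation of the embodiment can be sketched as:

```python
import numpy as np

def registration_rmse(points, points_theory):
    """Root-mean-square error between registered points (x_i, y_i) and
    their theoretical positions (x'_i, y'_i) under the ground-truth
    perspective transform."""
    P = np.asarray(points, dtype=np.float64)
    Q = np.asarray(points_theory, dtype=np.float64)
    # per-point squared Euclidean error, averaged, then square-rooted
    return float(np.sqrt(((P - Q) ** 2).sum(axis=1).mean()))
```
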
Although specific embodiments of the present invention have been described above, it will be appreciated by those skilled in the art that these are merely examples and that many variations or modifications may be made to these embodiments without departing from the principles and spirit of the invention, the scope of which is therefore defined by the appended claims.
Claims (3)
1. A heterogeneous image edge point detection and registration method, characterized by comprising the following steps:

Step one: for the infrared image and the visible light image to be registered, calculate the gray gradient value and gradient direction of the images.

Step two: perform preliminary screening by a neighborhood detection method, comparing the gradient value of each pixel with the gradient contrast factors of the correlated pixels in the neighborhood, screening out edge candidate points to form an edge candidate point matrix.

Step three: on the basis of the edge candidate points, accurately extract strong edge points through edge correlation detection as real edge points.

Step four: perform edge point registration between the infrared image and the visible light image with a nearest-neighbor bilateral matching algorithm to obtain the registration point set.
2. The heterogeneous image edge point detection and registration method according to claim 1, characterized in that in step two preliminary screening is performed by a neighborhood detection method: the gradient value of each pixel is compared with the gradient contrast factors of the correlated pixels in the neighborhood, and edge candidate points are screened out to form an edge candidate point matrix.

Examine the 3×3 neighborhood of every pixel one by one, compare the pixel's gradient value with those of the correlated pixels along the gradient direction within the neighborhood, and extract edge candidate points.

Let the gradient value of the pixel to be detected be Gp; the values of the correlated pixels along the gradient direction θp within the neighborhood are G1, G2, G3 and G4, as shown in fig. 2.

Compare the gradient value of the pixel to be detected with those of the correlated pixels, and let Gp+ and Gp− be the positive and negative contrast factors:

Gp− = G3·tanθp + G4·(1 − tanθp)
Gp+ = G2·tanθp + G1·(1 − tanθp)

When Gp > Gp+ or Gp > Gp−, take pixel Gp as an edge candidate point, and combine its gradient value Gp with the contrast factors into the edge candidate point matrix E = (Gp, Gp+, Gp−).

Traverse the whole image in this way and extract all edge candidate point matrices.
3. The heterogeneous image edge point detection and registration method according to claim 1, characterized in that in step three, on the basis of the edge candidate points, strong edge points are accurately extracted through edge correlation detection as real edge points.

Let the edge point candidate matrix set of the infrared image be Eh = (Eh1, Eh2, Eh3, ..., Ehn) and that of the visible light image be El = (El1, El2, El3, ..., Eln).

Define the edge correlation between corresponding edge point candidate matrices as ECt, where t = 1, 2, ..., n:

ECt = Σ_{i,j} (Eht(i, j) − Ēht)(Elt(i, j) − Ēlt) / √[ Σ_{i,j} (Eht(i, j) − Ēht)² · Σ_{i,j} (Elt(i, j) − Ēlt)² ]

where Ēht and Ēlt are the mean values of the matrices Eht and Elt, and M × N is the matrix dimension. ECt takes values in [−1, 1] and effectively judges the degree of similarity of two edge images: the more similar the edge images, the larger ECt; when the two edge images coincide completely, ECt is 1; when the similarity is low, ECt is close to 0. A threshold C is therefore set: when ECt > C, the edge point is extracted as a real edge point; in the invention, C = 0.85.

Finally, the sets of real edge points of the infrared image and the visible light image are Fh = (Fh1, Fh2, Fh3, ..., Fhm) and Fl = (Fl1, Fl2, Fl3, ..., Flm), where m < n.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911262672.9A CN110956640B (en) | 2019-12-04 | 2019-12-04 | Heterogeneous image edge point detection and registration method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110956640A true CN110956640A (en) | 2020-04-03 |
CN110956640B CN110956640B (en) | 2023-05-05 |
Family
ID=69980759
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911262672.9A Active CN110956640B (en) | 2019-12-04 | 2019-12-04 | Heterogeneous image edge point detection and registration method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110956640B (en) |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5550937A (en) * | 1992-11-23 | 1996-08-27 | Harris Corporation | Mechanism for registering digital images obtained from multiple sensors having diverse image collection geometries |
US6519372B1 (en) * | 1999-08-31 | 2003-02-11 | Lockheed Martin Corporation | Normalized crosscorrelation of complex gradients for image autoregistration |
CN102999939B (en) * | 2012-09-21 | 2016-02-17 | 魏益群 | Coordinate acquiring device, real-time three-dimensional reconstructing system and method, three-dimensional interactive device |
CN106257535B (en) * | 2016-08-11 | 2019-05-21 | 河海大学常州校区 | Electrical equipment based on SURF operator is infrared and visible light image registration method |
CN107464252A (en) * | 2017-06-30 | 2017-12-12 | 南京航空航天大学 | A kind of visible ray based on composite character and infrared heterologous image-recognizing method |
CN107862708A (en) * | 2017-11-08 | 2018-03-30 | 合肥工业大学 | A kind of SAR and visible light image registration method |
CN108038476B (en) * | 2018-01-03 | 2019-10-11 | 东北大学 | A kind of facial expression recognition feature extracting method based on edge detection and SIFT |
CN110223330B (en) * | 2019-06-12 | 2021-04-09 | 国网河北省电力有限公司沧州供电分公司 | Registration method and system for visible light and infrared images |
- 2019-12-04: CN application CN201911262672.9A filed; granted as CN110956640B (status: Active)
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111833384A (en) * | 2020-05-29 | 2020-10-27 | 武汉卓目科技有限公司 | Method and device for quickly registering visible light and infrared images |
CN111833384B (en) * | 2020-05-29 | 2023-12-26 | 武汉卓目科技有限公司 | Method and device for rapidly registering visible light and infrared images |
Also Published As
Publication number | Publication date |
---|---|
CN110956640B (en) | 2023-05-05 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||