CN107369168B - Method for purifying registration points under heavy pollution background
- Publication number
- CN107369168B (application CN201710423423.8A)
- Authority
- CN
- China
- Prior art keywords
- registration
- points
- point pairs
- sampling
- ratio
- Prior art date
- Legal status: Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a method for purifying registration points under a heavy pollution background. It belongs to the field of data purification and relates to purifying registration points when the true registration points of an image are severely polluted. The method comprises the following steps: randomly extract 2 registration point pairs from the registration point pairs; connect the two sampling points in each of the two images and compute the ratio of the sums of pixel gray levels on the line segments from the two endpoints to the midpoint; construct a kernel set from the differences of the gray-sum ratios of the two images obtained in step two; merge the repeated redundant point pairs in the kernel set; and sample in the kernel set using the RANSAC method, detect inliers in the full set, extract the real registration point pairs, and solve the registration model of the two images. By constructing the kernel set and sampling within it, the invention improves sampling efficiency, handles registration-based recognition of a target across different scenes, different viewing angles, and changes in the target's shape and appearance, and thereby solves the purification of extremely polluted registration point pairs.
Description
Technical Field
The invention belongs to the technical field of data purification and relates to image registration; in particular, it relates to a method for purifying registration points under a heavy pollution background, i.e., a method for purifying registration points when the true registration points of an image are severely polluted.
Background
Classical image registration is achieved by descriptors, and the registration point pairs pointed to by local feature descriptors are usually contaminated; that is, descriptor-based registration contains both correct and incorrect matches. For this reason, in the field of image registration, the registration point pairs matched by descriptors need to be purified. RANSAC (Random Sample Consensus; see, e.g., M. Brown, D. G. Lowe, Automatic panoramic image stitching using invariant features) is commonly used for the purification of descriptor matches, but when the descriptors are severely polluted, purification directly with RANSAC is slow, and in practice the problem remains unsolved.
Regarding descriptor contamination: on one hand, it is determined by the descriptor algorithm itself. Descriptors are generally established in local regions of an image, while image registration must be performed on the global image, and registration point pairs associated locally cannot be guaranteed to remain associated in a global sense. On the other hand, the complexity of the registration task sometimes makes the contamination severe, as in the registration of an object across different scenes: even for the same object, the scene presented at different times will differ. Contamination therefore always accompanies registration and is sometimes severe, and the purification of severely contaminated descriptor pairs is an unavoidable problem.
With regard to the problem of RANSAC purification efficiency, there have been some efforts (Chum, Matas, Optimal randomized RANSAC, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2008, 30(8):1472-1482; Tom Botterill, Steven Mills, Richard Green, Fast RANSAC hypothesis generation for essential matrix estimation, Digital Image Computing: Techniques and Applications (DICTA), 2011 International Conference on, IEEE, 2011:561-566; Anders Hast, Johan Nysjö, Andrea Marchetti, Optimal RANSAC - towards a repeatable algorithm for finding the optimal set, Journal of WSCG, 2013, 21:21-30; Hongxia Gao, Jianhe Xie, Yueming Hu, Ze Yang, Hough-RANSAC: A Fast and Robust Method for Rejecting Mismatches, Chinese Conference on Pattern Recognition, Springer Berlin Heidelberg, 2014:363-370). Among these, Tom Botterill et al. reduce the time required for RANSAC purification by 25%; the optimized RANSAC method proposed by Chum et al. is 2-10 times faster than classical RANSAC; the Hough-RANSAC method proposed by Hongxia Gao et al. remains effective when only 20% of the data are real; and the method proposed by Anders Hast et al. is still effective when the contaminated data reach 95%. All of these methods improve RANSAC efficiency on heavily contaminated data, but when the true registration point pairs are extremely sparse, in particular when contamination reaches 99%, their ability to detect inliers still proves insufficient.
Disclosure of Invention
Aiming at image registration, the invention provides a method for purifying registration points under a heavy pollution background: it extracts the registration model from extremely polluted data, filters out the polluted outliers, and extracts the real inliers. The method belongs to the field of image registration technology.
In order to solve the technical problems, the invention adopts the following technical scheme: a method for purifying registration points under a heavy pollution background, comprising the steps of: step one, for the two images I1 and I2 to be registered, arbitrarily extract 2 registration point pairs from the registration point pairs P; step two, connect the two sampling points in image I1, find the coordinates of their midpoint, and compute the ratio r1 of the pixel gray-level sums on the line segments from the two endpoints to the midpoint; connect the two sampling points in image I2 in the same way and compute the ratio r2 of the gray-level sums on the segments on either side of the midpoint; compute the difference e12 between r1 and r2; step three, construct a kernel set from the differences of the gray-sum ratios of the two images obtained in step two; step four, merge the repeated redundant point pairs in the kernel set; and step five, sample in the kernel set by traversal using the RANSAC method, perform inlier detection in the full set, extract the real registration point pairs, and solve the registration model of the two images.
In the above method, in step two, the ratio of gray-level sums is computed as follows: connect the two sampling points in one image, first find the coordinates of their midpoint, then compute the sums of pixel gray levels on the line segments from the two endpoints to the midpoint, and further compute the ratio of the gray-level sums on the segments on either side of the midpoint. In step three, the kernel set is constructed as follows: A. traverse the samples of two registration point pairs, repeat step one and step two, and obtain the difference eij of the gray-sum ratios for all sampled point pairs; B. by setting a threshold value, select the sampling point pairs with small gray-sum-ratio difference eij to constitute the kernel set. The construction of the kernel set relies on the following property: the connecting lines of two true registration point pairs traverse the same content in their respective images, so their gray-sum ratios about the midpoint are close; for false registration point pairs, on the other hand, the content traversed by the connecting lines differs, so the difference of the gray-sum ratios about the midpoint is larger. The number of traversal samplings in step five is C(n,2) = n(n-1)/2, where n is the number of elements in the set P of registration point pairs to be detected.
The beneficial effect of the invention is that it provides a method for purifying registration points under a heavy pollution background, exploring the purification of real registration point pairs for the case where the registration points produced by an image registration method are severely polluted. The method purifies the registration point pairs and identifies the target in the image, thereby solving the registration-based recognition of the target: recognition of the target in different scenes, recognition of the target from different viewing angles, and recognition of the target when its shape and appearance change.
Drawings
The contents of the drawings and the reference numerals in the drawings are briefly described as follows:
FIG. 1 is a schematic process flow diagram of an embodiment of the present invention.
Fig. 2 is a sampling chart of an embodiment of the present invention.
FIG. 3 is a diagram of kernel sets and corpus in accordance with an embodiment of the present invention.
Detailed Description
The following description, with reference to the drawings, further details the embodiments of the present invention, such as the shapes and configurations of the components, their mutual positions and connection relationships, their functions and working principles, the manufacturing processes, and the methods of operation and use, to help those skilled in the art understand the inventive concept and technical solutions of the present invention more completely, accurately and deeply.
The invention provides a method for purifying registration points under a heavy pollution background: it constructs a kernel set using the spatial-domain properties of image pixels and performs sampling within the kernel set, thereby improving sampling efficiency and solving the purification of extremely polluted registration point pairs. The method addresses the extraction of a sparse registration model. Let the 2 images to be registered be I1 and I2. The registration point pairs are obtained by an existing registration method (such as SIFT or SURF) and denoted P = {(p1i, p2i) | p1i ∈ I1, p2i ∈ I2, i = 1, 2, …, n}, where m(p1i, p2i) = 1 denotes that the point pair (p1i, p2i) is a true registration point pair, and m(p1i, p2i) = 0 denotes that (p1i, p2i) is a false registration point pair. The invention solves for the set PT of all true registration point pairs of P, PT = {(p1i, p2i) | m(p1i, p2i) = 1, (p1i, p2i) ∈ P}, and for the registration model between the two images, i.e., the homography transformation matrix H, such that p2i = H(p1i) for the true registration point pairs.
The method for purifying registration points under a heavy pollution background explores the purification of real registration point pairs for the case where the registration points generated by an image registration method are severely polluted. It specifically comprises the following steps:
Step one: the 2 images I1 and I2 to be registered have n registration point pairs P = {(p1i, p2i) | i = 1, 2, …, n}; from P, arbitrarily extract the i1-th and i2-th registration point pairs (p1,i1, p2,i1) and (p1,i2, p2,i2), where p1i ∈ I1 and p2i ∈ I2 (e.g., p11 and p12 in Fig. 2 are 2 registration points on I1).
Step two: connect the two sampling points in the first image I1, find the coordinates of their midpoint, and compute the ratio r1 of the pixel gray-level sums on the line segments from the two endpoints to the midpoint; connect the two sampling points in the second image I2 in the same way and compute the ratio r2 of the gray-level sums on the segments on either side of the midpoint; compute the difference e12 between r1 and r2. The ratio about the midpoint is computed as follows: (1) connect the two sampling points p1,i1 and p1,i2 in the first image I1 (e.g., p11 and p12 in Fig. 2); first find the coordinates of their midpoint q1<i1,i2> (e.g., q1 in Fig. 2), then compute the sums of pixel gray levels on the segments from the two endpoints to the midpoint, and take their ratio r1. (2) Similarly, connect the two sampling points p2,i1 and p2,i2 in the second image I2 (e.g., p21 and p22 in Fig. 2); first find the coordinates of their midpoint q2<i1,i2> (e.g., q2 in Fig. 2), then compute the sums of pixel gray levels on the segments from the two endpoints to the midpoint, and take their ratio r2. After traversing the samples in the set P, the differences of the gray-sum ratios of all sampled point pairs are obtained and arranged in lexicographic order:
R={e12,e13,…,e1n,e23,e24,…,e2n,…,en-1,n}.
For example: find for image I1 the ratio r1<i1,i2> of the pixel gray-level sums falling on the segments p1,i1→q1 and q1→p1,i2. Denote the gray level at pixel p by g(p). The sum of the pixel gray levels on segment p1,i1→q1 in image I1 is the sum of g(p) over the pixels p on that segment, and likewise for segment q1→p1,i2; the ratio r1<i1,i2> is the ratio of these two sums. Similarly, find for image I2 the ratio r2<i1,i2> of the pixel gray-level sums on the segments p2,i1→q2 and q2→p2,i2. Compute the difference of r1<i1,i2> and r2<i1,i2>, recorded as e<i1,i2> = |r1<i1,i2> - r2<i1,i2>|.
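The gray-sum ratio of step two can be sketched in a few lines of numpy. This is an illustrative sketch, not the patent's implementation: the patent does not specify how the pixels on a segment are enumerated, so nearest-neighbour sampling at a fixed number of positions (an assumption) stands in for exact line rasterisation.

```python
import numpy as np

def segment_gray_sum(img, p, q, n_samples=50):
    """Sum of pixel gray levels along the segment p -> q.

    Nearest-neighbour sampling at n_samples evenly spaced positions;
    the exact rasterisation scheme is an assumption."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    ts = np.linspace(0.0, 1.0, n_samples)
    pts = p[None, :] + ts[:, None] * (q - p)[None, :]  # points on the segment
    rows = np.clip(np.round(pts[:, 1]).astype(int), 0, img.shape[0] - 1)
    cols = np.clip(np.round(pts[:, 0]).astype(int), 0, img.shape[1] - 1)
    return float(img[rows, cols].sum())

def gray_sum_ratio(img, p_a, p_b):
    """Ratio r of the gray sums on the two half-segments split at the midpoint."""
    mid = (np.asarray(p_a, float) + np.asarray(p_b, float)) / 2.0
    s1 = segment_gray_sum(img, p_a, mid)
    s2 = segment_gray_sum(img, mid, p_b)
    return s1 / max(s2, 1e-9)  # guard against a zero denominator
```

On a uniform image the two half-segments carry equal gray sums, so the ratio is 1; for a true registration point pair the ratios r1 and r2 computed on the two images should be close, making e12 small.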
Step three: construct the kernel set from the differences of the gray-sum ratios of the two images obtained in step two. By constructing the kernel set and sampling within it, sampling efficiency is improved. The kernel set is constructed as follows: A. Traverse the samples of two registration point pairs; the number of traversal samplings is C(n,2) = n(n-1)/2, where n is the number of elements in the set P of registration point pairs to be detected. Repeat step one and step two to obtain the difference eij of the gray-sum ratios for all sampled point pairs. B. Given a threshold, select the sampling point pairs with small eij to constitute the kernel set. The property used to construct the kernel set is: the connecting lines of two true registration point pairs traverse the same content in their respective images, so their gray-sum ratios about the midpoint are close; for false registration point pairs, the content traversed by the connecting lines differs, so the difference of the gray-sum ratios about the midpoint is larger. For example, for thresholds T1 and T2 with T1 < T2, construct the kernel set Pc of the set P (shown as the black registration point pairs in Fig. 3).
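Step three's traversal of all C(n,2) two-pair samples can be sketched as follows. To keep the sketch independent of the gray-sum details, the e_ij computation is abstracted into a `diff_fn` callback, and the threshold value is an assumption (the patent leaves its choice open).

```python
from itertools import combinations

def kernel_samples(diff_fn, n, threshold):
    """Traverse all C(n, 2) = n*(n-1)/2 two-pair samples of n registration
    point pairs and keep the samples whose gray-sum-ratio difference e_ij,
    supplied by diff_fn(i, j), falls below the threshold (step three, A-B).
    diff_fn and the threshold value are illustrative stand-ins."""
    return [(i, j) for i, j in combinations(range(n), 2)
            if diff_fn(i, j) < threshold]
```

A toy `diff_fn` makes the selection behaviour visible: with `diff_fn = lambda i, j: abs(i - j)` and threshold 2, only the adjacent-index samples survive.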
Step four: merge the kernel set Pc by combining its repeated redundant point pairs, i.e., take the union of the sampled point-pair sets. For example, suppose the two point pairs of the first sample are {(A1, A2), (B1, B2)} and the two point pairs of the second sample are {(A1, A2), (C1, C2)}, and the differences of the points obtained by the two samplings are within the range allowed by the threshold; then the point pair (A1, A2), which appears twice in the kernel set, is merged:
{(A1,A2),(B1,B2)}∪{(A1,A2),(C1,C2)}={(A1,A2),(B1,B2),(C1,C2)}。
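In set terms, the merge of step four reduces to a union over the sampled pairs. A minimal sketch, where integer indices stand in for the (A1, A2)-style registration point pairs:

```python
def merge_kernel_samples(samples):
    """Step four: merge the repeated redundant point pairs across samples
    by taking the union of all sampled point-pair indices."""
    merged = set()
    for i, j in samples:
        merged.update((i, j))   # a pair seen in several samples is kept once
    return sorted(merged)
```

For instance, the two samples {(A1,A2),(B1,B2)} and {(A1,A2),(C1,C2)} above, encoded as index pairs (0, 1) and (0, 2), merge to the three distinct pairs {0, 1, 2}.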
Step five: sample in the kernel set Pc using the RANSAC method, detect inliers in the full set P (the registration point pairs drawn as gray dashed lines and black solid lines in Fig. 3 form the detection full set P), extract the real registration point pairs, and solve for the true registration point pairs PT = {(p1i, p2i) | m(p1i, p2i) = 1, (p1i, p2i) ∈ P} of the two images I1 and I2, as well as the registration model between the two images, i.e., the homography transformation matrix H. H is solved by the least-squares method, minimizing the sum of |H(p1i) - p2i|² over the true registration point pairs. The kernel-set sampling method is: each time, randomly extract four registration pairs from the kernel set Pc and solve the homography model matrix H determined by these four pairs. The full-set inlier detection method is: substitute all registration point pairs of the full set P one by one into the model determined by the random sample; given a difference threshold ε0, if a registration point pair in P satisfies |H(p1i) - p2i| < ε0, it is an inlier pair. Inlier detection seeks as many inliers as possible.
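Step five can be sketched as a standard RANSAC loop that draws its 4-pair samples from the kernel set but scores inliers against the full set P. The DLT homography fit, the iteration count, and the random seed are assumptions not fixed by the patent.

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: H from >= 4 point correspondences."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)        # null-space vector of A
    return H / H[2, 2]

def apply_h(H, p):
    v = H @ np.array([p[0], p[1], 1.0])
    return v[:2] / v[2]

def ransac_from_kernel(pairs, kernel_idx, eps0, iters=200, seed=0):
    """Sample 4 pairs from the kernel set, fit H, count inliers over the
    full set P (|H(p1) - p2| < eps0); return the best H and inlier indices."""
    rng = np.random.default_rng(seed)
    best_H, best_inliers = None, []
    kernel_idx = list(kernel_idx)
    for _ in range(iters):
        sample = rng.choice(kernel_idx, size=4, replace=False)
        src = [pairs[i][0] for i in sample]
        dst = [pairs[i][1] for i in sample]
        try:
            H = fit_homography(src, dst)
        except np.linalg.LinAlgError:
            continue                # degenerate sample, try again
        inliers = [i for i, (p1, p2) in enumerate(pairs)
                   if np.linalg.norm(apply_h(H, p1) - np.array(p2)) < eps0]
        if len(inliers) > len(best_inliers):
            best_H, best_inliers = H, inliers
    return best_H, best_inliers
```

Because the hypotheses are drawn only from the (much smaller, mostly clean) kernel set while verification runs over all of P, far fewer iterations are wasted on contaminated samples than in plain RANSAC over P.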
The invention has been described above with reference to the accompanying drawings. The invention is obviously not limited to the specific implementations described above; applying the inventive concept and technical solutions to other situations without substantial modification remains within the scope of the invention. The protection scope of the present invention shall be subject to the scope defined by the claims.
Claims (3)
1. A method for purifying registration points under a heavy pollution background, said method comprising the steps of:
step one, for the two images I1 and I2 to be registered, arbitrarily extracting 2 registration point pairs from the registration point pairs P;
step two, connecting the two sampling points in image I1, finding the coordinates of their midpoint, and computing the ratio r1 of the pixel gray-level sums on the line segments from the two endpoints to the midpoint; connecting the two sampling points in image I2 in the same way, and computing the ratio r2 of the gray-level sums on the segments on either side of the midpoint; computing the difference e12 between r1 and r2;
step three, constructing a kernel set from the differences of the gray-sum ratios of the two images in step two;
the kernel set being constructed as follows: A. traversing samples of two registration point pairs, wherein the number of traversal samplings is C(n,2) = n(n-1)/2, where n is the number of elements in the set P of registration point pairs to be detected; repeating step one and step two to obtain the difference eij of the gray-sum ratios for all sampled point pairs; B. given a threshold, selecting the sampling point pairs with small eij to constitute the kernel set;
the method comprises the following steps: connecting a first image I1Two sampling points inAndfirstly, the coordinates q of the middle points of two sampling points are obtained1<i1,i2>Then, the sum of the gray levels of the pixels on the line segment connecting the two end points to the midpoint is calculated, and further the sum of the gray levels of the pixels on the line segment connecting the two end points to the midpoint is calculatedRatio r of gray levels on line segments on both sides of the midpoint1(ii) a Step two is similar to step one, and step two is: connecting the second image I2Two sampling points inAndfirstly, the coordinate of the middle point is obtained, then the sum of the gray levels of the pixels on the line segments connecting the two end points to the middle point is obtained, and the ratio r of the gray levels on the line segments on the two sides of the middle point is obtained2;
Step four, combining the redundant point pairs in the kernel set;
and step five, traversing and sampling in the kernel set by the RANSAC method, performing inlier detection in the full set, extracting the real registration point pairs, and solving the registration model of the two images.
2. The method for purifying registration points under a heavy pollution background as claimed in claim 1, wherein the property used for constructing the kernel set in the method is: the connecting lines of two true registration point pairs traverse the same content in their respective images, so their gray-sum ratios about the midpoint are close; for false registration point pairs, on the other hand, the content traversed by the connecting lines differs, so the difference of the gray-sum ratios about the midpoint is larger.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710423423.8A CN107369168B (en) | 2017-06-07 | 2017-06-07 | Method for purifying registration points under heavy pollution background |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107369168A CN107369168A (en) | 2017-11-21 |
CN107369168B true CN107369168B (en) | 2021-04-02 |
Family
ID=60304803
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710423423.8A Active CN107369168B (en) | 2017-06-07 | 2017-06-07 | Method for purifying registration points under heavy pollution background |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107369168B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110458870B (en) * | 2019-07-05 | 2020-06-02 | 北京迈格威科技有限公司 | Image registration, fusion and occlusion detection method and device and electronic equipment |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102509293A (en) * | 2011-11-04 | 2012-06-20 | 华北电力大学(保定) | Method for detecting consistency of different-source images |
CN104156968A (en) * | 2014-08-19 | 2014-11-19 | 山东临沂烟草有限公司 | Large-area complex-terrain-region unmanned plane sequence image rapid seamless splicing method |
CN105335524A (en) * | 2015-11-27 | 2016-02-17 | 中国科学院自动化研究所 | Graph search algorithm applied to large-scale irregular structure data |
CN105701766A (en) * | 2016-02-24 | 2016-06-22 | 网易(杭州)网络有限公司 | Image matching method and device |
CN106231282A (en) * | 2015-12-30 | 2016-12-14 | 深圳超多维科技有限公司 | Parallax calculation method, device and terminal |
CN106231349A (en) * | 2015-12-30 | 2016-12-14 | 深圳超多维科技有限公司 | Main broadcaster's class interaction platform server method for changing scenes and device, server |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014054958A2 (en) * | 2012-10-05 | 2014-04-10 | Universidade De Coimbra | Method for aligning and tracking point regions in images with radial distortion that outputs motion model parameters, distortion calibration, and variation in zoom |
- 2017-06-07: CN application CN201710423423.8A filed (patent CN107369168B, status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN107369168A (en) | 2017-11-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109784333B (en) | Three-dimensional target detection method and system based on point cloud weighted channel characteristics | |
CN107767405B (en) | Nuclear correlation filtering target tracking method fusing convolutional neural network | |
CN108121991B (en) | Deep learning ship target detection method based on edge candidate region extraction | |
CN107203781B (en) | End-to-end weak supervision target detection method based on significance guidance | |
CN107705322A (en) | Motion estimate tracking and system | |
CN111709980A (en) | Multi-scale image registration method and device based on deep learning | |
CN104063706A (en) | Video fingerprint extraction method based on SURF algorithm | |
CN108038486A (en) | A kind of character detecting method | |
CN103218600A (en) | Real-time face detection algorithm | |
CN107871315B (en) | Video image motion detection method and device | |
Zhang et al. | DuGAN: An effective framework for underwater image enhancement | |
CN107369168B (en) | Method for purifying registration points under heavy pollution background | |
Zhao et al. | Analysis of image edge checking algorithms for the estimation of pear size | |
CN113095385B (en) | Multimode image matching method based on global and local feature description | |
Zhao | Motion track enhancement method of sports video image based on otsu algorithm | |
Li et al. | Hyperspectral image classification via sample expansion for convolutional neural network | |
CN109993782B (en) | Heterogeneous remote sensing image registration method and device for ring-shaped generation countermeasure network | |
CN107424172A (en) | Motion target tracking method with circle search method is differentiated based on prospect | |
CN108268533A (en) | A kind of Image Feature Matching method for image retrieval | |
Zhao et al. | An improved faster R-CNN algorithm for pedestrian detection | |
Zhang et al. | [Retracted] Posture Recognition and Behavior Tracking in Swimming Motion Images under Computer Machine Vision | |
Wei et al. | Image registration algorithm based on super pixel segmentation and SURF feature points | |
Tian et al. | A method for estimating an unknown target grasping pose based on keypoint detection | |
Feng et al. | Foreground Detection Based on Superpixel and Semantic Segmentation | |
Ma et al. | [Retracted] Weakly Supervised Real‐Time Object Detection Based on Salient Map Extraction and the Improved YOLOv5 Model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||