CN109657524A - A kind of image matching method and device - Google Patents
- Publication number: CN109657524A (application number CN201710942633.8)
- Authority: CN (China)
- Prior art keywords: image, original captured image, target, matching degree
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/582—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
Abstract
This application discloses an image matching method and device. The method comprises: determining the matching degree between a first target image and a second target image, where the first target image is recognized from a first original captured image and the second target image is recognized from a second original captured image; according to the matching degree, computing the second target image that uniquely matches each first target image; and determining the first target image and its uniquely matched second target image as a recognition result of the same target. In this way, the application reduces the review workload and saves a large amount of review time, so that review efficiency is greatly improved.
Description
Technical field
This application relates to the technical field of image processing, and more specifically to an image matching method and device.
Background technique
To enable the public to travel safely and in an orderly manner, traffic departments usually place road traffic signs on sign boards on both sides of the road, or paint them directly on the road surface, to warn, prohibit, restrict or guide drivers.

In the field of intelligent transportation, road traffic signs play a major role: for example, a navigation service provider can plan a route based on road traffic signs, and an intelligent vehicle can judge in real time whether a road is drivable according to road traffic signs. The recognition of road traffic signs is therefore particularly important. Existing road traffic sign recognition usually uses a data collection vehicle to capture pictures or video on the road, automatically recognizes traffic sign images in the captured images, and outputs each recognized traffic sign image to operating personnel for review; the operating personnel determine the most valid information and update it into a database, providing services for industries such as maps.

However, since the input source of the automatic recognition is the sequence of consecutive images captured by the data collection vehicle, the same traffic sign is likely to appear in multiple consecutive images and thus be recognized repeatedly, so that operating personnel have to repeatedly review the traffic sign images belonging to the same traffic sign, which wastes time and results in low review efficiency.
Summary of the invention
In view of this, this application provides an image matching method and device, to solve the problem that operating personnel need to repeatedly review the traffic sign images belonging to the same traffic sign, which wastes time and results in low review efficiency.

To achieve the above goals, the proposed scheme is as follows:
An image matching method, comprising:

determining a matching degree between a first target image and a second target image, where the first target image is recognized from a first original captured image, and the second target image is recognized from a second original captured image;

according to the matching degree, computing the second target image that uniquely matches each first target image;

determining the first target image and its uniquely matched second target image as a recognition result of the same target.
Preferably, the determining the matching degree between the first target image and the second target image comprises:

using an image similarity calculation method, calculating the similarity between the first target image and the second target image as a first matching degree;

or, according to the pixel motion field of the first original captured image and the pixel motion field of the second original captured image, determining a second matching degree between the first target image and the second target image;

or, according to a first region occupied by the first target image in the first original captured image and a second region occupied by the second target image in the second original captured image, calculating the overlap ratio of the first region and the second region as a third matching degree between the first target image and the second target image.
Preferably, the determining the matching degree between the first target image and the second target image further comprises:

determining a total matching degree of the first target image and the second target image according to the first matching degree, the second matching degree and the third matching degree.
Preferably, the determining the second matching degree between the first target image and the second target image according to the pixel motion field of the first original captured image and the pixel motion field of the second original captured image comprises:

if it is determined that the textures of both the first original captured image and the second original captured image satisfy a set texture condition, determining, according to the first original captured image corresponding to the first target image and the second original captured image corresponding to the second target image, a first motion vector of the first target image relative to the second target image and a second motion vector of the first original captured image relative to the second original captured image;

determining the second matching degree between the first target image and the second target image according to the first motion vector and the second motion vector.
Preferably, the determining the second matching degree between the first target image and the second target image according to the pixel motion field of the first original captured image and the pixel motion field of the second original captured image comprises:

if it is determined that the texture of the first original captured image or the second original captured image does not satisfy the set texture condition, recognizing road straight lines in the first original captured image and the second original captured image respectively, and determining a road vanishing point according to the recognized road straight lines;

according to the position of the first target image in the first original captured image, calculating a third motion vector from that position to the road vanishing point in the first original captured image;

according to the position of the second target image in the second original captured image, calculating a fourth motion vector from that position to the road vanishing point in the second original captured image;

determining the second matching degree between the first target image and the second target image according to the third motion vector and the fourth motion vector.
Preferably, the determining, according to the first original captured image corresponding to the first target image and the second original captured image corresponding to the second target image, the first motion vector of the first target image relative to the second target image and the second motion vector of the first original captured image relative to the second original captured image, comprises:

according to a first position coordinate of the first target image in the first original captured image and a second position coordinate of the second target image in the second original captured image, calculating a motion vector from the first position coordinate to the second position coordinate as the first motion vector;

determining a mapping position coordinate of the first target image in the second original captured image;

calculating a motion vector from the first position coordinate to the mapping position coordinate as the second motion vector.
Preferably, the computing, according to the matching degree, the second target image that uniquely matches each first target image comprises:

constructing a bipartite graph using each first target image and each second target image;

computing the second target image that uniquely matches each first target image according to the Kuhn-Munkres (KM) optimal matching algorithm for bipartite graphs.
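The claim above describes optimal one-to-one matching over a bipartite graph. A minimal sketch of the idea, using a brute-force search over assignments as a stand-in for the KM algorithm on a hypothetical toy matching-degree matrix (a real implementation would use KM/Hungarian for efficiency):

```python
from itertools import permutations

# Hypothetical matching-degree matrix: rows are first target images,
# columns are second target images; entries are matching degrees.
match = [
    [0.9, 0.1, 0.2],
    [0.2, 0.8, 0.3],
    [0.1, 0.4, 0.7],
]

def best_unique_match(m):
    """Brute-force stand-in for KM: choose the one-to-one assignment of
    rows to columns that maximizes the total matching degree."""
    n = len(m)
    best, best_perm = float("-inf"), None
    for perm in permutations(range(n)):
        total = sum(m[i][perm[i]] for i in range(n))
        if total > best:
            best, best_perm = total, perm
    return list(enumerate(best_perm))

pairs = best_unique_match(match)
# Each first target image is uniquely matched to one second target image,
# so no second target image matches more than one first target image.
```

The unique-match constraint of the claims falls out of the assignment structure: a permutation pairs each row with exactly one column.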
Preferably, the method further comprises:

determining an optimal recognition result from the recognition results of the same target according to the capture time of the original captured image in which each recognition result is located, the pixel area occupied by each recognition result in its corresponding original captured image, and the position of each recognition result in its corresponding original captured image.
An image matching device, comprising:

a matching degree determination unit, configured to determine the matching degree between a first target image and a second target image, where the first target image is recognized from a first original captured image, and the second target image is recognized from a second original captured image;

a target image matching unit, configured to compute, according to the matching degree, the second target image that uniquely matches each first target image;

a recognition result determination unit, configured to determine the first target image and its uniquely matched second target image as a recognition result of the same target.
Preferably, the matching degree determination unit comprises:

a similarity calculation unit, configured to calculate, using an image similarity calculation method, the similarity between the first target image and the second target image as a first matching degree;

or, a motion field calculation unit, configured to determine a second matching degree between the first target image and the second target image according to the pixel motion field of the first original captured image and the pixel motion field of the second original captured image;

or, a region overlap calculation unit, configured to calculate, according to a first region occupied by the first target image in the first original captured image and a second region occupied by the second target image in the second original captured image, the overlap ratio of the first region and the second region as a third matching degree between the first target image and the second target image.
Preferably, the matching degree determination unit further comprises:

a total matching degree calculation unit, configured to determine a total matching degree of the first target image and the second target image according to the first matching degree, the second matching degree and the third matching degree.
Preferably, the motion field calculation unit comprises:

a first motion field calculation subunit, configured to, if it is determined that the textures of both the first original captured image and the second original captured image satisfy a set texture condition, determine, according to the first original captured image corresponding to the first target image and the second original captured image corresponding to the second target image, a first motion vector of the first target image relative to the second target image and a second motion vector of the first original captured image relative to the second original captured image;

a second motion field calculation subunit, configured to determine the second matching degree between the first target image and the second target image according to the first motion vector and the second motion vector.
Preferably, the motion field calculation unit comprises:

a third motion field calculation subunit, configured to, if it is determined that the texture of the first original captured image or the second original captured image does not satisfy the set texture condition, recognize road straight lines in the first original captured image and the second original captured image respectively, and determine a road vanishing point according to the recognized road straight lines;

a fourth motion field calculation subunit, configured to calculate, according to the position of the first target image in the first original captured image, a third motion vector from that position to the road vanishing point in the first original captured image;

a fifth motion field calculation subunit, configured to calculate, according to the position of the second target image in the second original captured image, a fourth motion vector from that position to the road vanishing point in the second original captured image;

a sixth motion field calculation subunit, configured to determine the second matching degree between the first target image and the second target image according to the third motion vector and the fourth motion vector.
Preferably, the first motion field calculation subunit comprises:

a first motion vector determination subunit, configured to calculate, according to a first position coordinate of the first target image in the first original captured image and a second position coordinate of the second target image in the second original captured image, a motion vector from the first position coordinate to the second position coordinate as the first motion vector;

a mapping position determination subunit, configured to determine a mapping position coordinate of the first target image in the second original captured image;

a second motion vector determination subunit, configured to calculate a motion vector from the first position coordinate to the mapping position coordinate as the second motion vector.
Preferably, the target image matching unit comprises:

a bipartite graph construction unit, configured to construct a bipartite graph using each first target image and each second target image;

a KM calculation unit, configured to compute the second target image that uniquely matches each first target image according to the Kuhn-Munkres (KM) optimal matching algorithm for bipartite graphs.
Preferably, the device further comprises:

an optimal recognition result determination unit, configured to determine an optimal recognition result from the recognition results of the same target according to the capture time of the original captured image in which each recognition result is located, the pixel area occupied by each recognition result in its corresponding original captured image, and the position of each recognition result in its corresponding original captured image.
It can be seen from the above technical scheme that, in the image matching method of this application, for a first target image recognized from a first original captured image and a second target image recognized from a second original captured image, the matching degree between the first target image and the second target image is first determined; then, according to the matching degree of the two, the second target image that uniquely matches each first target image is computed, and the first target image and its uniquely matched second target image are determined as a recognition result of the same target. In this way, all images of the same target are identified one by one, so that a set number of images can be chosen from all the identified images of the same target and output to operating personnel, reducing their review workload, saving a large amount of review time, and thus greatly improving review efficiency.
Brief description of the drawings

In order to more clearly illustrate the technical solutions in the embodiments of this application or in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description are only embodiments of this application; those of ordinary skill in the art can obtain other drawings from them without creative effort.

Fig. 1 is a flow chart of an image matching method disclosed in an embodiment of this application;

Fig. 2 is a schematic diagram illustrating the overlap situations of a first region and a second region;

Fig. 3 is a schematic diagram illustrating the positions of the first motion vector and the second motion vector;

Fig. 4 is a schematic diagram of the logical structure of an image matching device disclosed in an embodiment of this application.
Specific embodiments

The technical solutions in the embodiments of this application are described below clearly and completely with reference to the drawings in the embodiments of this application. Obviously, the described embodiments are only a part of the embodiments of this application, not all of them. Based on the embodiments in this application, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of this application.
An embodiment of this application discloses an image matching method. For a first target image recognized from a first original captured image and a second target image recognized from a second original captured image, the matching degree between the first target image and the second target image is first determined; then, according to the matching degree of the two, the second target image that uniquely matches each first target image is computed, and the first target image and its uniquely matched second target image are determined as a recognition result of the same target. Thus all matching images of the same target are identified one by one. The image matching method of this application is introduced next. In an optional embodiment, two adjacent consecutive original captured images can be taken as one group for one run of the image matching process; the matching result of each group is obtained, and the matching results of all groups are aggregated, so as to identify all images of the same target. Referring to Fig. 1, the image matching process may include the following steps:
Step S100: determine the matching degree between a first target image and a second target image.

In this embodiment, the first target image is recognized from a first original captured image, and the second target image is recognized from a second original captured image. The first original captured image and the second original captured image can be consecutive images, but because of changes in capture distance and viewing angle, the two images are different.

It can be understood that the first original captured image may include at least one first target image, and the second original captured image may include at least one second target image. After all the first target images and second target images are recognized from the two adjacent consecutive original captured images, the matching degree between any one first target image and any one second target image needs to be determined.
Step S110: according to the matching degree, compute the second target image that uniquely matches each first target image.

In this embodiment, by computing, according to the matching degree, the second target image that uniquely matches each first target image, the first target image and its uniquely matched second target image are determined to be the same target.

It can be understood that, if it is determined according to the matching degree that there are second target images that match a first target image, the second target image that uniquely matches the first target image can be determined from among those matched second target images. It should be noted that such a second target image has a matching relationship with only one first target image, and will not have matching relationships with multiple first target images.
Step S120: determine the first target image and its uniquely matched second target image as a recognition result of the same target.

As described in step S110, the first target image and its uniquely matched second target image are the same target; therefore the first target image and its uniquely matched second target image are determined as a recognition result of the same target.

In this embodiment, taking every two adjacent consecutive original captured images as one group, steps S100-S120 are executed to obtain the matching result of each group, and the matching results of all groups are aggregated. After all images of the same target are identified, a set number of images can be arbitrarily chosen from all images of the same target as the output images of the target. The output images of the target can be output to operating personnel for review; the operating personnel determine the most valid information and update it into a database, providing services for industries such as maps.
Further, this application can choose an optimal recognition result from all recognition results of the same target for output. The optimal recognition result can be a clear and complete image. This embodiment illustrates an optional way of choosing the optimal recognition result from all recognition results of the same target, as follows:

determining the optimal recognition result from the recognition results according to the capture time of the original captured image in which each recognition result of the same target is located, the pixel area occupied by each recognition result in its corresponding original captured image, and the position of each recognition result in its corresponding original captured image.
In an optional embodiment, the process of determining the optimal recognition result may include:

S1: determining a first selection weight of a recognition result according to the order of the capture time of the original captured image in which the recognition result is located relative to the capture times of the original captured images in which the remaining recognition results are located; the later the capture time of the original captured image in which the recognition result is located relative to the others, the larger the first selection weight.

S2: determining a second selection weight of the recognition result according to the pixel area occupied by the recognition result in its corresponding original captured image; the larger the pixel area the recognition result occupies in its corresponding original captured image, the larger the second selection weight.

S3: determining a third selection weight of the recognition result according to the position of the recognition result in its corresponding original captured image; the farther the position of the recognition result in its corresponding original captured image is from the boundary, the larger the third selection weight.

S4: determining a total selection weight of the recognition result according to its first selection weight, second selection weight and third selection weight, and determining the optimal recognition result according to the total selection weights of the recognition results.
It can be understood that the later the capture time of the original captured image in which a recognition result is located, the shorter the shooting distance of that original captured image and the higher its clarity. The larger the pixel area a recognition result occupies in its corresponding original captured image, the more clearly the recognition result is shot. The farther a recognition result is from the boundary of its corresponding original captured image, the closer the recognition result is to the center region of the original captured image, so no part of it is cut off by the frame and the recognition result is more complete.
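The steps S1-S4 above can be sketched as a weighted scoring scheme. The patent does not fix how the three selection weights are normalized or combined, so the per-criterion normalization and the hypothetical result tuples below are illustrative assumptions only:

```python
# Hypothetical recognition results of one target:
# (capture_time, pixel_area, distance_to_boundary).
results = [
    (1.0, 400, 10),    # earlier, small, near the boundary
    (2.0, 900, 60),    # later, larger, closer to the center
    (3.0, 2500, 120),  # latest, largest, farthest from the boundary
]

def total_selection_weight(r, results):
    """Assumed scheme: normalize each criterion by its maximum so that a
    later capture time (S1), a larger pixel area (S2) and a larger
    distance to the boundary (S3) each yield a larger weight; the total
    selection weight (S4) is their sum."""
    time, area, dist = r
    w1 = time / max(t for t, _, _ in results)
    w2 = area / max(a for _, a, _ in results)
    w3 = dist / max(d for _, _, d in results)
    return w1 + w2 + w3

# The optimal recognition result is the one with the largest total weight.
best = max(results, key=lambda r: total_selection_weight(r, results))
```

Here the latest, largest and most centered result wins, matching the rationale given above (shorter shooting distance, clearer, more complete).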
In this embodiment, the image matching method is applied in the traffic field. The first target image and the second target image may specifically include, but are not limited to, traffic signs, such as yellow triangle signs (e.g. "slow down", "construction ahead", "watch for pedestrians"), red circle signs (e.g. speed limit, width limit, no left turn, no non-motor vehicles), blue circle signs (e.g. "go straight", "turn left", "keep right"), blue or green rectangular board signs, and ground marking signs.
In the image matching method of this application, for a first target image recognized from a first original captured image and a second target image recognized from a second original captured image, the matching degree between the first target image and the second target image is first determined; then, according to the matching degree of the two, the second target image that uniquely matches each first target image is computed, and the first target image and its uniquely matched second target image are determined as a recognition result of the same target. Thus all images of the same target are identified one by one, and a set number of images can then be chosen from all the identified images of the same target and output to operating personnel, reducing their review workload, saving a large amount of review time, and greatly improving review efficiency.
Embodiments of this application illustrate several optional ways to determine the matching degree between the first target image and the second target image, as follows:

(1) Using an image similarity calculation method, calculate the similarity between the first target image and the second target image as a first matching degree.

In this embodiment, the first matching degree can be used as the matching degree between the first target image and the second target image determined in step S100 of the previous embodiment.
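The patent does not fix a particular image similarity calculation method. A minimal sketch under that freedom, using normalized histogram intersection on grayscale crops (the crop values are hypothetical; a real system might instead use template matching or feature descriptors):

```python
def histogram(pixels, bins=8):
    """8-bin histogram of 0-255 grayscale values, normalized to sum to 1."""
    h = [0] * bins
    for p in pixels:
        h[min(p * bins // 256, bins - 1)] += 1
    total = sum(h)
    return [c / total for c in h]

def similarity(pixels_a, pixels_b):
    """Histogram intersection in [0, 1]; 1 means identical histograms."""
    ha, hb = histogram(pixels_a), histogram(pixels_b)
    return sum(min(a, b) for a, b in zip(ha, hb))

# Hypothetical grayscale crops of two candidate sign images.
crop1 = [10, 12, 200, 210, 220, 230]
crop2 = [11, 13, 198, 205, 215, 225]
first_matching_degree = similarity(crop1, crop2)
```

Any measure that maps a pair of target images to a comparable score can serve as the first matching degree here.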
(2) According to the pixel motion field of the first original captured image and the pixel motion field of the second original captured image, determine a second matching degree between the first target image and the second target image.

In this embodiment, the second matching degree between the first target image and the second target image can be used as the matching degree between the first target image and the second target image determined in step S100 of the previous embodiment.
(3) According to a first region occupied by the first target image in the first original captured image and a second region occupied by the second target image in the second original captured image, calculate the overlap ratio of the first region and the second region as a third matching degree between the first target image and the second target image.

In this embodiment, the first region occupied by the first target image in the first original captured image can be determined from the position coordinates of the pixels of the first target image in the first original captured image. Similarly, the second region occupied by the second target image in the second original captured image can be determined from the position coordinates of the pixels of the second target image in the second original captured image.
Optionally, the overlap ratio of the first region and the second region can be calculated using the following formula:

C = O / S

where C denotes the overlap ratio of the first region and the second region, O denotes the area of the overlapping region of the first region and the second region, and S denotes the area of the union of the first region and the second region.
The overlap situations of the first region and the second region can be divided into three cases: no overlap at all, partial overlap, and complete coincidence. Referring to Fig. 2, which illustrates the overlap situations of the first region and the second region: as shown in Fig. 2(a), the first region and the second region do not overlap at all; as shown in Fig. 2(b), the first region and the second region partially overlap; as shown in Fig. 2(c), the first region and the second region completely coincide.

It should be noted that the overlap ratio corresponding to the no-overlap case is zero; the overlap ratio corresponding to the partial-overlap case is greater than 0 and less than 100%; and the overlap ratio corresponding to the complete-coincidence case is 100%.
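The formula C = O / S is intersection-over-union. A minimal sketch for axis-aligned boxes (x1, y1, x2, y2), assuming each target image's region is summarized by its bounding box (the patent works with per-pixel regions, which the same formula covers):

```python
def overlap_ratio(box_a, box_b):
    """C = O / S for two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlapping region O; width/height clamp to 0 when the boxes are disjoint.
    ow = max(0, min(ax2, bx2) - max(ax1, bx1))
    oh = max(0, min(ay2, by2) - max(ay1, by1))
    o = ow * oh
    # Union S = area(A) + area(B) - O.
    s = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - o
    return o / s

no_overlap = overlap_ratio((0, 0, 2, 2), (4, 4, 6, 6))    # Fig. 2(a): 0.0
partial = overlap_ratio((0, 0, 2, 2), (1, 1, 3, 3))       # Fig. 2(b): in (0, 1)
coincident = overlap_ratio((0, 0, 2, 2), (0, 0, 2, 2))    # Fig. 2(c): 1.0
```

The three return values cover exactly the three cases of Fig. 2: zero, strictly between 0 and 100%, and 100%.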
In this embodiment, the third matching degree between the first target image and the second target image can be used as the matching degree between the first target image and the second target image determined in step S100 of the previous embodiment.

It can be understood that any one of the above three embodiments (1), (2) and (3) can be used to determine the matching degree between the first target image and the second target image, which makes the way of determining the matching degree diversified and highly flexible.
Optionally, this application can also combine the above three ways (1), (2) and (3) of determining the matching degree between the first target image and the second target image, for example:

S1: using an image similarity calculation method, calculate the similarity between the first target image and the second target image as a first matching degree.

S2: according to the pixel motion field of the first original captured image and the pixel motion field of the second original captured image, determine a second matching degree between the first target image and the second target image.

S3: according to the first region occupied by the first target image in the first original captured image and the second region occupied by the second target image in the second original captured image, calculate the overlap ratio of the first region and the second region as a third matching degree between the first target image and the second target image.

S4: determine a total matching degree of the first target image and the second target image according to the first matching degree, the second matching degree and the third matching degree.

Specifically, the total matching degree of the first target image and the second target image determined in this step can be used as the matching degree between the first target image and the second target image determined in step S100 of the previous embodiment.
It, can be directly by first matching degree, second matching degree and the third matching degree phase in the present embodiment
Add, total matching degree of the sum of the three as first object image and the second target image.
Another embodiment may include: by first matching degree, second matching degree and the third
It multiplied by the corresponding weight of three and sums respectively with degree, summed result is as first object image and the second target image
Total matching degree.
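The two combination rules above can be sketched as follows; the weight values used in the example are hypothetical, since the text leaves them to the application:

```python
def total_matching_degree(m1, m2, m3, weights=None):
    """Combine the three matching degrees: a plain sum when no weights
    are given, otherwise a weighted sum with application-chosen weights."""
    if weights is None:
        return m1 + m2 + m3
    w1, w2, w3 = weights
    return w1 * m1 + w2 * m2 + w3 * m3

plain = total_matching_degree(0.5, 0.25, 0.25)                # 1.0
weighted = total_matching_degree(0.5, 0.25, 0.25, (2, 1, 1))  # 1.5
```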
It can be understood that combining the embodiments (1), (2) and (3) of the previous embodiment to determine the matching degree of the first target image and the second target image takes multiple factors into account in the calculation, and improves the accuracy of the determined matching degree.
It should be noted that original acquired images can be divided into two classes according to their texture distribution: one class has clear texture, the other does not. Depending on whether the texture of the original acquired image is clear, the above process of determining the second matching degree of the first target image and the second target image differs. The two cases are introduced below through different embodiments.
First, if it is determined that the textures of the first original acquired image and the second original acquired image both satisfy a set texture condition, i.e. the texture is clear, then the process of determining the second matching degree may include:
S1. According to the first original acquired image corresponding to the first target image and the second original acquired image corresponding to the second target image, determine a first motion vector of the first target image relative to the second target image and a second motion vector of the first original acquired image relative to the second original acquired image.
Specifically, the first motion vector of the first target image relative to the second target image may include a motion direction and a displacement.
If the textures of the first original acquired image and the second original acquired image both satisfy the set texture condition, the pixels in the first original acquired image and the second original acquired image are textured pixels. Since motion vectors can be computed for textured pixels, the first motion vector of the first target image relative to the second target image and the second motion vector of the first original acquired image relative to the second original acquired image can both be determined by calculation.
Specifically, the first motion vector of the first target image relative to the second target image and the second motion vector of the first original acquired image relative to the second original acquired image can be calculated by a dense matching method.
S2. According to the first motion vector and the second motion vector, determine the second matching degree of the first target image and the second target image.
In the present embodiment, the second matching degree of the first target image and the second target image can be determined by calculating the deviation between the first motion vector and the second motion vector. Specifically, this may include: determining the angle between the first motion vector and the second motion vector, and determining the second matching degree of the first target image and the second target image according to the angle. A smaller angle indicates a higher matching degree between the first target image and the second target image, and thus a larger value of the second matching degree; conversely, a larger angle indicates a lower matching degree, and thus a smaller value of the second matching degree.
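The angle-based rule can be sketched as follows. Mapping the angle linearly onto [0, 1] is one possible choice of monotone mapping; the text only requires the second matching degree to fall as the angle grows:

```python
import math

def vector_angle(v1, v2):
    """Angle in radians between two 2-D motion vectors."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def second_matching_degree(v1, v2):
    """Smaller angle -> larger matching degree, here scaled to [0, 1]."""
    return 1.0 - vector_angle(v1, v2) / math.pi

# identical directions give the maximum, opposite directions the minimum
high = second_matching_degree((1, 0), (2, 0))  # 1.0
low = second_matching_degree((1, 0), (-1, 0))  # 0.0
```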
Optionally, in the above step S1, the process of determining the first motion vector and the second motion vector may specifically include the following steps:
S11. According to a first position coordinate of the first target image in the first original acquired image and a second position coordinate of the second target image in the second original acquired image, calculate the motion vector from the first position coordinate to the second position coordinate as the first motion vector.
Specifically, the position coordinate of the central pixel of the first target image in the first original acquired image can be used as the first position coordinate. Similarly, the position coordinate of the central pixel of the second target image in the second original acquired image can be used as the second position coordinate.
In this step, the first position coordinate and the second position coordinate can be subtracted from one another, and the resulting difference is the motion vector from the first position coordinate to the second position coordinate.
S12. Determine the mapping position coordinate corresponding to the first target image in the second original acquired image.
In this step, the mapping position coordinate corresponding to the first target image in the second original acquired image can be calculated by computing a global image matching relationship.
S13. Calculate the motion vector from the first position coordinate to the mapping position coordinate as the second motion vector.
Specifically, the first position coordinate and the mapping position coordinate can be subtracted from one another, and the resulting difference serves as the motion vector from the first position coordinate to the mapping position coordinate.
It should be noted that when subtracting the first position coordinate and the second position coordinate, and when subtracting the first position coordinate and the mapping position coordinate, the subtraction rule must be kept consistent. For example, both subtractions use the first position coordinate as the subtrahend, with the second position coordinate and the mapping position coordinate each as the minuend; alternatively, both subtractions use the first position coordinate as the minuend, with the second position coordinate and the mapping position coordinate each as the subtrahend.
Steps S11-S13 of the present embodiment are now illustrated with reference to Fig. 3. Suppose the first position coordinate of the first target image in the first original acquired image is (x0, y0), and the second position coordinate of the second target image in the second original acquired image is (x1, y1). Subtracting the first position coordinate (x0, y0) from the second position coordinate (x1, y1) yields the motion vector from the first position coordinate to the second position coordinate, (x1-x0, y1-y0). Further, by computing the global image matching relationship, the mapping position coordinate in the second original acquired image corresponding to the first position coordinate (x0, y0) is calculated to be (x2, y2). Subtracting the first position coordinate (x0, y0) from the mapping position coordinate (x2, y2) yields the motion vector from the first position coordinate to the mapping position coordinate, (x2-x0, y2-y0).
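The consistent-subtraction rule of the example can be sketched as follows; the concrete coordinate values are made up for illustration:

```python
def motion_vector(p_from, p_to):
    """Vector from p_from to p_to; the same subtraction rule is used
    for both the first and the second motion vector."""
    return (p_to[0] - p_from[0], p_to[1] - p_from[1])

first_pos = (10, 20)   # (x0, y0), centre of the first target image
second_pos = (13, 24)  # (x1, y1), centre of the second target image
mapped_pos = (12, 22)  # (x2, y2), mapping of (x0, y0) into the second image

first_vec = motion_vector(first_pos, second_pos)   # (x1-x0, y1-y0) = (3, 4)
second_vec = motion_vector(first_pos, mapped_pos)  # (x2-x0, y2-y0) = (2, 2)
```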
In another embodiment of the present application, if it is determined that the texture of the first original acquired image or the second original acquired image does not satisfy the set texture condition, i.e. the texture of the first original acquired image or the second original acquired image is unclear, then the second matching degree determined by the process introduced in the previous embodiment will be subject to error. Therefore, this embodiment describes another method of determining the second matching degree, which may specifically include the following steps:
S1. Identify road lines in the first original acquired image and the second original acquired image respectively, and determine a road vanishing point according to the identified road lines.
If the texture of the first original acquired image or the second original acquired image does not satisfy the set texture condition, the pixels in the first original acquired image and the second original acquired image are textureless or have unclear texture, and for such pixels it is not easy to obtain motion vectors directly by calculation. Therefore, in the present embodiment, road lines are first identified in the first original acquired image and the second original acquired image respectively, and the road vanishing point is determined according to the identified road lines.
Several of the road lines identified in the first original acquired image share common intersection points; the intersection point through which the most road lines pass can be taken as the road vanishing point. Based on this, the specific process of determining the road vanishing point according to the identified road lines may include: determining the intersection points of the identified road lines, and taking the intersection point through which the most road lines pass as the road vanishing point.
Similarly, several of the road lines identified in the second original acquired image share common intersection points, and the intersection point through which the most road lines pass can be taken as the road vanishing point. Based on this, the specific process of determining the road vanishing point according to the identified road lines may likewise include: determining the intersection points of the identified road lines, and taking the intersection point through which the most road lines pass as the road vanishing point.
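One way to realise "the intersection point through which the most road lines pass" is to intersect the detected lines pairwise and vote on the resulting points. The (a, b, c) line parameterisation and the rounding tolerance below are assumptions, as the text does not specify how lines are represented:

```python
from collections import Counter
from itertools import combinations

def intersect(l1, l2):
    """Intersection of two lines (a, b, c) with a*x + b*y + c = 0,
    or None for (near-)parallel lines."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        return None
    x = (b1 * c2 - b2 * c1) / det
    y = (a2 * c1 - a1 * c2) / det
    return (round(x, 3), round(y, 3))  # rounding groups nearby intersections

def vanishing_point(lines):
    """Vote over pairwise intersections: the point shared by the most
    line pairs is taken as the road vanishing point."""
    votes = Counter(p for l1, l2 in combinations(lines, 2)
                    if (p := intersect(l1, l2)) is not None)
    return votes.most_common(1)[0][0]

# three lane lines meeting at (2, 3) plus one stray line y = 0
vp = vanishing_point([(1, 0, -2), (0, 1, -3), (1, 1, -5), (0, 1, 0)])
```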
S2. According to the position of the first target image in the first original acquired image, calculate a third motion vector from that position to the road vanishing point in the first original acquired image.
S3. According to the position of the second target image in the second original acquired image, calculate a fourth motion vector from that position to the road vanishing point in the second original acquired image.
S4. According to the third motion vector and the fourth motion vector, determine the second matching degree of the first target image and the second target image.
In the present embodiment, the second matching degree of the first target image and the second target image can be determined by calculating the deviation between the third motion vector and the fourth motion vector. Specifically, this may include: determining the angle between the third motion vector and the fourth motion vector, and determining the second matching degree of the first target image and the second target image according to the angle. A smaller angle indicates a higher matching degree between the first target image and the second target image, and thus a larger value of the second matching degree; conversely, a larger angle indicates a lower matching degree, and thus a smaller value of the second matching degree.
It should be noted that when determining the angle between the third motion vector and the fourth motion vector, the third motion vector and the fourth motion vector can be translated into the same plane so that their starting points coincide; the angle between the third motion vector and the fourth motion vector is then calculated with the two vectors sharing a starting point in the same plane.
In another embodiment of the present application, the process of step S110 in the previous embodiment, i.e. calculating, according to the matching degree, the second target image uniquely matched with each first target image, is introduced. The present application can calculate the second target image uniquely matched with each first target image by means of a bipartite graph according to the matching degree. A specific embodiment is as follows:
S1. Construct a bipartite graph using each first target image and each second target image.
When matching is performed according to the determined matching degrees of first target images and second target images, each first target image may match several second target images, and the second target images matched by different first target images may overlap. For example, suppose the second target images matched by first target image A1 are B1 and B2, and the second target images matched by first target image A2 are B2 and B3. The second target images matched by A1 and A2 then share a common member: B2.
In order to determine a uniquely matched second target image for each first target image, the present application can construct a bipartite graph and use it to solve this problem. One vertex set of the bipartite graph is the set formed by the first target images, and the other vertex set is the set formed by the second target images. Since the matching degree between each first target image and each second target image has already been calculated above, the matching degree between any first target image taken from one vertex set of the bipartite graph and any second target image in the other vertex set is known. Based on this, the following step can be executed.
S2. According to the bipartite-graph optimal matching algorithm, the KM algorithm, calculate the second target image uniquely matched with each first target image.
Specifically, the present application defines the two vertex sets of the bipartite graph as X and Y, where set X is the vertex set formed by the first target images and set Y is the vertex set formed by the second target images. Set X contains objects Xi (i = 1, 2, 3, ..., n); set Y contains objects Yj (j = 1, 2, 3, ..., m). The weight of the edge connecting object Xi and object Yj is wij, equal to the matching degree of the objects Xi and Yj associated with that edge.
Through the KM (Kuhn-Munkres, optimal matching) algorithm, under the constraint that every object in set Y is associated with only one edge, the sums of the weights of all edges associated with the objects in set X are obtained; the assignment whose weight sum is maximal is selected, and for each object in set X the object in set Y on the corresponding edge is taken as the object uniquely matched with it.
For example, suppose a group contains objects X1 and X2. Object X1 matches Y1 and Y2 with matching degrees 50 and 60 respectively; X2 also matches Y1 and Y2, with matching degrees 55 and 50 respectively. Based on the principle of unique association, first assume X1 matches Y1 and X2 matches Y2, giving a weight sum of 50+50=100; then assume X1 matches Y2 and X2 matches Y1, giving a weight sum of 60+55=115. Since 115 is greater than 100, X1 matched with Y2 and X2 matched with Y1 is taken as the final matching result.
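The numerical example above can be reproduced with a small brute-force maximum-weight matching; the KM algorithm computes the same optimum far more efficiently, so this exhaustive sketch is only for illustration on tiny inputs:

```python
from itertools import permutations

def best_unique_match(weights):
    """Maximum-weight unique assignment on a small square bipartite graph;
    weights[i][j] is the matching degree between Xi and Yj."""
    n = len(weights)
    best_sum, best = -1, None
    for perm in permutations(range(n)):  # perm[i] = index of Y matched to Xi
        total = sum(weights[i][perm[i]] for i in range(n))
        if total > best_sum:
            best_sum, best = total, perm
    return best_sum, best

# X1: (Y1: 50, Y2: 60), X2: (Y1: 55, Y2: 50) -> X1-Y2, X2-Y1, weight sum 115
total, assignment = best_unique_match([[50, 60], [55, 50]])
```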
Through the bipartite graph matching method of this embodiment, the second target image uniquely matched with each first target image can be determined, guaranteeing the accuracy of image matching.
The image matching apparatus provided by the embodiments of the present application is described below; the image matching apparatus described below and the image matching method described above may be referred to in correspondence with each other.
Referring to Fig. 4, which illustrates a schematic diagram of the logical structure of an image matching apparatus provided by the present application, the image matching apparatus includes: a matching degree determination unit 11, a target image matching unit 12 and a recognition result determination unit 13.
The matching degree determination unit 11 is configured to determine the matching degree of a first target image and a second target image, the first target image being identified from a first original acquired image, and the second target image being identified from a second original acquired image.
The target image matching unit 12 is configured to calculate, according to the matching degree, the second target image uniquely matched with each first target image.
The recognition result determination unit 13 is configured to determine the first target image and the second target image uniquely matched with it as recognition results of the same target.
In the present embodiment, the matching degree determination unit 11 may specifically include:
a similarity calculation unit, configured to calculate, using an image similarity calculation method, the similarity between the first target image and the second target image as a first matching degree;
or,
a motion field calculation unit, configured to determine a second matching degree of the first target image and the second target image according to the pixel motion field of the first original acquired image and the pixel motion field of the second original acquired image;
or,
a region overlap calculation unit, configured to calculate, according to the first region occupied by the first target image in the first original acquired image and the second region occupied by the second target image in the second original acquired image, the overlap degree of the first region and the second region as a third matching degree of the first target image and the second target image.
In the present embodiment, the matching degree determination unit 11 may further include:
a total matching degree calculation unit, configured to determine a total matching degree of the first target image and the second target image according to the first matching degree, the second matching degree and the third matching degree.
Based on the above, the motion field calculation unit may specifically include:
a first motion field calculation subunit, configured to, if it is determined that the textures of the first original acquired image and the second original acquired image both satisfy the set texture condition, determine, according to the first original acquired image corresponding to the first target image and the second original acquired image corresponding to the second target image, a first motion vector of the first target image relative to the second target image and a second motion vector of the first original acquired image relative to the second original acquired image;
a second motion field calculation subunit, configured to determine the second matching degree of the first target image and the second target image according to the first motion vector and the second motion vector.
Alternatively, the motion field calculation unit may specifically include:
a third motion field calculation subunit, configured to, if it is determined that the texture of the first original acquired image or the second original acquired image does not satisfy the set texture condition, identify road lines in the first original acquired image and the second original acquired image respectively, and determine a road vanishing point according to the identified road lines;
a fourth motion field calculation subunit, configured to calculate, according to the position of the first target image in the first original acquired image, a third motion vector from that position to the road vanishing point in the first original acquired image;
a fifth motion field calculation subunit, configured to calculate, according to the position of the second target image in the second original acquired image, a fourth motion vector from that position to the road vanishing point in the second original acquired image;
a sixth motion field calculation subunit, configured to determine the second matching degree of the first target image and the second target image according to the third motion vector and the fourth motion vector.
In the present embodiment, the first motion field calculation subunit may specifically include:
a first motion vector determination subelement, configured to calculate, according to a first position coordinate of the first target image in the first original acquired image and a second position coordinate of the second target image in the second original acquired image, the motion vector from the first position coordinate to the second position coordinate as the first motion vector;
a mapping position determination subelement, configured to determine the mapping position coordinate corresponding to the first target image in the second original acquired image;
a second motion vector determination subelement, configured to calculate the motion vector from the first position coordinate to the mapping position coordinate as the second motion vector.
In the present embodiment, the target image matching unit 12 may specifically include:
a bipartite graph construction unit, configured to construct a bipartite graph using each first target image and each second target image;
a KM calculation unit, configured to calculate, according to the bipartite-graph KM optimal matching algorithm, the second target image uniquely matched with each first target image.
In the present embodiment, the image matching apparatus may further include: an optimal recognition result determination unit, configured to determine an optimal recognition result from the recognition results of the same target according to the acquisition time of the original acquired image in which each recognition result is located, the pixel area occupied by each recognition result in its corresponding original acquired image, and the position of each recognition result in its corresponding original acquired image.
Finally, it should be noted that, herein, relational terms such as first and second are used merely to distinguish one entity or operation from another entity or operation, without necessarily requiring or implying any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. In the absence of further restrictions, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or device that includes that element.
Each embodiment in this specification is described in a progressive manner; each embodiment highlights its differences from the other embodiments, and identical or similar parts of the embodiments may be referred to in relation to one another.
The foregoing description of the disclosed embodiments enables those skilled in the art to implement or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present application. Therefore, the present application is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (16)
1. An image matching method, characterized by comprising:
determining the matching degree of a first target image and a second target image, the first target image being identified from a first original acquired image, and the second target image being identified from a second original acquired image;
according to the matching degree, calculating the second target image uniquely matched with each first target image;
determining the first target image and the second target image uniquely matched with it as recognition results of the same target.
2. The method according to claim 1, characterized in that determining the matching degree of the first target image and the second target image comprises:
using an image similarity calculation method, calculating the similarity between the first target image and the second target image as a first matching degree;
or,
according to the pixel motion field of the first original acquired image and the pixel motion field of the second original acquired image, determining a second matching degree of the first target image and the second target image;
or,
according to the first region occupied by the first target image in the first original acquired image and the second region occupied by the second target image in the second original acquired image, calculating the overlap degree of the first region and the second region as a third matching degree of the first target image and the second target image.
3. The method according to claim 2, characterized in that determining the matching degree of the first target image and the second target image further comprises:
according to the first matching degree, the second matching degree and the third matching degree, determining a total matching degree of the first target image and the second target image.
4. The method according to claim 3, characterized in that determining the second matching degree of the first target image and the second target image according to the pixel motion field of the first original acquired image and the pixel motion field of the second original acquired image comprises:
if it is determined that the textures of the first original acquired image and the second original acquired image both satisfy a set texture condition, then according to the first original acquired image corresponding to the first target image and the second original acquired image corresponding to the second target image, determining a first motion vector of the first target image relative to the second target image and a second motion vector of the first original acquired image relative to the second original acquired image;
according to the first motion vector and the second motion vector, determining the second matching degree of the first target image and the second target image.
5. The method according to claim 3, characterized in that determining the second matching degree of the first target image and the second target image according to the pixel motion field of the first original acquired image and the pixel motion field of the second original acquired image comprises:
if it is determined that the texture of the first original acquired image or the second original acquired image does not satisfy a set texture condition, then identifying road lines in the first original acquired image and the second original acquired image respectively, and determining a road vanishing point according to the identified road lines;
according to the position of the first target image in the first original acquired image, calculating a third motion vector from that position to the road vanishing point in the first original acquired image;
according to the position of the second target image in the second original acquired image, calculating a fourth motion vector from that position to the road vanishing point in the second original acquired image;
according to the third motion vector and the fourth motion vector, determining the second matching degree of the first target image and the second target image.
6. The method according to claim 4, characterized in that determining the first motion vector of the first target image relative to the second target image and the second motion vector of the first original acquired image relative to the second original acquired image according to the first original acquired image corresponding to the first target image and the second original acquired image corresponding to the second target image comprises:
according to a first position coordinate of the first target image in the first original acquired image and a second position coordinate of the second target image in the second original acquired image, calculating the motion vector from the first position coordinate to the second position coordinate as the first motion vector;
determining the mapping position coordinate corresponding to the first target image in the second original acquired image;
calculating the motion vector from the first position coordinate to the mapping position coordinate as the second motion vector.
7. The method according to claim 1, characterized in that calculating, according to the matching degree, the second target image uniquely matched with each first target image comprises:
constructing a bipartite graph using each first target image and each second target image;
according to the bipartite-graph KM optimal matching algorithm, calculating the second target image uniquely matched with each first target image.
8. The method according to claim 1, characterized by further comprising:
determining an optimal recognition result from the recognition results of the same target according to the acquisition time of the original acquired image in which each recognition result is located, the pixel area occupied by each recognition result in its corresponding original acquired image, and the position of each recognition result in its corresponding original acquired image.
9. An image matching apparatus, characterized by comprising:
a matching degree determination unit, configured to determine the matching degree of a first target image and a second target image, the first target image being identified from a first original acquired image, and the second target image being identified from a second original acquired image;
a target image matching unit, configured to calculate, according to the matching degree, the second target image uniquely matched with each first target image;
a recognition result determination unit, configured to determine the first target image and the second target image uniquely matched with it as recognition results of the same target.
10. The apparatus according to claim 9, characterized in that the matching degree determination unit comprises:
a similarity calculation unit, configured to calculate, using an image similarity calculation method, the similarity between the first target image and the second target image as a first matching degree;
or,
a motion field calculation unit, configured to determine a second matching degree of the first target image and the second target image according to the pixel motion field of the first original acquired image and the pixel motion field of the second original acquired image;
or,
a region overlap calculation unit, configured to calculate, according to the first region occupied by the first target image in the first original acquired image and the second region occupied by the second target image in the second original acquired image, the overlap degree of the first region and the second region as a third matching degree of the first target image and the second target image.
11. The apparatus according to claim 10, characterized in that the matching degree determination unit further comprises:
a total matching degree calculation unit, configured to determine a total matching degree of the first target image and the second target image according to the first matching degree, the second matching degree and the third matching degree.
12. The apparatus according to claim 11, characterized in that the motion field calculation unit comprises:
a first motion field calculation subunit, configured to, if it is determined that the textures of the first original acquired image and the second original acquired image both satisfy a set texture condition, determine, according to the first original acquired image corresponding to the first target image and the second original acquired image corresponding to the second target image, a first motion vector of the first target image relative to the second target image and a second motion vector of the first original acquired image relative to the second original acquired image;
a second motion field calculation subunit, configured to determine the second matching degree of the first target image and the second target image according to the first motion vector and the second motion vector.
13. The device according to claim 11, wherein the motion field computing unit comprises:
a third motion field computing subunit, configured to: if it is determined that the texture of the first original captured image or of the second original captured image does not satisfy the set texture condition, identify road lines in the first original captured image and in the second original captured image respectively, and determine a road vanishing point according to the identified road lines;
a fourth motion field computing subunit, configured to calculate, according to the position of the first target image in the first original captured image, a third motion vector from that position to the road vanishing point in the first original captured image;
a fifth motion field computing subunit, configured to calculate, according to the position of the second target image in the second original captured image, a fourth motion vector from that position to the road vanishing point in the second original captured image;
a sixth motion field computing subunit, configured to determine a second matching degree between the first target image and the second target image according to the third motion vector and the fourth motion vector.
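The vanishing-point branch of claim 13 can be illustrated by building the third and fourth motion vectors toward a common vanishing point and comparing their directions. A minimal sketch; the cosine-based similarity and all coordinate values are assumptions for illustration, not specified by the patent:

```python
import math

def vector_to(point, vanishing_point):
    """Motion vector from a target's position to the road vanishing point."""
    return (vanishing_point[0] - point[0], vanishing_point[1] - point[1])

def direction_similarity(v1, v2):
    """Cosine of the angle between two motion vectors, mapped to [0, 1]."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    if n1 == 0 or n2 == 0:
        return 0.0
    return (dot / (n1 * n2) + 1) / 2

# Third and fourth motion vectors toward the same vanishing point (320, 180)
v3 = vector_to((100, 400), (320, 180))  # target position in first capture
v4 = vector_to((120, 380), (320, 180))  # target position in second capture
second_matching_degree = direction_similarity(v3, v4)
```

In this example the two vectors are parallel, so the similarity is 1.0; mismatched targets would point toward the vanishing point from inconsistent directions and score lower.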
14. The device according to claim 12, wherein the first motion field computing subunit comprises:
a first motion vector determining subunit, configured to calculate, according to a first position coordinate of the first target image in the first original captured image and a second position coordinate of the second target image in the second original captured image, a motion vector from the first position coordinate to the second position coordinate as the first motion vector;
a mapping position determining subunit, configured to determine a mapping position coordinate corresponding to the first target image in the second original captured image;
a second motion vector determining subunit, configured to calculate a motion vector from the first position coordinate to the mapping position coordinate as the second motion vector.
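The two vectors of claim 14 reduce to coordinate differences: one from the first target's position to the second target's observed position, and one to where the first target is expected to map in the second capture. A minimal sketch; all coordinates (including the mapped position, which the patent derives from the image-level motion) are illustrative:

```python
def motion_vector(p_from, p_to):
    """Vector from one position coordinate to another."""
    return (p_to[0] - p_from[0], p_to[1] - p_from[1])

first_pos = (150, 220)   # first target image in the first capture
second_pos = (170, 210)  # second target image in the second capture
mapped_pos = (168, 212)  # expected mapping of the first target into the second capture

first_motion_vector = motion_vector(first_pos, second_pos)
second_motion_vector = motion_vector(first_pos, mapped_pos)
# A small difference between the two vectors suggests the targets correspond.
```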
15. The device according to claim 9, wherein the target image matching unit comprises:
a bipartite graph construction unit, configured to construct a bipartite graph using each first target image and each second target image;
a KM computing unit, configured to calculate, according to the KM optimal matching algorithm on the bipartite graph, the second target image uniquely matched with each first target image.
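The unique matching of claim 15 assigns each first target image to exactly one second target image so that the total matching degree is maximal. The sketch below uses exhaustive search over permutations as a stand-in for the KM (Kuhn–Munkres) algorithm, which reaches the same optimum in polynomial time; the score matrix is illustrative:

```python
from itertools import permutations

def best_unique_matching(score_matrix):
    """Return, for each row (first target), the column index (second target)
    of the assignment maximizing the total matching degree.
    Brute force shown for clarity; KM computes the same optimum efficiently."""
    n = len(score_matrix)
    best, best_perm = float("-inf"), None
    for perm in permutations(range(n)):
        total = sum(score_matrix[i][perm[i]] for i in range(n))
        if total > best:
            best, best_perm = total, perm
    return list(best_perm)

# Total-matching-degree matrix: rows = first target images, cols = second
scores = [
    [0.9, 0.2, 0.1],
    [0.3, 0.8, 0.4],
    [0.1, 0.3, 0.7],
]
assignment = best_unique_matching(scores)
```

In practice a library routine such as `scipy.optimize.linear_sum_assignment` solves the same assignment problem without the factorial blow-up.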
16. The device according to claim 9, wherein the device further comprises:
an optimal recognition result determination unit, configured to determine an optimal recognition result from the recognition results of a same target according to the capture time of the original captured image containing each recognition result, the pixel area occupied by each recognition result in its original captured image, and the position of each recognition result within its original captured image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710942633.8A CN109657524B (en) | 2017-10-11 | 2017-10-11 | Image matching method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109657524A true CN109657524A (en) | 2019-04-19 |
CN109657524B CN109657524B (en) | 2021-03-05 |
Family
ID=66109633
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710942633.8A Active CN109657524B (en) | 2017-10-11 | 2017-10-11 | Image matching method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109657524B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100189409A1 (en) * | 2009-01-26 | 2010-07-29 | Paul Brasnett | Video identification |
US20110069158A1 (en) * | 2009-09-21 | 2011-03-24 | Dekel Shiloh | Virtual window system and method |
CN103366571A (en) * | 2013-07-03 | 2013-10-23 | 河南中原高速公路股份有限公司 | Intelligent method for detecting traffic accident at night |
CN103617625A (en) * | 2013-12-13 | 2014-03-05 | 中国气象局北京城市气象研究所 | Image matching method and image matching device |
CN103678661A (en) * | 2013-12-24 | 2014-03-26 | 中国联合网络通信集团有限公司 | Image searching method and terminal |
CN103955481A (en) * | 2014-04-03 | 2014-07-30 | 小米科技有限责任公司 | Picture displaying method and device |
CN104376332A (en) * | 2014-12-09 | 2015-02-25 | 深圳市捷顺科技实业股份有限公司 | License plate recognition method and device |
CN106960451A (en) * | 2017-03-13 | 2017-07-18 | 西安电子科技大学 | A kind of method for lifting the weak texture region characteristic point quantity of image |
- 2017-10-11: CN application CN201710942633.8A filed; granted as CN109657524B (status: Active)
Non-Patent Citations (1)
Title |
---|
Li Lin (李林): "Digital City Construction Guide" (《数字城市建设指南》), Southeast University Press, 30 March 2010 * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110415339A (en) * | 2019-07-19 | 2019-11-05 | 清华大学 | The method and apparatus for calculating the matching relationship between input three-dimensional body |
CN110415339B (en) * | 2019-07-19 | 2021-07-13 | 清华大学 | Method and device for calculating matching relation between input three-dimensional shapes |
WO2021051857A1 (en) * | 2019-09-18 | 2021-03-25 | 北京市商汤科技开发有限公司 | Target object matching method and apparatus, electronic device and storage medium |
TWI747325B (en) * | 2019-09-18 | 2021-11-21 | 大陸商北京市商湯科技開發有限公司 | Target object matching method, target object matching device, electronic equipment and computer readable storage medium |
JP2022542668A (en) * | 2019-09-18 | 2022-10-06 | ベイジン センスタイム テクノロジー ディベロップメント カンパニー リミテッド | Target object matching method and device, electronic device and storage medium |
JP7262659B2 (en) | 2019-09-18 | 2023-04-21 | ベイジン センスタイム テクノロジー ディベロップメント カンパニー リミテッド | Target object matching method and device, electronic device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN109657524B (en) | 2021-03-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10223816B2 (en) | Method and apparatus for generating map geometry based on a received image and probe data | |
US10163263B2 (en) | Using image content to facilitate navigation in panoramic image data | |
CN105550199B (en) | A kind of point polymerization and device based on multi-source map | |
CN108280886A (en) | Laser point cloud mask method, device and readable storage medium storing program for executing | |
CN104183016B (en) | A kind of construction method of quick 2.5 dimension building model | |
CN104063711B (en) | A kind of corridor end point fast algorithm of detecting based on K means methods | |
CN106980633A (en) | The generation method and device of indoor map data | |
CN111047626A (en) | Target tracking method and device, electronic equipment and storage medium | |
CN107316332A (en) | The camera and scene relating scaling method and system of a kind of application intelligent driving | |
CN104751511A (en) | 3D scene construction method and device | |
CN109657524A (en) | A kind of image matching method and device | |
CN115346012A (en) | Intersection surface generation method, apparatus, device, storage medium and program product | |
CN110009571A (en) | Calculation of longitude & latitude method, system and the storage medium of position are detected in camera image | |
US9396552B1 (en) | Image change detection | |
CN113950611A (en) | Method and data processing system for predicting road properties | |
AU2015376657B2 (en) | Image change detection | |
CN112149471A (en) | Loopback detection method and device based on semantic point cloud | |
TW202022804A (en) | Method and system for road image reconstruction and vehicle positioning | |
CN108898679A (en) | A kind of method of component serial number automatic marking | |
CN108399742A (en) | A kind of traffic situation thermal map method for visualizing based on traffic saturation degree | |
EP3664038A1 (en) | Geospatial surveying tool | |
CN110298253A (en) | A kind of physically weak quasi- display methods of urban architecture based on population big data and system | |
CN114820931A (en) | Virtual reality-based CIM (common information model) visual real-time imaging method for smart city | |
CN112651393B (en) | Method, device, equipment and storage medium for processing interest point data | |
CN114061563B (en) | Target point rationality judging method, device, terminal equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | | |
| SE01 | Entry into force of request for substantive examination | | |
| TA01 | Transfer of patent application right | | Effective date of registration: 2020-05-08. Address after: Room 508, Floor 5, Building 4, No. 699 Wangshang Road, Changhe Street, Binjiang District, Hangzhou, Zhejiang Province, 310052. Applicant after: Alibaba (China) Co., Ltd. Address before: Room 2, Floor 16, No. 3 Suzhou Street, Haidian District, Beijing, 100080. Applicant before: AUTONAVI INFORMATION TECHNOLOGY Co., Ltd. |
| GR01 | Patent grant | | |