CN106777350B - Method and device for searching pictures with pictures based on bayonet data - Google Patents

Method and device for searching pictures with pictures based on bayonet data

Info

Publication number
CN106777350B
Authority
CN
China
Prior art keywords
image
target vehicle
feature points
vehicle image
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710033558.3A
Other languages
Chinese (zh)
Other versions
CN106777350A (en)
Inventor
刘兵
刘彦甲
王彬
徐文丽
魏楠楠
耿盼盼
朱慧卿
王文建
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense TransTech Co Ltd
Original Assignee
Hisense TransTech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense TransTech Co Ltd filed Critical Hisense TransTech Co Ltd
Priority to CN201710033558.3A priority Critical patent/CN106777350B/en
Publication of CN106777350A publication Critical patent/CN106777350A/en
Application granted granted Critical
Publication of CN106777350B publication Critical patent/CN106777350B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/255Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Library & Information Science (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention provides a method and a device for searching pictures by pictures based on checkpoint (bayonet) data, comprising the following steps: obtaining a vehicle body area in a target vehicle image, and then extracting feature information of the target vehicle image from that vehicle body area, the feature information comprising the feature points of the target vehicle image and the scale, the main direction and the relative position of the feature points. The feature points of the sample images stored in a feature database, together with their scale, main direction and relative position, are then queried according to the feature information of the target vehicle image, and an image similar to the target vehicle image is determined. In the embodiment of the invention, similar images are searched for in the checkpoint database according to the extracted feature points of the target vehicle image and the related information of those feature points, so that similar images can still be found in the checkpoint database when inherent vehicle information such as color, model or brand is duplicated or missing, which improves the precision of retrieving similar vehicle images from the checkpoint database.

Description

Method and device for searching pictures with pictures based on bayonet data
Technical Field
The embodiment of the invention relates to the field of intelligent transportation, in particular to a method and a device for searching pictures by pictures based on checkpoint data.
Background
With the rapid development and progress of cities, urban safety is receiving more and more attention. When video detection, fake-licensed vehicle search and suspect vehicle search are carried out, pictures of the same vehicle need to be found in the checkpoint database according to a picture of the target vehicle. A traditional checkpoint vehicle retrieval system can only perform single-condition or combined-condition queries based on license plate information, vehicle color, vehicle type and brand information. Because the number of vehicles on the road is large, the probability that vehicle color, vehicle type and brand information are duplicated is high, and when license plate information is missing the target vehicle cannot be found quickly and accurately, so the precision of retrieving similar vehicle images from the checkpoint database according to the target vehicle image is low.
Disclosure of Invention
The embodiment of the invention provides a method and a device for searching pictures by pictures based on checkpoint data, which are used to solve the problem of the low precision with which a traditional checkpoint vehicle retrieval system retrieves similar vehicle images from the checkpoint database according to a target vehicle image.
The embodiment of the invention provides a method for searching a picture by a picture based on bayonet data, which comprises the following steps:
acquiring a vehicle body area in a target vehicle image;
extracting feature information of the target vehicle image from a vehicle body region in the target vehicle image, wherein the feature information comprises feature points of the target vehicle image and the scale, the main direction and the relative position of the feature points of the target vehicle image;
and querying, according to the feature points of the target vehicle image and the scale, the main direction and the relative position of the feature points of the target vehicle image, the feature points of the sample images stored in a feature database and the scale, the main direction and the relative position of the feature points of the sample images, and determining an image similar to the target vehicle image, wherein the feature database is determined according to the images in a checkpoint database.
Optionally, before acquiring the vehicle body region in the target vehicle image, the method further includes:
acquiring a set number of images in the checkpoint database as training images and determining a vehicle body area in the training images;
extracting feature points of the training image from a vehicle body region in the training image;
clustering the feature points of the training images, and determining each type of feature points as a word in a visual dictionary, wherein each word in the visual dictionary corresponds to a word number;
and saving the word and the word number to the visual dictionary.
Optionally, the determining a feature database according to the image in the bayonet database includes:
acquiring all images in the checkpoint database as sample images and determining the body area of the sample images;
extracting feature information of the sample image from a body region in the sample image, the feature information including feature points of the sample image and dimensions, principal directions, and relative positions of the feature points of the sample image;
mapping the feature points of the sample image to words in the visual dictionary and determining word numbers of the feature points of the sample image;
and storing the word number of the characteristic point of the sample image and the scale, the main direction and the relative position of the characteristic point of the sample image into the characteristic database.
Optionally, the determining, from a feature database, an image similar to the target vehicle image according to the feature points of the target vehicle image and the scale, the main direction, and the relative position of the feature points of the target vehicle image includes:
mapping the feature points of the target vehicle image to words in the visual dictionary and determining word numbers of the feature points of the target vehicle image;
screening out feature points with the same word number as the feature points of the target vehicle image from the feature database according to the word number of the feature points of the target vehicle image;
taking an image corresponding to the feature point with the same word number as the feature point of the target vehicle image in the feature database as a comparison image;
determining matching feature points of the target vehicle image and the comparison image according to the visual word number of the target vehicle image and the visual word number of the comparison image, wherein the visual word number is determined according to a word number, the scale of the feature points, a main direction and a relative position;
determining the similarity of the target vehicle image and the contrast image according to the feature points of the target vehicle image, the feature points of the contrast image and the matching feature points of the target vehicle image and the contrast image;
and determining the contrast image with the similarity meeting a set threshold as an image similar to the target vehicle image.
Optionally, the determining of the visual word number according to the word number, the scale of the feature point, the main direction, and the relative position complies with the following formula (1):
M = W_id + T_w × (P_id + T_p × (D_id + T_d × S_id)) ………………………………(1)
where M is the visual word number, W_id is the word number in the visual dictionary, T_w is the total number of words in the visual dictionary, P_id is the relative position number, T_p is the total number of relative positions, D_id is the main direction number, T_d is the total number of main directions, and S_id is the scale number.
The similarity between the target vehicle image and the comparison image, determined according to the feature points of the target vehicle image, the feature points of the comparison image and the matching feature points of the two images, conforms to the following formula (2):
F(A, B) = …………………………(2)
where F(A, B) is the similarity between the target vehicle image and the comparison image, P(A∩B) is the number of matching feature points of the target vehicle image and the comparison image, P(A) is the number of feature points of the target vehicle image, and P(B) is the number of feature points of the comparison image.
Correspondingly, an embodiment of the present invention further provides a device for searching a picture with a picture based on bayonet data, including:
the acquisition module is used for acquiring a vehicle body area in the target vehicle image;
the feature extraction module is used for extracting feature information of the target vehicle image from a vehicle body region in the target vehicle image, wherein the feature information comprises feature points of the target vehicle image and the scale, the main direction and the relative position of the feature points of the target vehicle image;
and the processing module is used for querying, according to the feature points of the target vehicle image and the scale, the main direction and the relative position of the feature points of the target vehicle image, the feature points of the sample images stored in a feature database and the scale, the main direction and the relative position of the feature points of the sample images, and determining images similar to the target vehicle image, wherein the feature database is determined according to the images in a checkpoint database.
Optionally, the obtaining module is further configured to:
acquiring a set number of images in the checkpoint database as training images and determining a vehicle body area in the training images;
extracting feature points of the training image from a vehicle body region in the training image;
clustering the feature points of the training images, and determining each type of feature points as a word in a visual dictionary, wherein each word in the visual dictionary corresponds to a word number;
and saving the word and the word number to the visual dictionary.
Optionally, the processing module is specifically configured to:
acquiring all images in the checkpoint database as sample images and determining the body area of the sample images;
extracting feature information of the sample image from a body region in the sample image, the feature information including feature points of the sample image and dimensions, principal directions, and relative positions of the feature points of the sample image;
mapping the feature points of the sample image to words in the visual dictionary and determining word numbers of the feature points of the sample image;
and storing the word number of the characteristic point of the sample image and the scale, the main direction and the relative position of the characteristic point of the sample image into the characteristic database.
Optionally, the processing module is specifically configured to:
mapping the feature points of the target vehicle image to words in the visual dictionary and determining word numbers of the feature points of the target vehicle image;
screening out feature points with the same word number as the feature points of the target vehicle image from the feature database according to the word number of the feature points of the target vehicle image;
taking an image corresponding to the feature point with the same word number as the feature point of the target vehicle image in the feature database as a comparison image;
determining matching feature points of the target vehicle image and the comparison image according to the visual word number of the target vehicle image and the visual word number of the comparison image, wherein the visual word number is determined according to a word number, the scale of the feature points, a main direction and a relative position;
determining the similarity of the target vehicle image and the contrast image according to the feature points of the target vehicle image, the feature points of the contrast image and the matching feature points of the target vehicle image and the contrast image;
and determining the contrast image with the similarity meeting a set threshold as an image similar to the target vehicle image.
Optionally, the processing module is specifically configured to:
the visual word number determined according to the word number, the scale, the main direction and the relative position of the feature points conforms to the following formula (1):
M = W_id + T_w × (P_id + T_p × (D_id + T_d × S_id)) ………………………………(1)
where M is the visual word number, W_id is the word number in the visual dictionary, T_w is the total number of words in the visual dictionary, P_id is the relative position number, T_p is the total number of relative positions, D_id is the main direction number, T_d is the total number of main directions, and S_id is the scale number.
The similarity between the target vehicle image and the comparison image, determined according to the feature points of the target vehicle image, the feature points of the comparison image and the matching feature points of the two images, conforms to the following formula (2):
F(A, B) = …………………………(2)
where F(A, B) is the similarity between the target vehicle image and the comparison image, P(A∩B) is the number of matching feature points of the target vehicle image and the comparison image, P(A) is the number of feature points of the target vehicle image, and P(B) is the number of feature points of the comparison image.
The embodiment of the invention shows that a vehicle body area in a target vehicle image is obtained, and feature information of the target vehicle image is then extracted from that vehicle body area, the feature information comprising the feature points of the target vehicle image and the scale, the main direction and the relative position of those feature points. The feature points of the sample images stored in a feature database, together with their scale, main direction and relative position, are then queried according to the feature points of the target vehicle image and their scale, main direction and relative position, and an image similar to the target vehicle image is determined, wherein the feature database is determined according to the images in the checkpoint database. In the embodiment of the invention, a plurality of feature points of the target vehicle image and their related information are extracted, and similar images are searched for in the checkpoint database according to the extracted feature points and related information rather than by relying on inherent vehicle information such as license plate information, vehicle color, vehicle type and brand, so that similar images can still be found in the checkpoint database when the color, vehicle type or brand of the vehicle is duplicated or the license plate information is missing, which improves the precision of retrieving similar vehicle images from the checkpoint database.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art from these drawings without inventive effort.
Fig. 1 is a schematic flowchart of a method for searching a graph based on bayonet data according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating main direction division of an image according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating division of relative positions of images according to an embodiment of the present invention;
fig. 4 is a schematic flowchart of another method for searching a graph based on bayonet data according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a device for searching a picture with a picture based on bayonet data according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more clearly apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The checkpoint monitoring system adopts advanced technologies such as photoelectric sensing, computers, image processing, pattern recognition and remote data access to monitor the motor vehicle lanes and non-motor vehicle lanes of a monitored road section around the clock in real time and to record related image data. The image data include information such as the passing time, location, driving direction, license plate number, license plate color and vehicle body color of each vehicle, and the acquired information is transmitted through a computer network to the database of the checkpoint system control center for data storage, query, comparison and other processing.
In the embodiment of the invention, searching pictures by pictures means retrieving, from the checkpoint database, the pictures that contain the same vehicle as an input picture containing the target vehicle. The technology involves multiple disciplines such as computer vision, image processing, pattern recognition, database management and information retrieval.
Based on the above description, fig. 1 exemplarily illustrates a flow of a method for searching for a graph based on bayonet data according to an embodiment of the present invention, where the flow may be performed by a device for searching for a graph based on bayonet data.
As shown in fig. 1, the specific steps of the process include:
step S101, a vehicle body area in the target vehicle image is acquired.
Step S102, extracting characteristic information of the target vehicle image from the vehicle body area in the target vehicle image.
Step S103, inquiring the feature points of the sample images stored in the feature database and the scale, the main direction and the relative position of the feature points of the sample images according to the feature points of the target vehicle images and the scale, the main direction and the relative position of the feature points of the target vehicle images, and determining images similar to the target vehicle images.
Specifically, in step S101, before the vehicle body region in the target vehicle image is acquired, the visual dictionary needs to be trained offline, and the specific process of training the visual dictionary is as follows:
and acquiring a set number of images in the checkpoint database as training images and determining the body area in the training images. Feature points of the training image are then extracted from the body region in the training image. And then clustering the feature points of the training images, determining each type of feature points as a word in the visual dictionary, wherein each word in the visual dictionary corresponds to a word number, and finally storing the word and the word number into the visual dictionary. In specific implementation, the images in the checkpoint database refer to the real-time collection and combination of all checkpointsThe uploaded images contain a vehicle. Before feature information extraction is carried out on the training image, the body area in the training image is determined in a manual deduction mode. The extracted feature points of the training image are feature points of local features of a vehicle body region of the training image, the local features may be Scale-invariant feature transform (SIFT for short) features, or Dense Scale-invariant feature transform (DCSift for short) features, one of the SIFT features is generally 128 dimensions, and the DCSift feature is generally 216 dimensions. In a specific implementation, before training the visual dictionary, the size of the visual dictionary needs to be specified, that is, the number of words contained in the visual dictionary, and the specific numerical value may be determined according to an actual situation, for example, may be 800 or 1000. Setting the size of a visual dictionary to be 1000 in the embodiment of the invention, clustering the extracted feature points of all training images by adopting a K-means clustering algorithm or other clustering algorithms, wherein the total clustering number is 1000, each type of feature points after clustering corresponds to one word in the visual dictionary, each word in the visual dictionary corresponds to one word number and is recorded as WidThe total number of words in the visual dictionary is abbreviated as Tw. And finally, storing the 1000 words and the corresponding word numbers into a visual dictionary.
In step S101, after the target vehicle image is obtained, an accurate vehicle body detection algorithm needs to be used to locate the target vehicle and determine the vehicle body area in the target vehicle image. In a specific implementation, a vehicle body detection algorithm based on integral channel features and cascade boosting may be adopted. The algorithm extracts features from four channels of the image, the four channels comprising color channel features and gradient magnitude features. For model training, a cascade boosting structure is adopted: a strong classifier is realized by cascading multiple boosting (iterative-algorithm) classifiers, each layer of the cascade classifier is a boosting classifier, and each boosting classifier consists of multiple weak classifiers. The boosting classifiers select the most discriminative features and store them in the model so as to achieve the best classification performance. When the vehicle body detection algorithm locates the vehicle body area, a multi-scale sliding-window scanning strategy is adopted, followed by fusion of the detection region information.
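The detector itself is not spelled out beyond the ingredients above, so the sketch below only illustrates the multi-scale sliding-window scan and the fusion of overlapping detection regions; the trained cascade of boosted classifiers over integral channel features is passed in as the `score_window` callable, and the window size, stride, scales and threshold are illustrative assumptions.

import cv2


def detect_vehicle_body(image, score_window, window=(96, 96), stride=16,
                        scales=(1.0, 0.75, 0.5), threshold=0.5):
    """Multi-scale sliding-window scan followed by detection-region fusion.

    `score_window` stands in for the trained cascade: it takes an image
    patch and returns a confidence score. Returns candidate body boxes
    [x, y, w, h] in original image coordinates."""
    boxes = []
    for s in scales:
        resized = cv2.resize(image, None, fx=s, fy=s)
        h, w = resized.shape[:2]
        for y in range(0, h - window[1] + 1, stride):
            for x in range(0, w - window[0] + 1, stride):
                patch = resized[y:y + window[1], x:x + window[0]]
                if score_window(patch) > threshold:
                    boxes.append([int(x / s), int(y / s),
                                  int(window[0] / s), int(window[1] / s)])
    if not boxes:
        return []
    # Detection-region information fusion: merge overlapping boxes. Boxes are
    # duplicated so isolated detections survive groupThreshold=1.
    fused, _ = cv2.groupRectangles(boxes * 2, groupThreshold=1, eps=0.5)
    return [list(box) for box in fused]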
In step S102, the feature information includes the feature points of the target vehicle image and the scale, main direction and relative position of those feature points. It should be noted that the features extracted from the target vehicle image in the embodiment of the invention are of the same type as the features extracted from the training images when the visual dictionary is trained; for example, if SIFT features were extracted when training the visual dictionary, SIFT features are also extracted from the target vehicle image. In the embodiment of the invention, besides the feature points of the vehicle body area in the target vehicle image, the scale, the main direction and the relative position of each feature point are also extracted. Specifically, the feature points in the target vehicle image are extracted at two scales; the total number of scales is denoted T_s, the scale number of the current feature point is denoted S_id, the larger scale is numbered 1 and the smaller scale 0. The main directions of the feature points in the target vehicle image are divided into 12 directions in total, and the total number of main directions is denoted T_d; the division of the main directions is shown in Fig. 2, where the 12 directions are numbered 0 to 11, and the main direction number of the current feature point is denoted D_id, whose value is an integer between 0 and 11. The relative positions of the feature points in the target vehicle image are divided into 9 positions in total, and the total number of relative positions is denoted T_p; the division of the relative positions is shown in Fig. 3, where the 9 positions are numbered 0 to 8, and the relative position number of the current feature point is denoted P_id, whose value is an integer between 0 and 8.
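A sketch of how a single keypoint's scale, main direction and relative position could be quantized into the numbers S_id, D_id and P_id. The exact divisions of Fig. 2 and Fig. 3 are not reproduced here, so the 30-degree direction sectors, the 3x3 position grid and the `reference_size` split between the two scales are assumptions made for illustration; the code expects an OpenCV keypoint detected inside the cropped body region.

def quantize_keypoint(kp, body_width, body_height, reference_size,
                      num_directions=12, grid=(3, 3)):
    """Map an OpenCV KeyPoint to (S_id, D_id, P_id).

    S_id: 1 for the larger scale, 0 for the smaller one (split at
          `reference_size`, an assumed threshold).
    D_id: one of 12 main directions, assumed to be equal 30-degree sectors.
    P_id: one of 9 relative positions, assumed to be a 3x3 grid laid over
          the vehicle body region."""
    s_id = 1 if kp.size > reference_size else 0
    d_id = int(kp.angle // (360.0 / num_directions)) % num_directions
    col = min(int(kp.pt[0] / body_width * grid[0]), grid[0] - 1)
    row = min(int(kp.pt[1] / body_height * grid[1]), grid[1] - 1)
    p_id = row * grid[0] + col
    return s_id, d_id, p_id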
In step S103, a feature database is determined from the images in the bayonet database. The process of determining the feature database is specifically as follows:
and acquiring an image in the checkpoint database as a sample image and determining the body area of the sample image. And then extracting feature information of the sample image from the body region in the sample image, wherein the feature information comprises feature points of the sample image and the scale, the main direction and the relative position of the feature points of the sample image. And then mapping the characteristic points of the sample image to words in the visual dictionary, determining word numbers of the characteristic points of the sample image, and storing the word numbers of the characteristic points of the sample image and the scale, the main direction and the relative position of the characteristic points of the sample image into a characteristic database.
In a specific implementation, images containing vehicles are collected and uploaded to the checkpoint database in real time by each checkpoint, so the number of images in the checkpoint database keeps increasing. In the embodiment of the invention, each image in the checkpoint database is acquired as a sample image. In order to determine an image similar to the target vehicle image from the feature database according to the feature points of the target vehicle image and their scale, main direction and relative position, the features of the sample images extracted when the feature database is established need to be of the same type as the features extracted from the target vehicle image; for example, if SIFT features are extracted when establishing the feature database, SIFT features also need to be extracted from the target vehicle image. Likewise, the scale, main direction and relative position of the feature points need to be extracted in the same way; for example, the scale division, main direction division and relative position division of the feature points of the sample images and of the target vehicle image need to be consistent. The feature points of each sample image are mapped to words in the visual dictionary, so that the vehicle body area of each sample image is mapped to a set of words; the word number corresponding to each feature point of each sample image can then be determined from the word numbers in the visual dictionary, the feature points of all sample images and their corresponding word numbers form the feature database, and finally the scales, main directions and relative positions of the feature points of all sample images are stored in the feature database.
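One way to organize such a feature database is as an inverted index keyed by word number, so that at query time only images sharing at least one word with the target need to be compared. This layout, and the class and method names below, are illustrative assumptions; the stored fields per feature point (word number, scale, main direction, relative position) follow the description above.

from collections import defaultdict


class FeatureDatabase:
    """Inverted index: word number W_id -> list of (image_id, S_id, D_id, P_id)."""

    def __init__(self, dictionary):
        self.dictionary = dictionary          # trained visual dictionary (e.g. KMeans)
        self.index = defaultdict(list)

    def add_sample_image(self, image_id, descriptors, attributes):
        """Store one sample image; attributes[i] = (S_id, D_id, P_id) of descriptor i."""
        word_ids = self.dictionary.predict(descriptors)
        for w_id, (s_id, d_id, p_id) in zip(word_ids, attributes):
            self.index[int(w_id)].append((image_id, s_id, d_id, p_id))

    def candidate_images(self, query_word_ids):
        """Image ids that share at least one word number with the query image."""
        candidates = set()
        for w_id in {int(w) for w in query_word_ids}:
            candidates.update(entry[0] for entry in self.index[w_id])
        return candidates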
Optionally, the feature points of the sample images and the scales, the principal directions, and the relative positions of the feature points of the sample images stored in the feature database are queried according to the feature points of the target vehicle image and the scales, the principal directions, and the relative positions of the feature points of the target vehicle image, and an image similar to the target vehicle image is determined, which specifically includes:
and mapping the characteristic points of the target vehicle image to words in the visual dictionary, determining word numbers of the characteristic points of the target vehicle image, and screening out the characteristic points with the same word numbers as the characteristic points of the target vehicle image from the characteristic database according to the word numbers of the characteristic points of the target vehicle image. And then, taking the image corresponding to the feature point with the same character number as the feature point of the target vehicle image in the feature database as a comparison image. And determining the matching characteristic points of the target vehicle image and the comparison image according to the visual word number of the target vehicle image and the visual word number of the comparison image, wherein the visual word number is determined according to the character number, the scale of the characteristic point, the main direction and the relative position. And then determining the similarity of the target vehicle image and the contrast image according to the characteristic points of the target vehicle image, the characteristic points of the contrast image and the matching characteristic points of the target vehicle image and the contrast image. And finally, determining the contrast image with the similarity meeting the set threshold as an image similar to the target vehicle image.
In a specific implementation, after the feature points of the target vehicle image are extracted, they are mapped to words in the visual dictionary, so that the vehicle body area of the target vehicle image is mapped to a set of words, and the word numbers corresponding to the feature points of the target vehicle image are then determined from the visual dictionary. Feature points whose word numbers are the same as the word numbers of the feature points of the target vehicle image are then queried from the feature database and determined to be the same feature points, and finally the sample images containing the same feature points as the target vehicle image are determined as comparison images.
Specifically, the process of comparing a comparison image with the target vehicle image is as follows: first, the matching feature points of the target vehicle image and the comparison image are determined according to the visual word numbers of the target vehicle image and the visual word numbers of the comparison image, where a visual word number is determined according to the word number, the scale, the main direction and the relative position of a feature point and specifically conforms to the following formula (1):
M = W_id + T_w × (P_id + T_p × (D_id + T_d × S_id)) ………………………………(1)
where M is the visual word number, W_id is the word number in the visual dictionary, T_w is the total number of words in the visual dictionary, P_id is the relative position number, T_p is the total number of relative positions, D_id is the main direction number, T_d is the total number of main directions, and S_id is the scale number.
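Formula (1) packs the four indices of a feature point into a single integer, so that two feature points match exactly when their visual word numbers M are equal. A direct transcription follows, with the totals T_w = 1000, T_p = 9 and T_d = 12 taken from the description above; they are parameters rather than fixed requirements.

def visual_word_number(w_id, s_id, d_id, p_id,
                       total_words=1000, total_positions=9,
                       total_directions=12):
    """Formula (1): M = W_id + T_w * (P_id + T_p * (D_id + T_d * S_id))."""
    return w_id + total_words * (p_id + total_positions
                                 * (d_id + total_directions * s_id))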
The similarity between the target vehicle image and the comparison image is then determined according to the feature points of the target vehicle image, the feature points of the comparison image and the matching feature points of the two images, and specifically conforms to the following formula (2):
F(A, B) = …………………………(2)
where F(A, B) is the similarity between the target vehicle image and the comparison image, P(A∩B) is the number of matching feature points of the target vehicle image and the comparison image, P(A) is the number of feature points of the target vehicle image, and P(B) is the number of feature points of the comparison image.
And finally, determining the contrast image with the similarity meeting the set threshold as an image similar to the target vehicle image, and setting the threshold according to specific conditions.
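Formula (2) is reproduced only as an embedded image in the source, so its exact expression is not recoverable here; the sketch below therefore assumes a Jaccard-style ratio, matched feature points divided by the union of feature points, which uses exactly the three quantities P(A∩B), P(A) and P(B) defined above but should be read as an assumption rather than the patented formula. The 0.3 threshold is likewise only illustrative.

from collections import Counter


def matching_feature_points(query_words, sample_words):
    """P(A ∩ B): feature points whose visual word numbers M coincide
    (counted as a multiset intersection)."""
    qa, qb = Counter(query_words), Counter(sample_words)
    return sum(min(count, qb[m]) for m, count in qa.items())


def similarity(query_words, sample_words):
    """Assumed Jaccard-style stand-in for formula (2):
    F(A, B) = P(A∩B) / (P(A) + P(B) - P(A∩B))."""
    inter = matching_feature_points(query_words, sample_words)
    union = len(query_words) + len(sample_words) - inter
    return inter / union if union else 0.0


def filter_by_threshold(query_words, candidate_words_by_id, threshold=0.3):
    """Keep the comparison images whose similarity meets the set threshold."""
    results = []
    for image_id, sample_words in candidate_words_by_id.items():
        score = similarity(query_words, sample_words)
        if score >= threshold:
            results.append((image_id, score))
    return sorted(results, key=lambda item: item[1], reverse=True)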
In the embodiment of the invention, when features are extracted from the target vehicle image and the sample images, the scale, main direction and relative position of each feature point are extracted in addition to the feature point itself, which optimizes the feature extraction algorithm. Likewise, when the target vehicle image is compared with a comparison image, the scale, main direction and relative position of the feature points are compared in addition to the feature points themselves, which improves the precision of retrieving similar vehicle images from the checkpoint database. In addition, in the embodiment of the invention, after the visual dictionary is trained and the feature points of the target vehicle image and the sample images are mapped to word numbers in the visual dictionary, the word numbers are used to screen the comparison images corresponding to the target vehicle image out of the feature database, so that when searching the checkpoint database for images similar to the target vehicle image only the screened comparison images need to be compared with the target vehicle image rather than every image in the checkpoint database, which speeds up large-scale image retrieval.
In order to better explain the embodiment of the present invention, the following describes a flow of a method for searching a graph with a graph based on bayonet data according to a specific implementation scenario.
As shown in fig. 4, the method comprises the steps of:
step S401, training the visual dictionary off line.
And step S402, acquiring each image in the bayonet image library as a sample image and determining the body area of the sample image.
And S403, extracting the feature points in the body region of the sample image and the scale, the main direction and the relative positions of the feature points.
Step S404, mapping the sample image feature points to corresponding word numbers in the visual dictionary.
Step S405, forming a characteristic database by the word numbers corresponding to the characteristic points of the sample image and the dimensions, the main directions and the relative positions of the characteristic points.
Step S406, acquiring a target vehicle image and positioning a vehicle body area in the target vehicle image.
Step S407, feature points of the body region of the target vehicle image and the dimensions, principal directions, and relative positions of the feature points are extracted.
In step S408, the feature points of the target vehicle image are mapped to the corresponding word numbers in the visual dictionary.
Step S409, the feature points with the same word numbers as the feature points of the target vehicle image are screened out from the feature database.
Step S410, a sample image containing the same feature points as the target vehicle image is taken as a comparison image according to the word numbers.
Step S411, comparing the target vehicle image with the comparison image to determine the similarity between the target vehicle image and the comparison image.
And step S412, determining a contrast image similar to the target vehicle image according to the similarity.
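Putting the steps of Fig. 4 together, a condensed sketch of the query-time flow (steps S406 to S412) follows, reusing the illustrative helpers sketched earlier (extract_sift_descriptors, quantize_keypoint, visual_word_number, FeatureDatabase, similarity); visual_words_of is a further assumed accessor that returns the stored visual word numbers of one sample image. All of these names are assumptions introduced for illustration, not part of the patented system.

import cv2


def search_by_image(target_body_image, dictionary, feature_db,
                    reference_size, threshold=0.3):
    """Extract features of the located body region, map them to visual word
    numbers, screen candidate images via the feature database and rank the
    remaining comparison images by similarity."""
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(target_body_image, None)
    word_ids = dictionary.predict(descriptors)

    height, width = target_body_image.shape[:2]
    query_words = []
    for kp, w_id in zip(keypoints, word_ids):
        s_id, d_id, p_id = quantize_keypoint(kp, width, height, reference_size)
        query_words.append(visual_word_number(int(w_id), s_id, d_id, p_id))

    results = []
    for image_id in feature_db.candidate_images(word_ids):
        score = similarity(query_words, feature_db.visual_words_of(image_id))
        if score >= threshold:
            results.append((image_id, score))
    return sorted(results, key=lambda item: item[1], reverse=True)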
From the above it can be seen that the embodiment of the invention provides a method and a device for searching pictures by pictures based on checkpoint data, comprising: obtaining a vehicle body area in a target vehicle image, and then extracting feature information of the target vehicle image from that vehicle body area, the feature information comprising the feature points of the target vehicle image and the scale, the main direction and the relative position of those feature points; and then querying, according to the feature points of the target vehicle image and their scale, main direction and relative position, the feature points of the sample images stored in a feature database together with their scale, main direction and relative position, and determining an image similar to the target vehicle image, wherein the feature database is determined according to the images in the checkpoint database. In the embodiment of the invention, a plurality of feature points of the target vehicle image and their related information are extracted, and similar images are searched for in the checkpoint database according to the extracted feature points and related information rather than by relying on inherent vehicle information such as license plate information, vehicle color, vehicle type and brand, so that similar images can still be found in the checkpoint database when the color, vehicle type or brand of the vehicle is duplicated or the license plate information is missing, which improves the precision of retrieving similar vehicle images from the checkpoint database.
Based on the same conception, fig. 5 exemplarily shows a structure of a device for searching for a graph based on bayonet data, which can execute a flow of searching for a graph based on bayonet data according to an embodiment of the present invention.
As shown in fig. 5, the apparatus includes:
an obtaining module 501, configured to obtain a vehicle body area in a target vehicle image;
a feature extraction module 502, configured to extract feature information of the target vehicle image from a vehicle body region in the target vehicle image, where the feature information includes feature points of the target vehicle image and a scale, a main direction, and a relative position of the feature points of the target vehicle image;
a processing module 503, configured to query, according to the feature points of the target vehicle image and the scale, the main direction and the relative position of the feature points of the target vehicle image, the feature points of the sample images stored in a feature database and the scale, the main direction and the relative position of the feature points of the sample images, and to determine an image similar to the target vehicle image, where the feature database is determined according to images in a checkpoint database.
Optionally, the obtaining module 501 is further configured to:
acquiring a set number of images in the checkpoint database as training images and determining a vehicle body area in the training images;
extracting feature points of the training image from a vehicle body region in the training image;
clustering the feature points of the training images, and determining each type of feature points as a word in a visual dictionary, wherein each word in the visual dictionary corresponds to a word number;
and saving the word and the word number to the visual dictionary.
Optionally, the processing module 503 is specifically configured to:
acquiring all images in the checkpoint database as sample images and determining the body area of the sample images;
extracting feature information of the sample image from a body region in the sample image, the feature information including feature points of the sample image and dimensions, principal directions, and relative positions of the feature points of the sample image;
mapping the feature points of the sample image to words in the visual dictionary and determining word numbers of the feature points of the sample image;
and storing the word number of the characteristic point of the sample image and the scale, the main direction and the relative position of the characteristic point of the sample image into the characteristic database.
Optionally, the processing module 503 is specifically configured to:
mapping the feature points of the target vehicle image to words in the visual dictionary and determining word numbers of the feature points of the target vehicle image;
screening out feature points with the same word number as the feature points of the target vehicle image from the feature database according to the word number of the feature points of the target vehicle image;
taking an image corresponding to the feature point with the same word number as the feature point of the target vehicle image in the feature database as a comparison image;
determining matching feature points of the target vehicle image and the comparison image according to the visual word number of the target vehicle image and the visual word number of the comparison image, wherein the visual word number is determined according to a word number, the scale of the feature points, a main direction and a relative position;
determining the similarity of the target vehicle image and the contrast image according to the feature points of the target vehicle image, the feature points of the contrast image and the matching feature points of the target vehicle image and the contrast image;
and determining the contrast image with the similarity meeting a set threshold as an image similar to the target vehicle image.
Optionally, the processing module 503 is specifically configured to:
the visual word number determined according to the word number, the scale, the main direction and the relative position of the feature points conforms to the following formula (1):
M = W_id + T_w × (P_id + T_p × (D_id + T_d × S_id)) ………………………………(1)
where M is the visual word number, W_id is the word number in the visual dictionary, T_w is the total number of words in the visual dictionary, P_id is the relative position number, T_p is the total number of relative positions, D_id is the main direction number, T_d is the total number of main directions, and S_id is the scale number.
The similarity between the target vehicle image and the comparison image, determined according to the feature points of the target vehicle image, the feature points of the comparison image and the matching feature points of the two images, conforms to the following formula (2):
F(A, B) = …………………………(2)
where F(A, B) is the similarity between the target vehicle image and the comparison image, P(A∩B) is the number of matching feature points of the target vehicle image and the comparison image, P(A) is the number of feature points of the target vehicle image, and P(B) is the number of feature points of the comparison image.
It should be apparent to those skilled in the art that embodiments of the present invention may be provided as a method or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A method for searching a picture by a picture based on bayonet data is characterized by comprising the following steps:
acquiring a vehicle body area in a target vehicle image;
extracting feature information of the target vehicle image from a vehicle body region in the target vehicle image, wherein the feature information comprises feature points of the target vehicle image and the scale, the main direction and the relative position of the feature points of the target vehicle image;
obtaining a contrast image corresponding to the target vehicle image from a feature database according to the feature points of the target vehicle image; wherein the feature database is determined from images in a bayonet database;
determining matching feature points of the target vehicle image and the comparison image according to the visual word number of the target vehicle image and the visual word number of the comparison image, wherein the visual word number is determined according to a word number, the scale, the main direction and the relative position of the feature points, and the word number is obtained by mapping the feature points of the image to a visual dictionary;
determining the similarity of the target vehicle image and the contrast image according to the feature points of the target vehicle image, the feature points of the contrast image and the matching feature points of the target vehicle image and the contrast image;
and determining the contrast image with the similarity meeting a set threshold as an image similar to the target vehicle image.
2. The method of claim 1, wherein prior to acquiring the body region in the target vehicle image, further comprising:
acquiring a set number of images in the checkpoint database as training images and determining a vehicle body area in the training images;
extracting feature points of the training image from a vehicle body region in the training image;
clustering the feature points of the training images, and determining each type of feature points as a word in a visual dictionary, wherein each word in the visual dictionary corresponds to a word number;
and saving the word and the word number to the visual dictionary.
3. The method of claim 2, wherein the method further comprises:
acquiring all images in the checkpoint database as sample images and determining the body area of the sample images;
extracting feature information of the sample image from a body region in the sample image, the feature information including feature points of the sample image and dimensions, principal directions, and relative positions of the feature points of the sample image;
mapping the feature points of the sample image to words in the visual dictionary and determining word numbers of the feature points of the sample image;
and storing the word number of the characteristic point of the sample image and the scale, the main direction and the relative position of the characteristic point of the sample image into the characteristic database.
4. The method according to claim 3, wherein the obtaining a comparison image corresponding to the target vehicle image from the feature database according to the feature points of the target vehicle image comprises:
mapping the feature points of the target vehicle image to words in the visual dictionary and determining word numbers of the feature points of the target vehicle image;
screening out feature points with the same word number as the feature points of the target vehicle image from the feature database according to the word number of the feature points of the target vehicle image;
and taking the image corresponding to the feature point with the same word number as the feature point of the target vehicle image in the feature database as a comparison image.
5. The method of claim 4, wherein the determining the visual word number from the word number, the scale of the feature point, the principal direction, and the relative position conforms to the following equation (1):
M = W_id + T_w × (P_id + T_p × (D_id + T_d × S_id)) ………………………………(1)
where M is the visual word number, W_id is the word number in the visual dictionary, T_w is the total number of words in the visual dictionary, P_id is the relative position number, T_p is the total number of relative positions, D_id is the main direction number, T_d is the total number of main directions, and S_id is the scale number;
the similarity of the target vehicle image and the contrast image is determined to accord with the following formula (2) according to the feature points of the target vehicle image, the feature points of the contrast image and the matching feature points of the target vehicle image and the contrast image:
Figure FDA0002202471220000031
where F (a, B) is the similarity between the target vehicle image and the comparison image, P (a ∩ B) is the number of matching feature points between the target vehicle image and the comparison image, P (a) is the number of feature points of the target vehicle image, and P (B) is the number of feature points of the comparison image.
6. A picture searching device based on bayonet data is characterized by comprising:
the acquisition module is used for acquiring a vehicle body area in the target vehicle image;
the feature extraction module is used for extracting feature information of the target vehicle image from a vehicle body region in the target vehicle image, wherein the feature information comprises feature points of the target vehicle image and the scale, the main direction and the relative position of the feature points of the target vehicle image;
the processing module is used for obtaining a contrast image corresponding to the target vehicle image from a feature database according to the feature points of the target vehicle image; wherein the feature database is determined from images in a bayonet database; determining matching feature points of the target vehicle image and the comparison image according to the visual word number of the target vehicle image and the visual word number of the comparison image, wherein the visual word number is determined according to a word number, the scale, the main direction and the relative position of the feature points, and the word number is obtained by mapping the feature points of the image to a visual dictionary; determining the similarity of the target vehicle image and the contrast image according to the feature points of the target vehicle image, the feature points of the contrast image and the matching feature points of the target vehicle image and the contrast image; and determining the contrast image with the similarity meeting a set threshold as an image similar to the target vehicle image.
7. The apparatus of claim 6, wherein the acquisition module is further configured to:
acquiring a set number of images from the bayonet database as training images and determining the vehicle body area in the training images;
extracting feature points of the training image from a vehicle body region in the training image;
clustering the feature points of the training images, and determining each type of feature points as a word in a visual dictionary, wherein each word in the visual dictionary corresponds to a word number;
and saving the word and the word number to the visual dictionary.
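For illustration only: the clustering in claim 7 realized with k-means over the training descriptors, where each cluster centre becomes a word and its row index the word number. The vocabulary size num_words is a free parameter not fixed by the claim.

```python
import numpy as np
from sklearn.cluster import KMeans


def build_visual_dictionary(training_descriptors: np.ndarray,
                            num_words: int = 1000) -> np.ndarray:
    """Cluster the training feature points; each cluster centre is one word of
    the visual dictionary, and its row index serves as the word number."""
    kmeans = KMeans(n_clusters=num_words, n_init=10, random_state=0)
    kmeans.fit(training_descriptors)
    return kmeans.cluster_centers_      # shape: (num_words, descriptor_dim)


def word_number(descriptor: np.ndarray, dictionary: np.ndarray) -> int:
    """Map one feature point to its nearest word and return the word number."""
    distances = np.linalg.norm(dictionary - descriptor, axis=1)
    return int(np.argmin(distances))
```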
8. The apparatus of claim 7, wherein the processing module is specifically configured to:
acquiring all images in the bayonet database as sample images and determining the vehicle body area of the sample images;
extracting feature information of the sample image from the vehicle body region in the sample image, the feature information including the feature points of the sample image and the scale, main direction and relative position of the feature points of the sample image;
mapping the feature points of the sample image to words in the visual dictionary and determining word numbers of the feature points of the sample image;
and storing the word number of the characteristic point of the sample image and the scale, the main direction and the relative position of the characteristic point of the sample image into the characteristic database.
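For illustration only: populating the feature database for claim 8, reusing the hypothetical helpers from the earlier sketches (extract_feature_info, word_number, FeatureRecord, add_to_feature_database); this is a sketch under those assumptions, not the patented implementation.

```python
def index_sample_image(image_id: str, image, body_box, dictionary) -> None:
    """Extract feature information from the body region of one bayonet image and
    store each feature point's word number, scale, main direction and relative
    position in the feature database (here, the in-memory inverted index)."""
    descriptors, scale_ids, direction_ids, position_ids = \
        extract_feature_info(image, body_box)
    if descriptors is None:
        return  # no feature points detected in the body region
    for desc, s_id, d_id, p_id in zip(descriptors, scale_ids,
                                      direction_ids, position_ids):
        w_id = word_number(desc, dictionary)
        add_to_feature_database(
            FeatureRecord(image_id, w_id, s_id, d_id, p_id))
```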
9. The apparatus of claim 8, wherein the processing module is specifically configured to:
mapping the feature points of the target vehicle image to words in the visual dictionary and determining word numbers of the feature points of the target vehicle image;
screening out feature points with the same word number as the feature points of the target vehicle image from the feature database according to the word numbers of the feature points of the target vehicle image;
and taking the image corresponding to the feature points with the same word number as the feature points of the target vehicle image in the feature database as a comparison image.
10. The apparatus of claim 9, wherein the processing module is specifically configured to:
the visual word number determined according to the word number, the scale, the main direction and the relative position of the feature points conforms to the following formula (1):
M = W_id + T_w × (P_id + T_p × (D_id + T_d × S_id))    (1)
wherein M is the visual word number, W_id is the word number in the visual dictionary, T_w is the total number of words in the visual dictionary, P_id is the relative position number, T_p is the total number of relative positions, D_id is the main direction number, T_d is the total number of main directions, and S_id is the scale number;
the similarity between the target vehicle image and the comparison image, determined according to the feature points of the target vehicle image, the feature points of the comparison image, and the matching feature points of the target vehicle image and the comparison image, conforms to the following formula (2):
[Formula (2) is reproduced as image FDA0002202471220000051 in the original publication]
wherein F(A, B) is the similarity between the target vehicle image and the comparison image, P(A ∩ B) is the number of matching feature points between the target vehicle image and the comparison image, P(A) is the number of feature points of the target vehicle image, and P(B) is the number of feature points of the comparison image.
CN201710033558.3A 2017-01-16 2017-01-16 Method and device for searching pictures with pictures based on bayonet data Active CN106777350B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710033558.3A CN106777350B (en) 2017-01-16 2017-01-16 Method and device for searching pictures with pictures based on bayonet data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710033558.3A CN106777350B (en) 2017-01-16 2017-01-16 Method and device for searching pictures with pictures based on bayonet data

Publications (2)

Publication Number Publication Date
CN106777350A CN106777350A (en) 2017-05-31
CN106777350B true CN106777350B (en) 2020-02-14

Family

ID=58946246

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710033558.3A Active CN106777350B (en) 2017-01-16 2017-01-16 Method and device for searching pictures with pictures based on bayonet data

Country Status (1)

Country Link
CN (1) CN106777350B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107729502A (en) * 2017-10-18 2018-02-23 公安部第三研究所 A kind of bayonet vehicle individualized feature intelligent retrieval system and method
CN111127541B (en) * 2018-10-12 2024-02-27 杭州海康威视数字技术股份有限公司 Method and device for determining vehicle size and storage medium
CN110334763B (en) * 2019-07-04 2021-07-23 北京字节跳动网络技术有限公司 Model data file generation method, model data file generation device, model data file identification device, model data file generation apparatus, model data file identification apparatus, and model data file identification medium
CN111680556B (en) * 2020-04-29 2024-06-07 平安国际智慧城市科技股份有限公司 Method, device, equipment and storage medium for identifying traffic gate vehicle type

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104199842A (en) * 2014-08-07 2014-12-10 同济大学 Similar image retrieval method based on local feature neighborhood information
CN104239898A (en) * 2014-09-05 2014-12-24 浙江捷尚视觉科技股份有限公司 Method for carrying out fast vehicle comparison and vehicle type recognition at tollgate
CN104699726A (en) * 2013-12-18 2015-06-10 杭州海康威视数字技术股份有限公司 Vehicle image retrieval method and device for traffic block port
CN105354533A (en) * 2015-09-28 2016-02-24 江南大学 Bag-of-word model based vehicle type identification method for unlicensed vehicle at gate

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104699726A (en) * 2013-12-18 2015-06-10 杭州海康威视数字技术股份有限公司 Vehicle image retrieval method and device for traffic block port
CN104199842A (en) * 2014-08-07 2014-12-10 同济大学 Similar image retrieval method based on local feature neighborhood information
CN104239898A (en) * 2014-09-05 2014-12-24 浙江捷尚视觉科技股份有限公司 Method for carrying out fast vehicle comparison and vehicle type recognition at tollgate
CN105354533A (en) * 2015-09-28 2016-02-24 江南大学 Bag-of-word model based vehicle type identification method for unlicensed vehicle at gate

Also Published As

Publication number Publication date
CN106777350A (en) 2017-05-31

Similar Documents

Publication Publication Date Title
CN107679078B (en) Bayonet image vehicle rapid retrieval method and system based on deep learning
CN108197538B (en) Bayonet vehicle retrieval system and method based on local features and deep learning
CN109558823B (en) Vehicle identification method and system for searching images by images
CN106777350B (en) Method and device for searching pictures with pictures based on bayonet data
CN102521366B (en) Image retrieval method integrating classification with hash partitioning and image retrieval system utilizing same
CN110209866A (en) A kind of image search method, device, equipment and computer readable storage medium
CN110175615B (en) Model training method, domain-adaptive visual position identification method and device
CN105160309A (en) Three-lane detection method based on image morphological segmentation and region growing
Varghese et al. An efficient algorithm for detection of vacant spaces in delimited and non-delimited parking lots
CN106570439B (en) Vehicle detection method and device
CN103366181A (en) Method and device for identifying scene integrated by multi-feature vision codebook
CN104615986A (en) Method for utilizing multiple detectors to conduct pedestrian detection on video images of scene change
CN111078946A (en) Bayonet vehicle retrieval method and system based on multi-target regional characteristic aggregation
CN113177518A (en) Vehicle weight identification method recommended by weak supervision area
CN106033443B (en) A kind of expanding query method and device in vehicle retrieval
CN110826415A (en) Method and device for re-identifying vehicles in scene image
Sikirić et al. Image representations on a budget: Traffic scene classification in a restricted bandwidth scenario
CN103279738A (en) Automatic identification method and system for vehicle logo
CN105654122A (en) Spatial pyramid object identification method based on kernel function matching
CN107273889B (en) License plate recognition method based on statistics
CN111860219A (en) High-speed road occupation judging method and device and electronic equipment
Peng et al. Real-time illegal parking detection algorithm in urban environments
CN111383286A (en) Positioning method, positioning device, electronic equipment and readable storage medium
CN116704490B (en) License plate recognition method, license plate recognition device and computer equipment
CN114155576A (en) Image processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant