CN111368729A - Vehicle identity discrimination method based on twin neural network

Info

Publication number
CN111368729A
Authority
CN
China
Prior art keywords
network
vehicle
layer
neural network
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010142310.2A
Other languages
Chinese (zh)
Inventor
王连涛
殷康
侯康馨
李庆武
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changzhou Campus of Hohai University
Original Assignee
Changzhou Campus of Hohai University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changzhou Campus of Hohai University filed Critical Changzhou Campus of Hohai University
Priority to CN202010142310.2A
Publication of CN111368729A
Legal status: Withdrawn (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks


Abstract

The invention discloses a vehicle identity discrimination method based on a twin neural network. The method constructs a twin neural network and trains it on the Veri-Wild data set; it then builds a data set for the specific application scene and continues training the network; next it builds a picture library of target vehicles for that scene; finally, pictures are captured on site and the trained network judges whether a captured vehicle is a target vehicle. The method does not rely solely on the license plate, so it effectively handles cases where a vehicle is hard to identify because its plate is fake or not visible.

Description

Vehicle identity discrimination method based on twin neural network
Technical Field
The invention belongs to the field of image processing, and relates to a vehicle identity discrimination method based on a twin neural network.
Background
At present, vehicle identity recognition mainly depends on reading the license plate number, but that approach fails when the plate is invisible because of the shooting angle or when the vehicle carries a fake plate. Besides the license plate, the body color, vehicle shape, vehicle logo, window stickers and similar cues can serve as auxiliary evidence for identification.
Disclosure of Invention
The invention discloses a vehicle identity discrimination method based on a twin neural network: the captured vehicle picture and the original picture from the picture library are fed into the network, and whether they show the same vehicle is decided from their degree of similarity.
The invention mainly adopts the following technical scheme:
a vehicle identity distinguishing method based on a twin neural network comprises the following specific steps:
step S1: constructing a twin neural network with the following structure: two convolutional neural networks for feature extraction, followed by a distance measurement layer and a probability calculation layer;
step S2: training the network on a Veri-Wild data set;
step S3: constructing a data set of a specific application scene, and continuing to train the network;
step S4: constructing a target vehicle picture library in a specific application scene;
step S5: capturing pictures on site and judging whether the vehicle is a target vehicle using the trained network.
The specific network structure in step S1 is:
S1.1: the two feature extraction networks are convolutional neural networks; any network composed of convolutional layers and pooling layers, including VGGNet, GoogLeNet, ResNet and the like, can be used. Let h1 be the output of sample 1 from the feature extraction layer and h2 the output of sample 2;
S1.2: the distance metric layer is a weighted L1 computation layer, and the distance between the two feature vectors is computed by the following formula:
d = Σj αj |h1,j - h2,j|
where αj is the weighting coefficient corresponding to the j-th components h1,j and h2,j.
S1.3: the probability calculation layer is a fully connected layer, the number of nodes of each layer being set according to practical conditions and experience; the output of the distance measurement layer is normalized with the Sigmoid function
σ(z) = 1 / (1 + e^(-z)).
Finally, the output of the entire network is a probability value y.
The training of the specific network in step S2 includes:
S2.1: the network is trained on sample pairs; during each training step, two pictures are randomly selected from the training set to form one input pair. Two pictures of the same vehicle form a positive pair; two pictures of different vehicles form a negative pair. Positive and negative pairs are each set to account for 50% of the training pairs;
S2.2: each sample pair is fed into the network and passes through the feature extraction layer, the distance measurement layer and the probability calculation layer in turn, finally yielding a probability estimate for the pair; a contrastive loss function is introduced:
L = Σ [ y·d² + (1 - y)·max(margin - d, 0)² ]
where margin is a preset value.
S2.3: the network back-propagates according to the loss function to automatically adjust the parameters.
The step of constructing the data set of the specific application scenario and further training the network in step S3 is as follows:
S3.1: according to the application requirements, a number of photos similar to the actual snapshots in position, angle, time and other conditions are taken on the actual site to form a small data set (more than 1000 photos).
S3.2: using the training method of step S2, the network is further trained on this small data set.
The step of determining whether the vehicle is the target vehicle using the trained network in step S5 includes:
S5.1: the captured picture and the picture stored under the corresponding license plate in the target vehicle library are fed into the trained network.
S5.2: the network computes the features of each picture and the distance between the two feature vectors, and finally outputs a probability value y.
S5.3: if the network output y is greater than 0.5, the two pictures show the same vehicle; otherwise they show different vehicles.
The invention uses a twin convolutional neural network that combines multiple vehicle cues to judge vehicle identity. This line of work can be applied to vehicle identity verification in parking lots, traffic-flow counting at monitored intersections, identification of suspect vehicles in surveillance footage, and similar tasks. A twin neural network can effectively judge whether two objects are the same, and with sufficient training it is expected to solve the vehicle identity discrimination problem.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a single training pass of the twin neural network;
FIG. 3 shows samples from the data set of the application scene;
FIG. 4 is a picture of a vehicle in the target vehicle library;
FIG. 5 is a picture of a fake-licensed vehicle;
fig. 6 is a picture of the real vehicle corresponding to the license plate in the target vehicle library.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application are clearly and completely described below, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The technical solution of the present invention is described in further detail below with reference to the accompanying drawings, taking parking-lot snapshots as an example.
As shown in fig. 1, a vehicle identity discrimination method based on a twin neural network includes the following specific steps:
step S1: constructing a twin neural network with the following structure: two convolutional neural networks for feature extraction, followed by a distance measurement layer and a probability calculation layer;
step S2: training the network on a Veri-Wild data set;
step S3: constructing a data set of a specific application scene, and continuing to train the network;
step S4: constructing a target vehicle picture library in a specific application scene, a part of which is shown in FIG. 3;
step S5: capturing pictures on site and judging whether the vehicle is a target vehicle using the trained network.
The specific network structure of the twin neural network in step S1 is as follows:
S1.1: the two feature extraction networks are convolutional neural networks; any network composed of convolutional layers and pooling layers, including VGGNet, GoogLeNet, ResNet and the like, can be used. Let h1 be the output of sample 1 from the feature extraction layer and h2 the output of sample 2;
S1.2: the distance metric layer is a weighted L1 computation layer, and the distance between the two feature vectors is computed by the following formula:
d = Σj αj |h1,j - h2,j|
where αj is the weighting coefficient corresponding to the j-th components h1,j and h2,j.
S1.3: the probability calculation layer is a fully connected layer, the number of nodes of each layer being set according to practical conditions and experience; the output of the distance measurement layer is normalized with the Sigmoid function
σ(z) = 1 / (1 + e^(-z)).
Finally, the output of the entire network is a probability value y.
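For illustration, the structure of S1.1-S1.3 can be sketched in PyTorch. This is a minimal sketch rather than the patented implementation: the ResNet-18 backbone, the 512-dimensional feature vector and the single-node fully connected output layer are assumptions, since the description leaves the backbone and the layer sizes open.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class TwinVehicleNet(nn.Module):
    """Twin network: one shared CNN feature extractor applied to both
    inputs, a weighted L1 distance layer, and a sigmoid probability head."""

    def __init__(self, embed_dim: int = 512):
        super().__init__()
        # Shared feature extraction network; both branches use the same
        # weights. ResNet-18 is assumed here, but any stack of convolutional
        # and pooling layers (VGGNet, GoogLeNet, ...) plugs in the same way.
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()                  # keep the pooled 512-d features
        self.extractor = backbone
        # alpha_j: one learnable weighting coefficient per feature component.
        self.alpha = nn.Parameter(torch.ones(embed_dim))
        # Probability calculation layer: fully connected, then Sigmoid.
        self.fc = nn.Linear(embed_dim, 1)

    def forward(self, x1: torch.Tensor, x2: torch.Tensor) -> torch.Tensor:
        h1 = self.extractor(x1)                      # output of sample 1
        h2 = self.extractor(x2)                      # output of sample 2
        d = self.alpha.abs() * (h1 - h2).abs()       # weighted L1, per component
        return torch.sigmoid(self.fc(d)).squeeze(1)  # probability value y
```

Sharing one extractor between the two branches is what makes this a twin network: both pictures are embedded by identical weights into the same feature space, so their distance is meaningful.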
As shown in fig. 2, the training step of the specific network in step S2 is:
S2.1: the network is trained on sample pairs; during each training step, two pictures are randomly selected from the training set to form one input pair. Two pictures of the same vehicle form a positive pair; two pictures of different vehicles form a negative pair. Positive and negative pairs are each set to account for 50% of the training pairs;
S2.2: each sample pair is fed into the network and passes through the feature extraction layer, the distance measurement layer and the probability calculation layer in turn, finally yielding a probability estimate for the pair; a contrastive loss function is introduced:
L = Σ [ y·d² + (1 - y)·max(margin - d, 0)² ]
where margin is a preset value.
S2.3: the network back-propagates according to the loss function to automatically adjust the parameters.
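A matching training sketch follows. The optimizer, the learning rate and the reduction of the weighted L1 vector to the scalar distance d used by the loss are assumptions, as the description introduces the contrastive loss over a distance d without fixing these details; `images_by_vehicle` is a hypothetical dict mapping each vehicle ID to a list of its image tensors.

```python
import random
import torch

def contrastive_loss(d, y, margin=1.0):
    # L = sum( y*d^2 + (1-y)*max(margin - d, 0)^2 ); y is 1 for a positive
    # pair (same vehicle) and 0 for a negative pair (different vehicles).
    return (y * d.pow(2) + (1 - y) * (margin - d).clamp(min=0).pow(2)).sum()

def sample_pair(images_by_vehicle):
    # Draw one training pair, positive or negative with equal probability
    # (assumes every vehicle has at least two pictures in the set).
    ids = list(images_by_vehicle)
    if random.random() < 0.5:                              # positive pair
        vid = random.choice(ids)
        x1, x2 = random.sample(images_by_vehicle[vid], 2)
        return x1, x2, 1.0
    v1, v2 = random.sample(ids, 2)                         # negative pair
    return (random.choice(images_by_vehicle[v1]),
            random.choice(images_by_vehicle[v2]), 0.0)

model = TwinVehicleNet()                                   # sketch class from above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed optimizer

def train_step(x1, x2, label):
    h1, h2 = model.extractor(x1), model.extractor(x2)
    d = (model.alpha.abs() * (h1 - h2).abs()).sum(dim=1)   # scalar distance per pair
    loss = contrastive_loss(d, label)
    optimizer.zero_grad()
    loss.backward()                                        # back-propagate the loss
    optimizer.step()                                       # adjust the parameters
    return loss.item()
```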
The step of constructing the data set of the specific application scenario and further training the network in step S3 is as follows:
S3.1: according to the application requirements, a number of photos similar to the actual snapshots in position, angle, time and other conditions are taken on the actual site to form a small data set (more than 1000 photos).
S3.2: using the training method described in step S2, the network is further trained on the small data set.
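The description only states that training continues on the scene-specific set. A common way to do this, assumed here rather than taken from the patent, is to freeze the earliest backbone layers and lower the learning rate so that the features learned on Veri-Wild are preserved:

```python
# Fine-tuning sketch: freeze the earliest ResNet layers, train the rest
# on the small scene-specific data set with a reduced learning rate.
for name, param in model.extractor.named_parameters():
    if name.startswith(("conv1", "bn1", "layer1")):
        param.requires_grad = False
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-5)
```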
The step of constructing the target vehicle picture library in the specific application scenario in step S4 is as follows:
S4.1: a picture of each vehicle is taken at a set angle and distance and stored in the target vehicle library for comparison with the snapshot images; FIG. 4 shows the picture of one vehicle in the target vehicle library.
The step of determining whether the vehicle is the target vehicle using the trained network in step S5 includes:
S5.1: the captured picture and the picture stored under the corresponding license plate in the target vehicle library are fed into the trained network.
S5.2: the network computes the features of each picture and the distance between the two feature vectors, and finally outputs a probability value y.
S5.3: if the network output y is greater than 0.5, the two pictures show the same vehicle; otherwise they show different vehicles.
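Steps S5.1-S5.3 then reduce to a single forward pass and a threshold test. A sketch reusing the model from above; the 0.5 threshold is the one stated in S5.3:

```python
import torch

@torch.no_grad()
def is_target_vehicle(model, snapshot, gallery_image, threshold=0.5):
    # Compare a field snapshot with the picture stored in the target
    # vehicle library under the claimed license plate; an output above
    # the threshold means the network judges them the same vehicle.
    model.eval()
    y = model(snapshot.unsqueeze(0), gallery_image.unsqueeze(0))
    return y.item() > threshold
```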
FIG. 5 shows a fake-licensed vehicle; when it was compared with the vehicle registered under that license plate in the target vehicle library, the network successfully identified it as fake-licensed. FIG. 6 is a photograph of the real vehicle taken at an angle different from its picture in the target vehicle library; the network comparison successfully judged the two to be the same vehicle.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and such improvements and modifications shall also fall within the protection scope of the present invention.

Claims (5)

1. A vehicle identity discrimination method based on a twin neural network, characterized by comprising the following specific steps:
step S1: constructing a twin neural network with the following structure: two convolutional neural networks for feature extraction, followed by a distance measurement layer and a probability calculation layer;
step S2: training the twin neural network of step S1 on a Veri-Wild data set;
step S3: constructing a data set of a specific application scene, and continuing to train the twin neural network in the step S1;
step S4: constructing a target vehicle picture library in a specific application scene;
step S5: capturing pictures on site and judging whether the vehicle is a target vehicle using the trained network.
2. The twin neural network-based vehicle identity discrimination method according to claim 1, wherein the twin neural network in step S1 has the following specific structure:
s1.1: the two feature extraction networks are convolutional neural networks; let h1 be the output of sample 1 from the feature extraction layer and h2 the output of sample 2;
s1.2: the distance metric layer is a weighted L1 computation layer, and the distance between two feature vectors is computed by the following formula:
d = Σj αj |h1,j - h2,j|
where αj is the weighting coefficient corresponding to the j-th components h1,j and h2,j;
s1.3: the probability calculation layer is a fully connected layer, the number of nodes of each layer being set according to practical conditions and experience; the output of the distance measurement layer is normalized with the Sigmoid function
σ(z) = 1 / (1 + e^(-z)),
and finally the output of the entire network is a probability value y.
3. The twin neural network-based vehicle identity discrimination method according to claim 2, wherein the training of the network in step S2 is:
s2.1: the network is trained on sample pairs: during each training step, two pictures are randomly selected from the training set to form one input pair, two pictures of the same vehicle forming a positive pair and two pictures of different vehicles forming a negative pair, with positive and negative pairs each accounting for 50% of the training pairs;
s2.2: each sample pair is fed into the network and passes through the feature extraction layer, the distance measurement layer and the probability calculation layer in turn, finally yielding a probability estimate for the pair; a contrastive loss function is introduced:
L = Σ [ y·d² + (1 - y)·max(margin - d, 0)² ]
where margin is a preset value;
s2.3: the network back-propagates according to the loss function to automatically adjust the parameters.
4. The twin neural network-based vehicle identity discrimination method according to claim 3, wherein the step of constructing the data set of the specific application scene and further training the network in step S3 comprises:
s3.1: according to the application requirements, a number of photos similar to the actual snapshots in position, angle, time and other conditions are taken on the actual site to form a small data set;
s3.2: using the training method of claim 3, the network is further trained on the small data set.
5. The twin neural network-based vehicle identity discrimination method according to claim 4, wherein the step of judging whether the vehicle is a target vehicle using the trained network in step S5 is:
s5.1: the captured picture and the picture stored under the corresponding license plate in the target vehicle library are fed into the trained network;
s5.2: the network computes the features of each picture and the distance between the two feature vectors, and finally outputs a probability value y;
s5.3: if the network output y is greater than 0.5, the two pictures show the same vehicle; otherwise they show different vehicles.
CN202010142310.2A 2020-03-03 2020-03-03 Vehicle identity discrimination method based on twin neural network Withdrawn CN111368729A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010142310.2A CN111368729A (en) 2020-03-03 2020-03-03 Vehicle identity discrimination method based on twin neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010142310.2A CN111368729A (en) 2020-03-03 2020-03-03 Vehicle identity discrimination method based on twin neural network

Publications (1)

Publication Number Publication Date
CN111368729A true CN111368729A (en) 2020-07-03

Family

ID=71206664

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010142310.2A Withdrawn CN111368729A (en) 2020-03-03 2020-03-03 Vehicle identity discrimination method based on twin neural network

Country Status (1)

Country Link
CN (1) CN111368729A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112472136A (en) * 2020-12-09 2021-03-12 南京航空航天大学 Cooperative analysis method based on twin neural network
CN112472136B (en) * 2020-12-09 2022-06-17 南京航空航天大学 Cooperative analysis method based on twin neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20200703)