CN112541453A - Luggage re-identification model training and luggage re-identification method - Google Patents

Luggage re-identification model training and luggage re-identification method

Info

Publication number
CN112541453A
CN112541453A CN202011511333.2A
Authority
CN
China
Prior art keywords
luggage
baggage
images
image
recognition model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011511333.2A
Other languages
Chinese (zh)
Inventor
陈曦
蓝志坚
钟国海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Richstone Technology Co ltd
Original Assignee
Guangzhou Richstone Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Richstone Technology Co ltd filed Critical Guangzhou Richstone Technology Co ltd
Priority to CN202011511333.2A priority Critical patent/CN112541453A/en
Publication of CN112541453A publication Critical patent/CN112541453A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a luggage re-identification model training and luggage re-identification method, which comprises the following steps: S1: acquiring video surveillance data, and extracting luggage images from the video surveillance data by using a target detection model; S2: labeling the luggage images to obtain a luggage image data set; S3: dividing the luggage image data set into a training set and a verification set; S4: training the luggage re-identification model with the training set, evaluating it with the verification set, and selecting the luggage re-identification model with the best prediction performance; S5: obtaining the similarity distribution among luggage images using the luggage re-identification model obtained in step S4, and determining a discrimination threshold from the distribution; S6: inputting any two luggage images into the luggage re-identification model to predict whether they come from the same piece of luggage. The invention overcomes the high cost and weak security of conventional luggage identification and tracking schemes, and offers strong portability and high identification accuracy.

Description

Luggage re-identification model training and luggage re-identification method
Technical Field
The invention relates to the technical field of target re-identification in computer vision, and in particular to a luggage re-identification model training and luggage re-identification method.
Background
Existing baggage tracking schemes typically use RFID (radio frequency identification) technology, and include the following steps: 1. an RFID card reader in each area reads the luggage information from the RFID tags in that area (the tags being attached to the luggage) and sends it to a data summarizing unit; 2. the data summarizing unit at each site aggregates the baggage information of its areas and sends it to the data processing system; 3. the data processing system generates track information from all the received luggage information and sends it to the information pushing system; 4. the information pushing system pushes the track information to a display system.
The existing baggage tracking scheme has the following disadvantages:
1. The cost is high: an RFID electronic tag costs dozens of times more than an ordinary barcode label, so large-scale use becomes prohibitively expensive; 2. security is weak: the main security problems faced by RFID are that tag information can be illegally read and maliciously tampered with; 3. technical standards are not unified, so portability is poor; 4. every piece of luggage must be fitted with an electronic tag.
Target re-identification, also known as object re-identification, is a technique that uses computer vision to determine whether a particular target appears in an image or video sequence. With the rapid development of video surveillance, target re-identification plays an increasingly important role in intelligent monitoring. It is mainly used for cross-camera tracking of targets, and is the most direct way to re-match a target after it leaves one camera's field of view. Target re-identification is widely applied in cross-scene target tracking, target retrieval and related fields. The typical application scenario of luggage re-identification is security screening: specific suitcases are flagged in advance by various technical means, and when a passenger carries a suitcase past another checkpoint, it is recognized by its appearance so that only the flagged suitcases need to be inspected. Compared with radio-frequency technology, the greatest advantage of luggage re-identification is that in the luggage tracking task the target luggage can be tracked with surveillance cameras alone, without manual tagging.
In the prior art, Chinese patent publication No. CN108830236A, published on November 16, 2018, discloses a pedestrian re-identification method based on depth features, which includes the following steps: S1, acquiring an AlexNet model and modifying its last fully-connected layer; S2, randomly initializing the parameters of the last fully-connected layer; S3, training with known labels and updating the parameters of the last fully-connected layer to obtain a neural network for pedestrian re-identification; S4, extracting the depth features of the image to be recognized and of the target image with the neural network; and S5, obtaining the similarity between the two images from the similarity of their depth features, and re-identifying the pedestrian accordingly. This scheme is built on a deep learning model that must be modified, and its computational cost is relatively large.
Disclosure of Invention
The invention provides a luggage re-identification model training and luggage re-identification method, aiming at overcoming the defects of high cost, weak security and difficult implementation of luggage tracking and identification in the prior art.
The primary objective of the present invention is to solve the above technical problems, and the technical solution of the present invention is as follows:
A luggage re-identification model training and luggage re-identification method comprises the following steps:
S1: acquiring video surveillance data, and extracting luggage images from the video surveillance data by using the target detection model YOLOv3;
S2: labeling the luggage images to obtain a luggage image data set;
S3: dividing the luggage image data set into a training set and a verification set;
S4: training the luggage re-identification model with the training set, evaluating it with the verification set, and selecting the luggage re-identification model with the best prediction performance;
S5: obtaining the similarity distribution among luggage images using the luggage re-identification model obtained in step S4, and determining a discrimination threshold from the distribution;
S6: inputting any two luggage images into the luggage re-identification model to predict whether they come from the same piece of luggage.
Further, in step S2, the luggage images are labeled to obtain the luggage image data set, and the label format is: baggage ID + the image's number among all images of the current piece of luggage.
Further, the proportions of the training set and the verification set are 75% and 25%, respectively.
Further, the verification set is divided into a query set and a gallery set; the query set is the set of images to be queried, and the gallery set is the library of images to be searched.
Further, during verification of the luggage re-identification model, all images in the gallery set are ranked for each luggage image in the query set.
Further, the luggage re-identification model is evaluated with a cumulative matching curve, computed as follows:
first, the top-K accuracy of each luggage image in the query set is calculated, where K is a positive integer;
the top-K accuracies over the query set are then summed and divided by the size of the query set to obtain the cumulative matching curve.
Further, the top-K accuracy is denoted AccK and is calculated as follows:
AccK = 1, if a matching image appears among the top K ranked gallery images; AccK = 0, otherwise.
Further, obtaining the similarity distribution among luggage images using the luggage re-identification model obtained in step S4 specifically includes: the similarity distribution of all luggage image pairs; that of image pairs from the same piece of luggage; and that of image pairs from different pieces of luggage.
Further, the specific process of inputting any two luggage images into the luggage re-identification model to predict whether they come from the same piece of luggage is as follows:
S601: inputting the two luggage images into the luggage re-identification model, which outputs two feature vectors;
S602: calculating the similarity of the two feature vectors;
S603: if the similarity score of the two feature vectors is smaller than or equal to a preset threshold, judging that the two luggage images come from the same piece of luggage; otherwise, judging that they do not.
Further, the preset threshold value is 0.8.
Compared with the prior art, the technical scheme of the invention has the beneficial effects that:
according to the luggage identification tracking method, the luggage image is extracted through the monitoring video data, the re-identification target is trained, and the re-identification target after training is used for carrying out luggage re-identification, so that the defects of high cost and low safety of the traditional luggage identification tracking scheme are overcome.
Drawings
Fig. 1 is a flowchart of the luggage re-identification model training and luggage re-identification method according to the present invention.
Fig. 2 is a schematic diagram of baggage image recognition according to the present invention.
Fig. 3 is a schematic diagram of baggage image annotation according to the present invention.
Fig. 4 is a histogram of the inter-baggage similarity distribution according to the present invention.
Fig. 5 is a flowchart of predicting two arbitrary baggage images by the baggage re-identification model according to the present invention.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those specifically described herein, and therefore the scope of the present invention is not limited by the specific embodiments disclosed below.
Example 1
As shown in fig. 1, a luggage re-identification model training and luggage re-identification method includes the following steps:
S1: acquiring video surveillance data, and extracting luggage images from the video surveillance data by using the target detection model YOLOv3;
It should be noted that the present invention can be applied to scenes requiring luggage identification and tracking, such as airports and high-speed rail stations. Video surveillance data is acquired from such scenes, and luggage images are then extracted with the target detection model; the extracted images include views of the same piece of luggage from different camera angles, as shown in the detected luggage image of fig. 2.
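As an illustrative sketch of this extraction step (the patent names YOLOv3 but gives no code), cropping detected luggage regions from a frame might look as follows; the detection tuple format `(x1, y1, x2, y2, confidence)` and the `crop_baggage` helper are assumptions for illustration, not part of the patent:

```python
import numpy as np

def crop_baggage(frame, detections, min_conf=0.5):
    """Crop detected luggage regions from one video frame.

    `frame` is an H x W x 3 array; `detections` is a list of
    (x1, y1, x2, y2, confidence) tuples such as a YOLOv3-style
    detector might emit for the luggage class.
    """
    crops = []
    h, w = frame.shape[:2]
    for x1, y1, x2, y2, conf in detections:
        if conf < min_conf:
            continue  # discard low-confidence boxes
        # clamp the box to the frame boundaries
        x1, y1 = max(0, int(x1)), max(0, int(y1))
        x2, y2 = min(w, int(x2)), min(h, int(y2))
        if x2 > x1 and y2 > y1:
            crops.append(frame[y1:y2, x1:x2])
    return crops
```

Each crop then serves as one candidate luggage image for the data set built in step S2.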
S2: labeling the luggage image to obtain a luggage image data set;
It should be noted that, after the luggage images are obtained in step S1, they may be cropped to a uniform size and then labeled to obtain the luggage image data set. In a specific embodiment, the label format is: baggage ID + the image's number among all images of the current piece of luggage, as shown in the luggage image labeling diagram of fig. 3.
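A minimal sketch of this labeling convention follows; the zero-padded filename layout `<ID>_<index>.jpg` is an assumed concrete encoding of "baggage ID + image number", since the patent does not fix an exact file-name format:

```python
def label_filename(baggage_id: int, image_index: int, ext: str = "jpg") -> str:
    """Encode the label 'baggage ID + image number among all images
    of the current piece of luggage' as a filename, e.g. baggage 7,
    image 3 -> '0007_03.jpg' (padding widths are illustrative)."""
    return f"{baggage_id:04d}_{image_index:02d}.{ext}"
```

Sorting such filenames groups every piece of luggage with all of its views, which is what the training and verification splits below rely on.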
S3: dividing a baggage image data set into a training set and a verification set;
in a specific embodiment, the training set proportion is 75%, and the verification set proportion is 25%, wherein the verification set is further divided into a query set and a galery set, the query set is an image set to be queried, and the galery set is a library image set to be queried. In the luggage re-identification model verification, for each pair of luggage images in the query set, all images in the galery set are sorted, and the specific sorting basis is the similarity with the pair of luggage images in the query set, for example, m pairs of images are in the query set, n pairs of images are in the galery set, and finally m arrangements are obtained, and the length of each arrangement is n.
S4: training the luggage re-identification model with the training set, evaluating it with the verification set, and selecting the model with the best prediction performance;
It should be noted that the luggage re-identification model of the present invention takes SE-ResNeXt as its backbone and performs metric learning in combination with BFENet, proposed by AI Lab for the pedestrian re-identification task. More specifically, the network structure of the luggage re-identification model is as follows:
First, SE-ResNeXt is a fusion of SENet and ResNeXt. ResNeXt adopts the Inception idea on top of ResNet to widen the network, while SENet uses two fully-connected layers in a bottleneck structure to model the correlation between channels: the feature dimension is first reduced and then restored to the original dimension. SE-ResNeXt is obtained by replacing the bottleneck block of SENet with the bottleneck block of ResNeXt.
Second, BFENet performs joint training by computing the triplet loss and the softmax loss over a global branch and a feature-erasing branch.
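The channel-recalibration step that SENet contributes to this backbone can be sketched in plain numpy; the weight matrices here stand in for learned parameters and are purely illustrative, not the patent's trained model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(feature_map, w1, w2):
    """Squeeze-and-Excitation channel recalibration as in SENet.

    feature_map: (C, H, W) array; w1: (C//r, C) reduction weights;
    w2: (C, C//r) expansion weights, r being the bottleneck ratio.
    """
    # squeeze: global average pool each channel to one scalar
    z = feature_map.mean(axis=(1, 2))            # shape (C,)
    # excitation: FC (reduce) -> ReLU -> FC (restore) -> sigmoid
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))    # shape (C,)
    # rescale each channel by its learned importance weight
    return feature_map * s[:, None, None]
```

This is exactly the "reduce the feature dimension first, then restore the original dimension" bottleneck described above, applied per channel.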
In one specific embodiment, the baggage re-identification model may be evaluated using a cumulative matching curve calculated by:
First, the top-K accuracy of each luggage image in the query set is calculated, where K is a positive integer; the top-K accuracies over the query set are then summed and divided by the size of the query set to obtain the cumulative matching curve. The top-K accuracy is denoted AccK and is calculated as follows:
AccK = 1, if a matching image appears among the top K ranked gallery images; AccK = 0, otherwise.
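The cumulative matching curve described above can be sketched as follows, assuming a precomputed query-by-gallery similarity matrix; the function name and inputs are illustrative:

```python
import numpy as np

def cmc_curve(sim, query_ids, gallery_ids, max_k=5):
    """Cumulative matching curve from a (num_query x num_gallery)
    similarity matrix: CMC(K) is the fraction of queries whose correct
    baggage ID appears among the K most similar gallery images."""
    sim = np.asarray(sim, dtype=float)
    order = np.argsort(-sim, axis=1)          # rank gallery by similarity
    ranked = np.asarray(gallery_ids)[order]   # gallery IDs in ranked order
    curve = []
    for k in range(1, max_k + 1):
        # AccK per query: 1 if the true ID is in the top K, else 0
        hits = [qid in ranked[i, :k] for i, qid in enumerate(query_ids)]
        curve.append(sum(hits) / len(query_ids))
    return curve
```

The curve is non-decreasing in K by construction, since enlarging the top-K window can only add hits.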
S5: obtaining the similarity distribution among luggage images using the luggage re-identification model obtained in step S4, and determining a discrimination threshold from the distribution. As shown in fig. 4, the similarity distribution among luggage images specifically includes: the similarity distribution of all luggage image pairs; that of image pairs from the same piece of luggage; and that of image pairs from different pieces of luggage. The discrimination threshold is determined from these three distributions, and is preferably set to 0.8.
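Collecting the same-luggage and different-luggage score distributions might be sketched as below. The patent does not name the pairwise score; cosine similarity over L2-normalised feature vectors is assumed here for illustration:

```python
import numpy as np
from itertools import combinations

def similarity_distributions(features, ids):
    """Split pairwise cosine similarities into a same-luggage and a
    different-luggage distribution, from which a discrimination
    threshold can be read off (e.g. where the histograms separate)."""
    feats = np.asarray(features, dtype=float)
    feats /= np.linalg.norm(feats, axis=1, keepdims=True)  # L2-normalise
    same, diff = [], []
    for i, j in combinations(range(len(ids)), 2):
        score = float(feats[i] @ feats[j])  # cosine similarity in [-1, 1]
        (same if ids[i] == ids[j] else diff).append(score)
    return same, diff
```

Plotting histograms of the two returned lists reproduces the kind of distribution comparison shown in fig. 4, and the threshold is chosen in the gap between them.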
S6: any two baggage images are input into the baggage re-identification model to predict whether the two baggage images are from the same baggage. As shown in fig. 5, the specific process of step S6 is:
S601: inputting the two luggage images into the luggage re-identification model, which outputs two feature vectors;
S602: calculating the similarity of the two feature vectors;
S603: if the similarity score of the two feature vectors is smaller than or equal to the preset threshold, judging that the two luggage images come from the same piece of luggage; otherwise, judging that they do not.
It should be understood that the above-described embodiments of the present invention are merely examples given to illustrate the invention clearly, and are not intended to limit its embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to enumerate all embodiments here. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the claims.

Claims (10)

1. A luggage re-identification model training and luggage re-identification method, characterized by comprising the following steps:
S1: acquiring video surveillance data, and extracting luggage images from the video surveillance data by using the target detection model YOLOv3;
S2: labeling the luggage images to obtain a luggage image data set;
S3: dividing the luggage image data set into a training set and a verification set;
S4: training the luggage re-identification model with the training set, evaluating it with the verification set, and selecting the luggage re-identification model with the best prediction performance;
S5: obtaining the similarity distribution among luggage images using the luggage re-identification model obtained in step S4, and determining a discrimination threshold from the distribution;
S6: inputting any two luggage images into the luggage re-identification model to predict whether they come from the same piece of luggage.
2. The method according to claim 1, wherein the luggage images are labeled in step S2 to obtain the luggage image data set, and the label format is: baggage ID + the image's number among all images of the current piece of luggage.
3. The method according to claim 1, wherein the proportions of the training set and the verification set are 75% and 25%, respectively.
4. The method according to claim 1, wherein the verification set is divided into a query set and a gallery set, the query set being the set of images to be queried and the gallery set being the library of images to be searched.
5. The method according to claim 4, wherein, during verification of the luggage re-identification model, all images in the gallery set are ranked for each luggage image in the query set.
6. The method according to claim 5, wherein the luggage re-identification model is evaluated with a cumulative matching curve, computed as follows:
first, the top-K accuracy of each luggage image in the query set is calculated, where K is a positive integer;
the top-K accuracies over the query set are then summed and divided by the size of the query set to obtain the cumulative matching curve.
7. The method according to claim 6, wherein the top-K accuracy is denoted AccK and is calculated as follows:
AccK = 1, if a matching image appears among the top K ranked gallery images; AccK = 0, otherwise.
8. The method according to claim 1, wherein obtaining the similarity distribution among luggage images using the luggage re-identification model obtained in step S4 specifically includes: the similarity distribution of all luggage image pairs; that of image pairs from the same piece of luggage; and that of image pairs from different pieces of luggage.
9. The method according to claim 1, wherein the specific process of inputting any two luggage images into the luggage re-identification model to predict whether they come from the same piece of luggage is as follows:
S601: inputting the two luggage images into the luggage re-identification model, which outputs two feature vectors;
S602: calculating the similarity of the two feature vectors;
S603: if the similarity score of the two feature vectors is smaller than or equal to a preset threshold, judging that the two luggage images come from the same piece of luggage; otherwise, judging that they do not.
10. The method according to claim 9, wherein the preset threshold is 0.8.
CN202011511333.2A 2020-12-18 2020-12-18 Luggage re-identification model training and luggage re-identification method Pending CN112541453A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011511333.2A CN112541453A (en) 2020-12-18 2020-12-18 Luggage re-identification model training and luggage re-identification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011511333.2A CN112541453A (en) 2020-12-18 2020-12-18 Luggage re-identification model training and luggage re-identification method

Publications (1)

Publication Number Publication Date
CN112541453A true CN112541453A (en) 2021-03-23

Family

ID=75019279

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011511333.2A Pending CN112541453A (en) 2020-12-18 2020-12-18 Luggage weight recognition model training and luggage weight recognition method

Country Status (1)

Country Link
CN (1) CN112541453A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112801050A (en) * 2021-03-29 2021-05-14 民航成都物流技术有限公司 Intelligent luggage tracking and monitoring method and system
CN113306136A (en) * 2021-04-20 2021-08-27 安徽工程大学 3D printer stacking confusion alarm system and method based on target detection

Citations (7)

Publication number Priority date Publication date Assignee Title
JP2007153511A (en) * 2005-12-02 2007-06-21 Advanced Airport Systems Technology Research Consortium Baggage tag recognizing system
CN109740541A (en) * 2019-01-04 2019-05-10 重庆大学 A kind of pedestrian weight identifying system and method
KR20190068000A (en) * 2017-12-08 2019-06-18 이의령 Person Re-identification System in Multiple Camera Environments
CN109948561A (en) * 2019-03-25 2019-06-28 广东石油化工学院 The method and system that unsupervised image/video pedestrian based on migration network identifies again
CN111639561A (en) * 2020-05-17 2020-09-08 西北工业大学 Unsupervised pedestrian re-identification method based on category self-adaptive clustering
CN111738143A (en) * 2020-06-19 2020-10-02 重庆邮电大学 Pedestrian re-identification method based on expectation maximization
CN111783576A (en) * 2020-06-18 2020-10-16 西安电子科技大学 Pedestrian re-identification method based on improved YOLOv3 network and feature fusion

Patent Citations (7)

Publication number Priority date Publication date Assignee Title
JP2007153511A (en) * 2005-12-02 2007-06-21 Advanced Airport Systems Technology Research Consortium Baggage tag recognizing system
KR20190068000A (en) * 2017-12-08 2019-06-18 이의령 Person Re-identification System in Multiple Camera Environments
CN109740541A (en) * 2019-01-04 2019-05-10 重庆大学 A kind of pedestrian weight identifying system and method
CN109948561A (en) * 2019-03-25 2019-06-28 广东石油化工学院 The method and system that unsupervised image/video pedestrian based on migration network identifies again
CN111639561A (en) * 2020-05-17 2020-09-08 西北工业大学 Unsupervised pedestrian re-identification method based on category self-adaptive clustering
CN111783576A (en) * 2020-06-18 2020-10-16 西安电子科技大学 Pedestrian re-identification method based on improved YOLOv3 network and feature fusion
CN111738143A (en) * 2020-06-19 2020-10-02 重庆邮电大学 Pedestrian re-identification method based on expectation maximization

Cited By (3)

Publication number Priority date Publication date Assignee Title
CN112801050A (en) * 2021-03-29 2021-05-14 民航成都物流技术有限公司 Intelligent luggage tracking and monitoring method and system
CN112801050B (en) * 2021-03-29 2021-07-13 民航成都物流技术有限公司 Intelligent luggage tracking and monitoring method and system
CN113306136A (en) * 2021-04-20 2021-08-27 安徽工程大学 3D printer stacking confusion alarm system and method based on target detection

Similar Documents

Publication Publication Date Title
Bai et al. Group-sensitive triplet embedding for vehicle reidentification
Rios-Cabrera et al. Efficient multi-camera vehicle detection, tracking, and identification in a tunnel surveillance application
D'Angelo et al. People re-identification in camera networks based on probabilistic color histograms
Xie et al. A robust license plate detection and character recognition algorithm based on a combined feature extraction model and BPNN
Zhang et al. Mining semantic context information for intelligent video surveillance of traffic scenes
CN109558823B (en) Vehicle identification method and system for searching images by images
CN101510257B (en) Human face similarity degree matching method and device
Dlagnekov et al. Recognizing cars
CN111709311A (en) Pedestrian re-identification method based on multi-scale convolution feature fusion
CN111860291A (en) Multi-mode pedestrian identity recognition method and system based on pedestrian appearance and gait information
KR102089298B1 (en) System and method for recognizing multinational license plate through generalized character sequence detection
CN112541453A (en) Luggage re-identification model training and luggage re-identification method
US11948366B2 (en) Automatic license plate recognition (ALPR) and vehicle identification profile methods and systems
Yamauchi et al. Relational HOG feature with wild-card for object detection
Gu et al. Embedded and real-time vehicle detection system for challenging on-road scenes
Balali et al. Video-based detection and classification of US traffic signs and mile markers using color candidate extraction and feature-based recognition
Wang Vehicle image detection method using deep learning in UAV video
Choi et al. A variety of local structure patterns and their hybridization for accurate eye detection
Shoaib et al. Augmenting the Robustness and efficiency of violence detection systems for surveillance and non-surveillance scenarios
Špaňhel et al. Vehicle fine-grained recognition based on convolutional neural networks for real-world applications
Ilayarajaa et al. Text recognition in moving vehicles using deep learning neural networks
Fritz et al. Attentive object detection using an information theoretic saliency measure
Halima et al. A comprehensive method for Arabic video text detection, localization, extraction and recognition
CN115273100A (en) Semi-supervised Chinese character image generation method based on semantic guide discriminator
Tang et al. Robust vehicle detection based on cascade classifier in traffic surveillance system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination