CN109961051A - Pedestrian re-identification method based on clustering and block feature extraction - Google Patents
Pedestrian re-identification method based on clustering and block feature extraction
- Publication number
- CN109961051A (application number CN201910243050.5A)
- Authority
- CN
- China
- Prior art keywords
- cluster
- image
- pedestrian
- loss function
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention discloses a pedestrian re-identification method based on clustering and block feature extraction. (1) Training images are clustered with K-means, and each cluster is fed separately into a DCGAN network to generate new images, expanding the original training set. (2) During deep learning, block feature extraction is applied to both the real data and the generated unlabeled data, while the proposed cluster label smoothing regularization loss function (CLS) assigns labels to the generated data during training; re-ranking (Re-ranking) is applied at test time to further improve re-identification performance. By combining the CLS loss function with block feature extraction, the invention addresses the limited-training-data and label-assignment problems of pedestrian re-identification while extracting effective image features block by block.
Description
Technical field
The invention belongs to the fields of digital image processing and computer vision. It relates to a pedestrian re-identification method, and in particular to a pedestrian re-identification method based on a generative adversarial network and a convolutional neural network.
Background art
Pedestrian re-identification is a computer vision technique for judging whether a specific pedestrian is present in an image or video sequence. It is widely regarded as a sub-problem of image retrieval: given an image of a pedestrian captured by one surveillance camera, retrieve images of the same pedestrian across other devices. Pedestrian re-identification has broad application prospects, including pedestrian retrieval, pedestrian tracking, street-corner event detection, and pedestrian action and behavior analysis.
In computer vision, the goal of pedestrian re-identification is, given a query pedestrian image, to find images of the same person in a gallery of pedestrian images captured by other, non-overlapping cameras. In surveillance video, camera resolution and shooting angle usually make high-quality face images unavailable, so where face recognition fails, pedestrian re-identification becomes a very important substitute technology. In recent years, with the development of deep learning, pedestrian re-identification has made enormous breakthroughs, and deep learning has become a research hotspot in computer vision. Because convolutional neural networks can learn how to extract features, they are better suited than conventional methods to practical engineering applications.
Deep models require large amounts of training data, but collecting a pedestrian re-identification dataset requires manually drawing bounding boxes and assigning identity labels, which makes acquisition costly. Methods that quickly generate additional re-identification training data with a GAN have therefore become a popular research direction.
Pedestrian re-identification is a difficult problem that faces many challenges, which fall into two categories: the first is the demand for large amounts of training data; the second is non-ideal scenes.
Research on pedestrian re-identification is broadly divided into feature-representation methods and methods based on generative adversarial networks. Feature-representation methods mainly study extracting robust, discriminative features to represent pedestrians and improve the re-identification model. The other approach generates new pedestrian images with a generative adversarial network (GAN) to expand the training data.
At present, pedestrian re-identification mainly suffers from the following deficiencies:
(1) Limited training data.
Judging from how current re-identification training data are collected, the spatio-temporal distribution of the collected data is very limited and local compared with real-world data. Meanwhile, compared with other vision tasks, re-identification datasets are very small. For example, the large-scale image recognition dataset ImageNet contains 1.25 million training images, whereas the datasets commonly used for pedestrian re-identification contain only some 30,000 pedestrian images. Acquiring re-identification training data is relatively difficult: it is hard to collect pedestrian data across time, weather, and multiple scenes, and privacy concerns further obstruct data collection.
(2) Labeling re-identification data is relatively difficult.
First, the labeling workload is enormous: whether measured in time or money, the labeling cost is very high. Second, labeling itself is sometimes very difficult; distinguishing two pedestrians of similar age and build who wear the same clothes in a video is hard.
(3) Non-ideal scenes.
Pedestrians appear in different poses, against complex backgrounds, under different illumination conditions, and from different shooting angles, all of which greatly hinder re-identification. Pedestrian images also suffer from misalignment, partial occlusion, and low image quality.
Summary of the invention
To solve the above technical problems, the present invention provides a pedestrian re-identification method based on a generative adversarial network and a convolutional neural network.
The technical scheme adopted by the invention is a pedestrian re-identification method based on clustering and block feature extraction, characterized by comprising the following steps:
Step 1: collect pedestrian images from surveillance cameras to obtain a pedestrian image library Pedestrian01, and apply k-means clustering to Pedestrian01;
Step 2: feed each cluster of images separately into a DCGAN network to generate unlabeled images;
Step 3: assign labels to the unlabeled images generated in Step 2 using the cluster label smoothing regularization loss function CLS, obtaining labeled generated images;
Step 4: merge the collected pedestrian image library with the generated images from Step 3 to expand it, obtaining a new pedestrian image library Pedestrian02;
Step 5: divide each image in Pedestrian02 horizontally into p blocks and feed each block separately into a CNN for feature extraction, obtaining the local features of the image;
Step 6: during training, jointly train the CNN with the cluster label smoothing regularization loss function CLS and the cross-entropy loss function;
Step 7: at test time, apply Re-ranking and output the pedestrian re-identification result.
Compared with existing algorithms, the notable advantages of the invention are:
(1) For the relative difficulty of acquiring re-identification training data, the invention expands the dataset with a DCGAN network. K-means clustering first groups images with similar features into the same class in the original dataset; the K classes of clustered images are then fed separately into the DCGAN network for training, yielding K classes of generated images. The clustering step makes the generated images more realistic.
(2) Compared with ordinary label smoothing regularization, the proposed cluster label smoothing regularization loss function (CLS) adapts better to labeling generated samples, because it excludes the classes in other clusters and distributes probability uniformly over the classes within the same cluster. This avoids concentrating probability on a single class and solves the label assignment and over-smoothing problems.
(3) For non-ideal scenes, the primary solution is to detect and match human parts; the block feature extraction method enables the network to learn more latent factors and enhances robustness.
(4) The pedestrian re-identification method based on clustering and block feature extraction provided by the invention can be applied to re-identification in complex scenes. It is highly portable across scene changes, stable, and fast; it effectively solves the small-dataset and label assignment problems and is practical.
Brief description of the drawings
Fig. 1 is a flowchart of an embodiment of the present invention.
Specific embodiment
To help those of ordinary skill in the art understand and implement the present invention, it is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the embodiments described herein are intended only to illustrate and explain the invention, not to limit it.
The method of the invention has two parts: (1) the images are clustered by K-means, each cluster is fed separately into a DCGAN network to generate images, and the original training set is thereby expanded; (2) during deep learning, block feature extraction is applied to both the real data and the generated unlabeled data, while the proposed cluster label smoothing regularization loss function (CLS) assigns labels to the data during training, and Re-ranking is applied at test time to further improve re-identification performance. The method combines the CLS loss function with block feature extraction, solving the limited-training-data and label-assignment problems of pedestrian re-identification while extracting effective image features block by block.
Referring to Fig. 1, the pedestrian re-identification method based on clustering and block feature extraction provided by the invention comprises the following steps:
Step 1: collect pedestrian images from surveillance cameras to obtain a pedestrian image library Pedestrian01, and apply k-means clustering to Pedestrian01.
The specific implementation of Step 1 comprises the following sub-steps:
Step 1.1: input the pedestrian images of Pedestrian01 captured by the surveillance cameras into a ResNet50 network and train it with the cross-entropy loss function L_CE = −Σ_k ŷ_k log y_k to obtain a feature extraction model; here y is the actual network output and ŷ is the desired output.
Step 1.2: input the training dataset into the feature extraction model of Step 1.1 and extract the feature map vectors x_n of the last convolutional layer.
Step 1.3: randomly select K feature map objects, each representing the initial mean of a cluster, also called the cluster center μ_j; K is a positive integer.
Step 1.4: for each feature map vector x_n, compute the Euclidean distance to every cluster center μ_j and assign it the label of the nearest center, i.e. C_i = argmin_{j∈[1,m]} ||x_n − μ_j||; the sample x is then added to the corresponding cluster, C_i = C_i ∪ {x}, where m is the number of clusters.
Step 1.5: update the cluster centers: for each cluster C_j compute the new center μ'_j = (1/|C_j|) Σ_{x∈C_j} x; if μ'_j ≠ μ_j, update μ_j to μ'_j, otherwise leave μ_j unchanged.
Step 1.6: repeat Steps 1.4 and 1.5 until no μ_j changes; the feature map vectors are finally divided into the clusters C = {C_1, C_2, ..., C_m}.
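The iteration of Steps 1.3 to 1.6 can be sketched as follows. This is a minimal illustration in Python/NumPy, not the patented implementation; the toy feature vectors, the cluster count, and the convergence check are assumptions made for the example.

```python
import numpy as np

def kmeans(features, k, max_iter=100, seed=0):
    """Minimal k-means over feature vectors (Steps 1.3-1.6)."""
    rng = np.random.default_rng(seed)
    # Step 1.3: pick K feature vectors at random as initial cluster centers.
    centers = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(max_iter):
        # Step 1.4: assign each vector to the nearest center (Euclidean distance).
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 1.5: recompute each center as the mean of its assigned vectors.
        new_centers = np.array([
            features[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        # Step 1.6: stop once no center moves.
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# Toy feature vectors: two well-separated blobs.
feats = np.vstack([np.zeros((5, 2)), np.ones((5, 2)) * 10.0])
labels, centers = kmeans(feats, k=2)
```

In the method itself, `features` would be the ResNet50 feature map vectors x_n extracted in Step 1.2.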
Step 2: feed each cluster of images separately into the DCGAN network to generate unlabeled images.
Step 3: assign labels to the unlabeled images generated in Step 2 using the cluster label smoothing regularization loss function CLS, obtaining labeled generated images.
Step 4: merge the collected pedestrian image library with the generated images from Step 3 to expand it, obtaining a new pedestrian image library Pedestrian02.
The specific implementation of Step 4 comprises the following sub-steps:
Step 4.1: input the feature vectors of each cluster separately into the DCGAN network, which consists of a generative model G (which learns the data distribution) and a discriminative model D (which predicts whether an input is real or produced by G). G is a simple neural network that takes a feature vector as input and generates an image as output; D is also a simple neural network that takes an image as input and outputs a confidence score.
Step 4.2: train the DCGAN network with the loss function L_GAN to obtain generated images:
L_GAN = log D(x) + log(1 − D(G(z)))
where D(x) is a confidence score with value in [0, 1] and G(z) is a generated image.
Step 4.3: mix all generated images together with the collected pedestrian image library Pedestrian01 to form the new pedestrian image library Pedestrian02.
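As a numeric illustration of L_GAN (not the patented training code), the loss value can be computed directly from the discriminator's confidence scores; the score values below are made up for the example.

```python
import math

def gan_loss(d_real, d_fake):
    """L_GAN = log D(x) + log(1 - D(G(z))).
    d_real is D's confidence on a real image x; d_fake is D's confidence
    on a generated image G(z); both lie in (0, 1). The discriminator is
    trained to increase this quantity, while the generator is trained to
    decrease its second term."""
    return math.log(d_real) + math.log(1.0 - d_fake)

# A discriminator that separates well: real scored 0.9, generated scored 0.1.
confident = gan_loss(0.9, 0.1)
# A fooled discriminator: both scored 0.5, i.e. log(0.5) + log(0.5).
fooled = gan_loss(0.5, 0.5)
```

A higher value means the discriminator distinguishes real from generated images more confidently.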
Step 5: divide each image in Pedestrian02 horizontally into p blocks and feed each block separately into the CNN for feature extraction, obtaining the local features of the image.
The specific implementation of Step 5 comprises the following sub-steps:
Step 5.1: input the newly generated training set into a ResNet50 network; the forward convolution produces a 3D tensor T, which is divided horizontally into p horizontal stripes; p is a positive integer.
Step 5.2: spatially down-sample the tensor T into p horizontal stripes with a pooling layer, and average all column vectors within the same stripe into a single part-level column vector g.
Step 5.3: reduce the dimension of each vector g with a convolutional layer of 1 × 1 kernel size, and feed each dimension-reduced column vector h into its own classifier.
Step 5.4: concatenate the p vectors h to form the final descriptor of the input image, obtaining the local image features.
Step 5.5: during training, each classifier predicts the identity of the input image; using a multi-loss optimization strategy, the p softmax classifiers are trained with the cross-entropy loss function of Step 1.1.
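The striping and pooling of Steps 5.1 to 5.4 can be sketched with NumPy. The tensor shape, the value p = 6, and the projection matrix standing in for the 1 × 1 convolution are illustrative assumptions, not values from the patent.

```python
import numpy as np

def part_features(tensor_t, p, proj):
    """Split a C x H x W activation tensor into p horizontal stripes and
    build the part-level descriptor of Steps 5.2-5.4."""
    c, h, w = tensor_t.shape
    assert h % p == 0, "illustration assumes H divisible by p"
    # Step 5.2: average-pool each horizontal stripe into one vector g.
    stripes = tensor_t.reshape(c, p, h // p, w)
    g = stripes.mean(axis=(2, 3)).T          # shape (p, c): p part-level vectors
    # Step 5.3: 1x1-conv-like dimension reduction of each g into h.
    h_vecs = g @ proj                        # shape (p, d)
    # Step 5.4: concatenate the p vectors h into the final descriptor.
    return h_vecs, h_vecs.reshape(-1)

rng = np.random.default_rng(0)
t = rng.standard_normal((2048, 24, 8))       # e.g. a ResNet50 activation map
w = rng.standard_normal((2048, 256)) * 0.01  # stands in for the 1x1 convolution
h_vecs, descriptor = part_features(t, p=6, proj=w)
```

Each of the p rows of `h_vecs` would feed its own softmax classifier in Step 5.5.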
Step 6: during training, jointly train the CNN with the cluster label smoothing regularization loss function CLS and the cross-entropy loss function.
The specific implementation of Step 6 comprises the following sub-steps:
Step 6.1: construct the cluster label smoothing regularization loss function CLS.
To solve the label assignment problem for the new data, the invention devises the cluster label smoothing regularization loss function (CLS), based on the similarity of samples within a cluster, to train the CNN model.
First, the label smoothing regularization for outliers (LSRO) loss function is given:
L_LSRO = −(1 − T) log p(t | x) − (T / K) Σ_{k=1}^{K} log p(k | x)
where p(k | x) = exp(z_k) / Σ_{i=1}^{K} exp(z_i) is the predicted probability that input x belongs to class k, z_i and z_k are the unnormalized network outputs for classes i and k, K is the number of sample classes, and t is the ground-truth class; T is an indicator: for real training images T = 0, for generated images T = 1.
Given a generated image x from the cluster C_i composed of N_i classes, i ∈ [1, m], the one-hot class labeling probability q_g(k | x) of image x is obtained by 0-1 encoding all k ∈ {1, 2, 3, ..., K}: the code is 1 when class k belongs to cluster C_i and 0 otherwise, and each code is divided by the total number of classes K, giving the cluster class normalized label of a generated sample:
q_g(k | x) = 1/K if k ∈ C_i, and q_g(k | x) = 0 otherwise.
To obtain a valid ground-truth distribution with Σ_k q(k | x) = 1, and since the sample comes from cluster C_i, whose classes share similar features, the uniform distribution 1/N_i is used to express the probability that the generated sample x belongs to each class in C_i. Let z_{k,x} denote the unnormalized log-probability and N_i the total number of classes in cluster C_i; the normalized network output z'_k restricts the softmax to the classes of C_i, giving the partial prediction probability
p'(k | x) = exp(z'_k) / Σ_{j∈C_i} exp(z'_j).
Substituting q'_g(k | x) and p'(k | x) into the LSRO loss function yields CLS:
L_CLS = −(1 − T) log p(t | x) − (T / N_i) Σ_{k∈C_i} log p'(k | x).
Compared with the LSRO loss function, CLS can also be written via the existing label smoothing regularization (LSR), whose real-sample label is
q'(k) = (1 − ε) δ_{k,t} + ε / K
where δ_{k,t} = q_g(k | x) is 1 when k = t and 0 otherwise. In LSR, ε = 0.1; for k ≠ t, the label that LSRO assigns to a generated sample corresponds to ε = 1, i.e. a uniform distribution over all K classes, whereas the proposed CLS spreads the probability mass uniformly over only the N_i classes of cluster C_i.
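The difference between the LSRO and CLS label distributions described above can be shown numerically. The class count K and the cluster membership below are invented for the example, and the code builds only the target distributions, not the full loss.

```python
import numpy as np

def lsro_target(K):
    """LSRO: a generated sample is labeled uniformly over all K classes."""
    return np.full(K, 1.0 / K)

def cls_target(K, cluster_classes):
    """CLS: probability mass is spread uniformly only over the N_i classes
    of the cluster the generated sample came from; other classes get 0."""
    q = np.zeros(K)
    q[list(cluster_classes)] = 1.0 / len(cluster_classes)
    return q

K = 10                          # total identity classes (made-up)
cluster = [2, 5, 7]             # classes in cluster C_i (made-up), N_i = 3
q_lsro = lsro_target(K)         # every entry 1/10
q_cls = cls_target(K, cluster)  # 1/3 on classes 2, 5, 7; 0 elsewhere
```

Both distributions sum to 1, but CLS assigns zero probability to classes outside the sample's cluster, which is the over-smoothing fix described above.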
Step 6.2: train the CNN model with the cluster label smoothing regularization loss function CLS combined with the cross-entropy loss function.
Step 6.3: at test time, input the collected pedestrian image library Pedestrian01 into the CNN model trained in Step 6.2 to obtain an initial ranked list.
Step 7: at test time, apply Re-ranking and output the pedestrian re-identification result.
The specific implementation of Step 7 comprises the following sub-steps:
Step 7.1: the method based on k-reciprocal encoding is used to re-order the pictures in the collected pedestrian image library Pedestrian01 that need to be checked, improving the recognition result. The goal is to re-order the initial ranked list of Step 6.3 so that more positive samples appear near the front of the list. First, the k-nearest neighbors (k-nn), i.e. the top k samples of the ranked list, are defined:
N(p, k) = {g_1, g_2, ..., g_k}, |N(p, k)| = k
where p is the query image, the g_i are gallery images ordered by increasing Mahalanobis distance between feature vectors, and N(·) denotes a set. Then the k-reciprocal nearest neighbors (k-rnn) are defined:
R(p, k) = {g_i | g_i ∈ N(p, k) ∧ p ∈ N(g_i, k)}
where g_i is the i-th image in Pedestrian01, i.e. the set of samples such that p and g_i each lie within the other's k nearest neighbors. However, because of variations such as illumination, pose, and viewpoint, positive samples may be pushed out of the k-nn list, so a more robust expanded k-rnn set is used: for each sample q in the original set R(p, k), its own k-rnn set is found; when the number of overlapping samples reaches a certain condition, it is merged into the expanded set R*(p, k). In this way, positive samples that were not in the original set are brought back.
Step 7.2: with the k-rnn sets obtained in Step 7.1, compute the Jaccard distance between two images:
d_J(p, g_i) = 1 − |R*(p, k) ∩ R*(g_i, k)| / |R*(p, k) ∪ R*(g_i, k)|
where p is the query image and g_i is the i-th image in Pedestrian01.
Step 7.3: re-rank the initial results of Step 6.3 according to the Jaccard distances of Step 7.2, and finally output the pedestrian re-identification result.
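Steps 7.1 and 7.2 can be sketched as follows. Plain Euclidean distance replaces the Mahalanobis distance, the expansion of R(p, k) into R*(p, k) is omitted, and the gallery is a toy example, so this is only an illustration of k-reciprocal sets and the Jaccard distance, not the patented procedure.

```python
import numpy as np

def knn(dist_row, k):
    """Indices of the k nearest items for one query's distance row."""
    return set(np.argsort(dist_row)[:k])

def k_reciprocal(dists, i, k):
    """R(i, k): items j such that i and j are in each other's k-nn."""
    return {j for j in knn(dists[i], k) if i in knn(dists[j], k)}

def jaccard_distance(r_p, r_g):
    """d_J = 1 - |intersection| / |union| of two k-reciprocal sets."""
    if not r_p and not r_g:
        return 1.0
    return 1.0 - len(r_p & r_g) / len(r_p | r_g)

# Toy 1-D features: items 0-2 form one identity group, 3-4 another.
feats = np.array([[0.0], [0.1], [0.2], [5.0], [5.1]])
dists = np.abs(feats - feats.T)
r0 = k_reciprocal(dists, 0, k=3)
r1 = k_reciprocal(dists, 1, k=3)
r3 = k_reciprocal(dists, 3, k=3)
d_same = jaccard_distance(r0, r1)   # small: same group
d_diff = jaccard_distance(r0, r3)   # large: different groups
```

In Step 7.3 the gallery would be re-sorted by this Jaccard distance to produce the final ranking.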
It should be understood that the parts not elaborated in this specification belong to the prior art.
It should be understood that the above description of the preferred embodiments is relatively detailed and therefore should not be regarded as limiting the scope of patent protection of the invention. Those of ordinary skill in the art may, under the inspiration of the invention and without departing from the scope protected by the claims, make replacements or variations, all of which fall within the protection scope of the invention; the claimed scope of the invention is determined by the appended claims.
Claims (6)
1. A pedestrian re-identification method based on clustering and block feature extraction, characterized by comprising the following steps:
Step 1: collect pedestrian images from surveillance cameras to obtain a pedestrian image library Pedestrian01, and apply k-means clustering to Pedestrian01;
Step 2: feed each cluster of images separately into a DCGAN network to generate unlabeled images;
Step 3: assign labels to the unlabeled images generated in Step 2 using the cluster label smoothing regularization loss function CLS, obtaining labeled generated images;
Step 4: merge the collected pedestrian image library with the generated images from Step 3 to expand it, obtaining a new pedestrian image library Pedestrian02;
Step 5: divide each image in Pedestrian02 horizontally into p blocks and feed each block separately into a CNN for feature extraction, obtaining the local features of the image;
Step 6: during training, jointly train the CNN with the cluster label smoothing regularization loss function CLS and the cross-entropy loss function;
Step 7: at test time, apply Re-ranking and output the pedestrian re-identification result.
2. The pedestrian re-identification method based on clustering and block feature extraction according to claim 1, characterized in that the specific implementation of Step 1 comprises the following sub-steps:
Step 1.1: input the pedestrian image library Pedestrian01 captured by the surveillance cameras into a ResNet50 network and train it with the cross-entropy loss function L_CE = −Σ_k ŷ_k log y_k to obtain a feature extraction model, where y is the actual network output and ŷ is the desired output;
Step 1.2: input the training dataset into the feature extraction model of Step 1.1 and extract the feature map vectors x_n of the last convolutional layer;
Step 1.3: randomly select K feature map objects, each representing the initial mean of a cluster, also called the cluster center μ_j, where K is a positive integer;
Step 1.4: for each feature map vector x_n, compute the Euclidean distance to every cluster center μ_j and assign it the label of the nearest center, i.e. C_i = argmin_{j∈[1,m]} ||x_n − μ_j||; the sample x is then added to the corresponding cluster, C_i = C_i ∪ {x}, where m is the number of clusters;
Step 1.5: update the cluster centers: for each cluster C_j compute the new center μ'_j = (1/|C_j|) Σ_{x∈C_j} x; if μ'_j ≠ μ_j, update μ_j to μ'_j, otherwise leave μ_j unchanged;
Step 1.6: repeat Steps 1.4 and 1.5 until no μ_j changes, finally dividing the feature map vectors into the clusters C = {C_1, C_2, ..., C_m}.
3. The pedestrian re-identification method based on clustering and block feature extraction according to claim 1, characterized in that the specific implementation of Step 4 comprises the following sub-steps:
Step 4.1: input the feature vectors of each cluster separately into the DCGAN network, which consists of a generative model G and a discriminative model D; G is a simple neural network that takes a feature vector as input and generates an image as output, and D is also a simple neural network that takes an image as input and outputs a confidence score;
Step 4.2: train the DCGAN network with the loss function L_GAN to obtain generated images:
L_GAN = log D(x) + log(1 − D(G(z)))
where D(x) is a confidence score with value in [0, 1] and G(z) is a generated image;
Step 4.3: mix all generated images together with the collected pedestrian image library Pedestrian01 to form the new pedestrian image library Pedestrian02.
4. The pedestrian re-identification method based on clustering and block feature extraction according to claim 1, characterized in that the specific implementation of Step 5 comprises the following sub-steps:
Step 5.1: input the newly generated pedestrian image library Pedestrian02 into a ResNet50 network; the forward convolution produces a 3D tensor T, which is divided horizontally into p horizontal stripes, where p is a positive integer;
Step 5.2: spatially down-sample the tensor T into p horizontal stripes with a pooling layer, and average all column vectors within the same stripe into a single part-level column vector g;
Step 5.3: reduce the dimension of each vector g with a convolutional layer of 1 × 1 kernel size, and feed each dimension-reduced column vector h into its own classifier;
Step 5.4: concatenate the p vectors h to form the final descriptor of the input image, obtaining the local image features;
Step 5.5: during training, each classifier predicts the identity of the input image; using a multi-loss optimization strategy, the p softmax classifiers are trained with the cross-entropy loss function of Step 1.1.
5. The pedestrian re-identification method based on clustering and block feature extraction according to claim 1, characterized in that the specific implementation of Step 6 comprises the following sub-steps:
Step 6.1: construct the cluster label smoothing regularization loss function CLS;
first, the label smoothing regularization for outliers (LSRO) loss function is given:
L_LSRO = −(1 − T) log p(t | x) − (T / K) Σ_{k=1}^{K} log p(k | x)
where p(k | x) = exp(z_k) / Σ_{i=1}^{K} exp(z_i) is the predicted probability that input x belongs to class k, z_i and z_k are the unnormalized network outputs, K is the number of sample classes, and T is an indicator: for collected pedestrian images T = 0, for generated images T = 1;
given a generated image x from the cluster C_i composed of N_i classes, i ∈ [1, m], the one-hot class labeling probability q_g(k | x) of image x is obtained by 0-1 encoding all k ∈ {1, 2, 3, ..., K} (the code is 1 when k belongs to cluster C_i and 0 otherwise) and dividing each code by the total number of classes K, giving the cluster class normalized label of a generated sample:
q_g(k | x) = 1/K if k ∈ C_i, and q_g(k | x) = 0 otherwise;
the uniform distribution 1/N_i expresses the probability that the generated sample x belongs to each class in C_i; with z_{k,x} the unnormalized log-probability and N_i the total number of classes in cluster C_i, the normalized network output z'_k restricts the softmax to the classes of C_i, giving the partial prediction probability p'(k | x) = exp(z'_k) / Σ_{j∈C_i} exp(z'_j);
substituting q'_g(k | x) and p'(k | x) into the LSRO loss function yields CLS:
L_CLS = −(1 − T) log p(t | x) − (T / N_i) Σ_{k∈C_i} log p'(k | x);
compared with the LSRO loss function, CLS can also be written via the existing label smoothing regularization (LSR), whose real-sample label is q'(k) = (1 − ε) δ_{k,t} + ε / K, where δ_{k,t} = q_g(k | x); in LSR ε = 0.1, while for k ≠ t the label LSRO assigns to a generated sample corresponds to ε = 1;
Step 6.2: train the CNN model with the cluster label smoothing regularization loss function CLS combined with the cross-entropy loss function;
Step 6.3: at test time, input the collected pedestrian image library Pedestrian01 into the CNN model trained in Step 6.2 to obtain an initial ranked list.
6. the pedestrian's recognition methods again extracted described in -5 any one based on cluster and blocking characteristic according to claim 1,
It is characterized in that, the specific implementation of step 7 includes following sub-step:
Step 7.1: by the way of based on k order derivative coding, to the pedestrian image Pedestrian01 for the acquisition that needs detect
In picture reorder so that recognition result is promoted;
Firstly, defining k rank neighbour k-nn, the i.e. preceding k sample of sorted lists:
WhereinThe mahalanobis distance of k is arrived for feature vector 0, p is query image, and k is the feature vector of k rank inverse coding, N ()
For set;
Then define the k-reciprocal nearest neighbours (k-rnn):

R(p, k) = {g_i | g_i ∈ N(p, k) and p ∈ N(g_i, k)}

where g_i is the i-th image in Pedestrian01, so R(p, k) is the set of samples for which p and g_i each lie within the other's k-nearest neighbours. However, owing to variations such as illumination, pose and viewpoint, positive samples may be pushed out of the k-nn list, so a more robust expanded k-rnn set is adopted:

R*(p, k) = R(p, k) ∪ R(q, k/2) for each q ∈ R(p, k) with |R(p, k) ∩ R(q, k/2)| ≥ (2/3)|R(q, k/2)|

That is, for each sample q in the original set R(p, k), its k-rnn set R(q, k/2) is found; when the number of overlapping samples reaches the above condition, R(q, k/2) is merged into R*(p, k). In this way, positive samples that were not originally in R(p, k) are brought back;
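A minimal sketch of the k-nn, k-rnn and expanded k-rnn sets of step 7.1, assuming a precomputed pairwise distance matrix (Mahalanobis or otherwise). All function names are illustrative, not from the patent.

```python
import numpy as np

def k_nn(dist, p, k):
    """N(p, k): the top-k neighbours of query index p in a pairwise distance matrix."""
    order = np.argsort(dist[p])
    order = order[order != p]            # drop the query itself from its own list
    return set(order[:k].tolist())

def k_reciprocal(dist, p, k):
    """R(p, k): neighbours of p that also have p among their own k nearest neighbours."""
    return {g for g in k_nn(dist, p, k) if p in k_nn(dist, g, k)}

def expanded_reciprocal(dist, p, k):
    """R*(p, k): merge R(q, k/2) of each q in R(p, k) when the overlap with
    R(p, k) reaches 2/3 of |R(q, k/2)|, pulling back positives pushed out of
    the k-nn list by illumination/pose/viewpoint changes."""
    R = k_reciprocal(dist, p, k)
    R_star = set(R)
    half = max(k // 2, 1)
    for q in R:
        Rq = k_reciprocal(dist, q, half)
        if Rq and len(R & Rq) >= (2.0 / 3.0) * len(Rq):
            R_star |= Rq
    return R_star
```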
Step 7.2: using the k-rnn sets obtained in step 7.1, compute the Jaccard distance between two images:

d_J(p, g_i) = 1 - |R*(p, k) ∩ R*(g_i, k)| / |R*(p, k) ∪ R*(g_i, k)|

where p is the query image, g_i is the i-th image in Pedestrian01, and k is the neighbourhood size of the k-reciprocal encoding;
Step 7.3: re-sort the initial results of step 6.3 according to the Jaccard distances obtained in step 7.2, and output the final pedestrian re-identification result.
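Steps 7.2 and 7.3 can be sketched as follows, assuming the expanded neighbour sets R* have already been computed and are available as Python sets; the function and variable names are illustrative only.

```python
def jaccard_distance(set_p, set_g):
    """d_J(p, g) = 1 - |R*(p,k) ∩ R*(g,k)| / |R*(p,k) ∪ R*(g,k)| (step 7.2)."""
    union = set_p | set_g
    if not union:
        return 1.0                       # no neighbours at all: maximally distant
    return 1.0 - len(set_p & set_g) / len(union)

def rerank(initial_order, neighbour_sets, query_set):
    """Step 7.3: re-sort the initial ranking list by Jaccard distance to the query.

    initial_order: gallery indices from the initial ranking (step 6.3).
    neighbour_sets: maps each gallery index to its expanded set R*(g, k).
    query_set: the query's expanded set R*(p, k).
    """
    return sorted(initial_order,
                  key=lambda g: jaccard_distance(query_set, neighbour_sets[g]))
```

Gallery images whose expanded neighbour sets overlap heavily with the query's are promoted, which is what brings hard positives back toward the top of the list.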
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910243050.5A CN109961051B (en) | 2019-03-28 | 2019-03-28 | Pedestrian re-identification method based on clustering and block feature extraction |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910243050.5A CN109961051B (en) | 2019-03-28 | 2019-03-28 | Pedestrian re-identification method based on clustering and block feature extraction |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109961051A true CN109961051A (en) | 2019-07-02 |
CN109961051B CN109961051B (en) | 2022-11-15 |
Family
ID=67025138
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910243050.5A Active CN109961051B (en) | 2019-03-28 | 2019-03-28 | Pedestrian re-identification method based on clustering and block feature extraction |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109961051B (en) |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110555390A (en) * | 2019-08-09 | 2019-12-10 | 厦门市美亚柏科信息股份有限公司 | pedestrian re-identification method, device and medium based on semi-supervised training mode |
CN110619264A (en) * | 2019-07-30 | 2019-12-27 | 长江大学 | UNet++ based microseism effective signal identification method and device |
CN110633751A (en) * | 2019-09-17 | 2019-12-31 | 上海眼控科技股份有限公司 | Training method of car logo classification model, car logo identification method, device and equipment |
CN110688966A (en) * | 2019-09-30 | 2020-01-14 | 华东师范大学 | Semantic-guided pedestrian re-identification method |
CN110728238A (en) * | 2019-10-12 | 2020-01-24 | 安徽工程大学 | Personnel re-detection method of fusion type neural network |
CN110796026A (en) * | 2019-10-10 | 2020-02-14 | 湖北工业大学 | Pedestrian re-identification method based on global feature stitching |
CN110968735A (en) * | 2019-11-25 | 2020-04-07 | 中国矿业大学 | Unsupervised pedestrian re-identification method based on spherical similarity hierarchical clustering |
CN111274992A (en) * | 2020-02-12 | 2020-06-12 | 北方工业大学 | Cross-camera pedestrian re-identification method and system |
CN111461002A (en) * | 2020-03-31 | 2020-07-28 | 华南理工大学 | Sample processing method for thermal imaging pedestrian detection |
CN111612100A (en) * | 2020-06-04 | 2020-09-01 | 商汤集团有限公司 | Object re-recognition method and device, storage medium and computer equipment |
CN111666843A (en) * | 2020-05-25 | 2020-09-15 | 湖北工业大学 | Pedestrian re-identification method based on global feature and local feature splicing |
CN112070010A (en) * | 2020-09-08 | 2020-12-11 | 长沙理工大学 | Pedestrian re-recognition method combining multi-loss dynamic training strategy to enhance local feature learning |
CN112488035A (en) * | 2020-12-14 | 2021-03-12 | 南京信息工程大学 | Cross-domain pedestrian re-identification method based on antagonistic neural network |
CN112597871A (en) * | 2020-12-18 | 2021-04-02 | 中山大学 | Unsupervised vehicle re-identification method and system based on two-stage clustering and storage medium |
CN112784674A (en) * | 2020-11-13 | 2021-05-11 | 北京航空航天大学 | Cross-domain identification method of key personnel search system based on class center self-adaption |
CN113032553A (en) * | 2019-12-09 | 2021-06-25 | 富士通株式会社 | Information processing apparatus, information processing method, and computer program |
CN113096080A (en) * | 2021-03-30 | 2021-07-09 | 四川大学华西第二医院 | Image analysis method and system |
CN113239782A (en) * | 2021-05-11 | 2021-08-10 | 广西科学院 | Pedestrian re-identification system and method integrating multi-scale GAN and label learning |
CN113378620A (en) * | 2021-03-31 | 2021-09-10 | 中交第二公路勘察设计研究院有限公司 | Cross-camera pedestrian re-identification method in surveillance video noise environment |
CN113420639A (en) * | 2021-06-21 | 2021-09-21 | 南京航空航天大学 | Method and device for establishing near-ground infrared target data set based on generation countermeasure network |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103258212A (en) * | 2013-04-03 | 2013-08-21 | 中国科学院东北地理与农业生态研究所 | Semi-supervised integrated remote-sensing image classification method based on affinity propagation clustering |
CN108764281A (en) * | 2018-04-18 | 2018-11-06 | 华南理工大学 | Image classification method based on semi-supervised self-paced learning with a cross-task deep network |
EP3399465A1 (en) * | 2017-05-05 | 2018-11-07 | Dassault Systèmes | Forming a dataset for fully-supervised learning |
2019-03-28: CN CN201910243050.5A patent/CN109961051B/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103258212A (en) * | 2013-04-03 | 2013-08-21 | 中国科学院东北地理与农业生态研究所 | Semi-supervised integrated remote-sensing image classification method based on affinity propagation clustering |
EP3399465A1 (en) * | 2017-05-05 | 2018-11-07 | Dassault Systèmes | Forming a dataset for fully-supervised learning |
CN108764281A (en) * | 2018-04-18 | 2018-11-06 | 华南理工大学 | Image classification method based on semi-supervised self-paced learning with a cross-task deep network |
Cited By (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110619264B (en) * | 2019-07-30 | 2023-06-16 | 长江大学 | Method and device for identifying microseism effective signals based on UNet++ |
CN110619264A (en) * | 2019-07-30 | 2019-12-27 | 长江大学 | UNet++ based microseism effective signal identification method and device |
CN110555390A (en) * | 2019-08-09 | 2019-12-10 | 厦门市美亚柏科信息股份有限公司 | pedestrian re-identification method, device and medium based on semi-supervised training mode |
CN110633751A (en) * | 2019-09-17 | 2019-12-31 | 上海眼控科技股份有限公司 | Training method of car logo classification model, car logo identification method, device and equipment |
CN110688966B (en) * | 2019-09-30 | 2024-01-09 | 华东师范大学 | Semantic guidance pedestrian re-recognition method |
CN110688966A (en) * | 2019-09-30 | 2020-01-14 | 华东师范大学 | Semantic-guided pedestrian re-identification method |
CN110796026A (en) * | 2019-10-10 | 2020-02-14 | 湖北工业大学 | Pedestrian re-identification method based on global feature stitching |
CN110728238A (en) * | 2019-10-12 | 2020-01-24 | 安徽工程大学 | Personnel re-detection method of fusion type neural network |
CN110968735A (en) * | 2019-11-25 | 2020-04-07 | 中国矿业大学 | Unsupervised pedestrian re-identification method based on spherical similarity hierarchical clustering |
CN113032553A (en) * | 2019-12-09 | 2021-06-25 | 富士通株式会社 | Information processing apparatus, information processing method, and computer program |
CN111274992A (en) * | 2020-02-12 | 2020-06-12 | 北方工业大学 | Cross-camera pedestrian re-identification method and system |
CN111461002A (en) * | 2020-03-31 | 2020-07-28 | 华南理工大学 | Sample processing method for thermal imaging pedestrian detection |
CN111461002B (en) * | 2020-03-31 | 2023-05-26 | 华南理工大学 | Sample processing method for thermal imaging pedestrian detection |
CN111666843A (en) * | 2020-05-25 | 2020-09-15 | 湖北工业大学 | Pedestrian re-identification method based on global feature and local feature splicing |
TWI780567B (en) * | 2020-06-04 | 2022-10-11 | 大陸商商湯集團有限公司 | Object re-recognition method, storage medium and computer equipment |
JP2022548187A (en) * | 2020-06-04 | 2022-11-17 | シャンハイ センスタイム インテリジェント テクノロジー カンパニー リミテッド | Target re-identification method and device, terminal and storage medium |
CN111612100A (en) * | 2020-06-04 | 2020-09-01 | 商汤集团有限公司 | Object re-recognition method and device, storage medium and computer equipment |
CN111612100B (en) * | 2020-06-04 | 2023-11-03 | 商汤集团有限公司 | Object re-identification method, device, storage medium and computer equipment |
WO2021243947A1 (en) * | 2020-06-04 | 2021-12-09 | 商汤集团有限公司 | Object re-identification method and apparatus, and terminal and storage medium |
CN112070010B (en) * | 2020-09-08 | 2024-03-22 | 长沙理工大学 | Pedestrian re-recognition method for enhancing local feature learning by combining multiple-loss dynamic training strategies |
CN112070010A (en) * | 2020-09-08 | 2020-12-11 | 长沙理工大学 | Pedestrian re-recognition method combining multi-loss dynamic training strategy to enhance local feature learning |
CN112784674A (en) * | 2020-11-13 | 2021-05-11 | 北京航空航天大学 | Cross-domain identification method of key personnel search system based on class center self-adaption |
CN112488035A (en) * | 2020-12-14 | 2021-03-12 | 南京信息工程大学 | Cross-domain pedestrian re-identification method based on antagonistic neural network |
CN112488035B (en) * | 2020-12-14 | 2024-04-26 | 南京信息工程大学 | Cross-domain pedestrian re-identification method based on antagonistic neural network |
CN112597871A (en) * | 2020-12-18 | 2021-04-02 | 中山大学 | Unsupervised vehicle re-identification method and system based on two-stage clustering and storage medium |
CN112597871B (en) * | 2020-12-18 | 2023-07-18 | 中山大学 | Unsupervised vehicle re-identification method, system and storage medium based on two-stage clustering |
CN113096080B (en) * | 2021-03-30 | 2024-01-16 | 四川大学华西第二医院 | Image analysis method and system |
CN113096080A (en) * | 2021-03-30 | 2021-07-09 | 四川大学华西第二医院 | Image analysis method and system |
CN113378620A (en) * | 2021-03-31 | 2021-09-10 | 中交第二公路勘察设计研究院有限公司 | Cross-camera pedestrian re-identification method in surveillance video noise environment |
CN113239782A (en) * | 2021-05-11 | 2021-08-10 | 广西科学院 | Pedestrian re-identification system and method integrating multi-scale GAN and label learning |
CN113420639A (en) * | 2021-06-21 | 2021-09-21 | 南京航空航天大学 | Method and device for establishing near-ground infrared target data set based on generation countermeasure network |
Also Published As
Publication number | Publication date |
---|---|
CN109961051B (en) | 2022-11-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109961051A (en) | Pedestrian re-identification method based on clustering and block feature extraction | |
CN111126360B (en) | Cross-domain pedestrian re-identification method based on unsupervised combined multi-loss model | |
CN105574505B (en) | Method and system for cross-camera human target re-identification | |
Johnson et al. | Clustered pose and nonlinear appearance models for human pose estimation. | |
Oliva et al. | Scene-centered description from spatial envelope properties | |
CN104572804B (en) | Method and system for video object retrieval | |
Mei et al. | Robust multitask multiview tracking in videos | |
CN110717411A (en) | Pedestrian re-identification method based on deep layer feature fusion | |
CN109101865A (en) | Pedestrian re-identification method based on deep learning | |
CN104268586B (en) | Multi-view action recognition method | |
CN106897669B (en) | Pedestrian re-identification method based on consistent iteration multi-view migration learning | |
CN104966075B (en) | Face recognition method and system based on two-dimensional discriminant features | |
Parde et al. | Face and image representation in deep CNN features | |
CN102147812A (en) | Landmark building image classification method based on three-dimensional point cloud model | |
Dowson et al. | Simultaneous modeling and tracking (smat) of feature sets | |
CN106096528B (en) | Cross-view gait recognition method based on two-dimensional coupled marginal Fisher analysis | |
Zhang et al. | Joint discriminative representation learning for end-to-end person search | |
CN108830222A (en) | Micro-expression recognition method based on informative and representative active learning | |
CN105718934A (en) | Pest image feature learning and recognition method based on low-rank sparse coding | |
Wang et al. | Probabilistic nearest neighbor search for robust classification of face image sets | |
Sokolova et al. | Methods of gait recognition in video | |
US20120257819A1 (en) | Vision-Based Object Detection by Part-Based Feature Synthesis | |
Zhou et al. | Modeling perspective effects in photographic composition | |
CN108121970A (en) | Pedestrian re-identification method based on difference matrix and matrix metric | |
CN105404871B (en) | Low-resolution pedestrian matching method between cameras with non-overlapping fields of view based on multi-scale joint learning | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |