CN109190513A - Vehicle re-identification method and system combining image saliency detection and a neural network - Google Patents
- Publication number
- CN109190513A (application CN201810921051.6A)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- image
- neural network
- feature
- external appearance
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/54—Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/08—Detecting or categorising vehicles
Abstract
The invention discloses a vehicle re-identification method and system combining image saliency detection and a neural network. The method includes: performing saliency detection on an original image of a vehicle using the SIM algorithm to obtain a salient appearance feature image of the vehicle; inputting the salient appearance feature image and the original image together into a neural network for training; and, according to the training result of the neural network, extracting vehicle features and re-identifying the vehicle. The system includes a saliency detection module, a training module and a re-identification module; the system further includes a memory and a processor. The invention can re-identify a vehicle from its salient appearance feature image, enhancing the robustness of the extracted features; in addition, the invention enables the neural network to perform targeted feature learning on the salient feature regions of the vehicle image, giving higher efficiency. The invention can be widely applied in the technical field of image recognition.
Description
Technical field
The present invention relates to the technical field of image recognition, and in particular to a vehicle re-identification method and system combining image saliency detection and a neural network.
Background art
Checkpoint-type video surveillance equipment, such as security checkpoint and electronic-police cameras, can capture high-definition vehicle images from which various details of a vehicle can be obtained. Associating vehicle information across multi-point checkpoint images in a road network helps analyze a vehicle's driving trajectory and mine its travel patterns, and in security applications it helps quickly locate and track vehicles involved in incidents. Vehicle re-identification across multi-point checkpoint images has therefore become a research hotspot. The main challenges in current vehicle re-identification research are: (1) vehicle images shot by different surveillance cameras vary in resolution, illumination and vehicle angle or pose, which affects recognition; (2) vehicle brands, models and model years are numerous, and the differences between many different models are not obvious; (3) vehicles of the same brand and model look almost identical and are difficult to distinguish.
Vehicle re-identification research belongs to the broader field of object re-identification, and existing methods can be divided into two lines of research: metric learning and feature learning. Metric-learning methods use sample training to learn a suitable distance metric under which the similarity of same-class samples increases and that of different-class samples decreases; that is, metric learning finds a reasonable feature-space mapping so that the distribution of sample features in the new space is more reasonable. Examples include methods based on Mahalanobis distance learning and the ranking support vector machine (RankSVM). Such methods usually require hand-crafted features, their recognition performance depends to some extent on the feature extraction, and their generalization ability is poor. Feature-learning methods combine various kinds of features, such as color features and scale-invariant features, to obtain better re-identification performance. Because re-identification places high demands on the robustness of the extracted features, especially when no license-plate information is available and the differences between vehicle features are not obvious, traditional feature-learning methods find it difficult to achieve good recognition results. Many current methods therefore combine convolutional neural networks for feature training and extraction, such as networks using the triplet loss (Triplet Loss) as the loss function, which train the distances of positive and negative samples to an anchor sample so as to maximize inter-class variance and minimize intra-class variance.
Vehicle re-identification can generally be achieved through license-plate recognition. However, when a plate has been altered or forged, when a vehicle is unlicensed, or when plate recognition fails, correct re-identification must instead rely on some unique appearance features of the vehicle. In road-checkpoint vehicle images, because the brands and models of vehicles are numerous and vehicles of the same model and color are seen everywhere, the appearance differences between vehicles of the same brand and model year are not obvious; without relying on license-plate information, re-identifying vehicles from color, scale-invariant features and the like with traditional metric-learning and feature-learning methods is quite difficult. Among current neural-network-based methods, networks trained with a triplet loss function are hard to train and difficult to converge; moreover, loss-function-based optimization of neural-network feature learning does not selectively learn the important features, so the computational load is large and the efficiency is low.
Summary of the invention
In order to solve the above technical problems, the object of the invention is to provide a robust and efficient vehicle re-identification method and system combining image saliency detection and a neural network.
The first technical solution adopted by the present invention is:
A vehicle re-identification method combining image saliency detection and a neural network, comprising the following steps:
performing saliency detection on the original image of a vehicle using the SIM algorithm to obtain the salient appearance feature image of the vehicle;
inputting the salient appearance feature image and the original image of the vehicle together into a neural network for training;
according to the training result of the neural network, extracting vehicle features and re-identifying the vehicle.
Further, the step of performing saliency detection on the original image of the vehicle using the SIM algorithm to obtain the salient appearance feature image of the vehicle comprises the following steps:
applying a wavelet transform to each of the three color channels of the original image;
according to the wavelet transform results of the three color channels, using binary filters to compute, for each color channel, the center-point activity coefficients and the neighborhood-region activity coefficients of the wavelet transform result;
according to the center-point and neighborhood-region activity coefficients, computing the activity contrast between the center point and the neighborhood region for each color channel;
adjusting the weight of the center-surround activity contrast of each color channel with an extended contrast sensitivity function;
applying an inverse wavelet transform to the weight-adjusted result to obtain the salient appearance feature image of the vehicle.
Further, the step of applying an inverse wavelet transform to the weight-adjusted result to obtain the salient appearance feature image of the vehicle comprises the following steps:
applying an inverse wavelet transform to the weight-adjusted result of each color channel to obtain the saliency detection result of each color channel;
normalizing the saliency detection results of the color channels to obtain a saliency gray-level image;
multiplying the saliency gray-level image and the original image of the vehicle pixel by pixel to obtain the salient appearance feature image of the vehicle.
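The final step above, pixel-wise multiplication of the saliency gray-level image with the original image, can be sketched in a few lines. The array names and toy shapes are illustrative, not from the patent; the only assumption is that the saliency map has already been normalized to [0, 1].

```python
import numpy as np

def apply_saliency_mask(original_rgb, saliency_map):
    # original_rgb: (H, W, 3) image; saliency_map: (H, W) in [0, 1].
    # Broadcasting the map over the channel axis multiplies every
    # color channel of a pixel by that pixel's saliency weight.
    return original_rgb.astype(float) * saliency_map[..., None]

img = np.full((2, 2, 3), 100.0)                 # toy 2x2 RGB image
smap = np.array([[1.0, 0.5], [0.0, 1.0]])       # toy saliency map
out = apply_saliency_mask(img, smap)
```

Salient regions keep their original pixel values while non-salient regions are suppressed toward zero, which is what makes the result a "salient appearance feature image".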
Further, the step of inputting the salient appearance feature image and the original image of the vehicle together into a neural network for training comprises the following steps:
classifying vehicles according to their license-plate information and assigning a vehicle ID to each class;
obtaining the salient appearance feature image and the original image of the vehicle, and generating from them a tensor with six channels;
using a non-linear mapping, converting the salient appearance feature image of the vehicle into a semantic feature image of the vehicle through convolutional layers;
using down-sampling, processing the semantic feature image of the vehicle through max-pooling layers so that the semantic feature image of the vehicle keeps geometric and translation invariance;
performing feature extraction and feature combination through fully connected layers to obtain the feature vector of the vehicle.
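Two of the steps above can be illustrated with a short sketch: building the six-channel input tensor by stacking the original image with its salient appearance feature image, and 2×2 max pooling of a feature map. The function names are ours; the patent describes the network layout only qualitatively.

```python
import numpy as np

def make_six_channel(original_rgb, salient_rgb):
    # Stack the 3-channel original and the 3-channel salient image
    # along the channel axis to form the (H, W, 6) input tensor.
    return np.concatenate([original_rgb, salient_rgb], axis=-1)

def maxpool2x2(fmap):
    # Non-overlapping 2x2 max pooling: each output value is the
    # maximum of one 2x2 block, halving both spatial dimensions.
    h, w = fmap.shape[0] // 2, fmap.shape[1] // 2
    return fmap[:2 * h, :2 * w].reshape(h, 2, w, 2).max(axis=(1, 3))

x = make_six_channel(np.zeros((4, 4, 3)), np.ones((4, 4, 3)))
fm = np.array([[1., 2., 0., 0.],
               [3., 4., 0., 0.],
               [0., 0., 5., 6.],
               [0., 0., 8., 7.]])
pooled = maxpool2x2(fm)
```

Keeping only the per-block maximum is what gives the pooled semantic feature image its small-shift (translation) tolerance: the response survives as long as the strongest activation stays inside its 2×2 block.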
Further, the step of extracting vehicle features and re-identifying the vehicle according to the training result of the neural network comprises the following steps:
obtaining the training result of the neural network and removing the Softmax layer;
inputting the images of the query set and the candidate set into the neural network to obtain the feature vector of the target vehicle and the feature vectors of the vehicles to be identified, wherein the query set stores the original images of the target vehicle and the candidate set stores the original images of the vehicles to be identified;
matching the feature vector of the target vehicle against the feature vectors of the vehicles to be identified to re-identify the vehicle.
Further, the feature vector is a 1024-dimensional vector.
Further, the step of matching the feature vector of the target vehicle against the feature vectors of the vehicles to be identified to re-identify the vehicle comprises the following steps:
computing the Euclidean distance between the feature vector of the target vehicle and the feature vector of each vehicle to be identified;
sorting the computed Euclidean distances in ascending order;
according to the ascending sort result, matching the vehicle ID of the corresponding vehicle to be identified with that of the target vehicle to obtain the vehicle re-identification result.
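The matching steps above can be sketched as follows, with toy 3-dimensional vectors and made-up vehicle IDs standing in for the patent's 1024-dimensional feature vectors:

```python
import numpy as np

def rank_candidates(query_vec, gallery):
    # Euclidean distance from the target (query) feature vector to
    # each candidate vector, then candidate IDs sorted in ascending
    # order of distance: the best match comes first.
    dists = {vid: float(np.linalg.norm(query_vec - vec))
             for vid, vec in gallery.items()}
    return sorted(dists, key=dists.get)

gallery = {"car_A": np.array([1.0, 0.0, 0.0]),
           "car_B": np.array([0.0, 1.0, 0.0]),
           "car_C": np.array([0.9, 0.1, 0.0])}
ranking = rank_candidates(np.array([1.0, 0.0, 0.0]), gallery)
```

The vehicle ID at the head of the ranking is taken as the re-identification result for the target vehicle.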
The second technical solution adopted by the present invention is:
A vehicle re-identification system combining image saliency detection and a neural network, comprising:
a saliency detection module for performing saliency detection on the original image of a vehicle using the SIM algorithm to obtain the salient appearance feature image of the vehicle;
a training module for inputting the salient appearance feature image and the original image of the vehicle together into a neural network for training;
a re-identification module for extracting vehicle features and re-identifying the vehicle according to the training result of the neural network.
Further, the saliency detection module comprises:
a wavelet transform unit for applying a wavelet transform to each of the three color channels of the original image;
an activity coefficient calculation unit for computing, for each color channel, the center-point and neighborhood-region activity coefficients of the wavelet transform result using binary filters;
an activity contrast calculation unit for computing, for each color channel, the activity contrast between the center point and the neighborhood region from the activity coefficients;
a weight adjustment unit for adjusting the weight of the center-surround activity contrast of each color channel with an extended contrast sensitivity function;
an inverse wavelet transform unit for applying an inverse wavelet transform to the weight-adjusted result to obtain the salient appearance feature image of the vehicle.
The third technical solution adopted by the present invention is:
A vehicle re-identification system combining image saliency detection and a neural network, comprising:
a memory for storing a program;
a processor for loading the program to execute the vehicle re-identification method combining image saliency detection and a neural network as described in the first technical solution.
The beneficial effects of the present invention are: the invention uses the SIM algorithm to perform saliency detection on vehicles, so that even when the appearance differences between vehicles are small, a vehicle can be re-identified from its salient appearance feature image, enhancing the robustness of the extracted features. In addition, the invention inputs the salient appearance feature image and the original image of the vehicle together into the neural network for training, enabling the network to perform targeted feature learning on the salient feature regions of the vehicle image, with higher efficiency.
Brief description of the drawings
Fig. 1 is a flow chart of the steps of the vehicle re-identification method combining image saliency detection and a neural network of the present invention;
Fig. 2 is a flow chart of the SIM-based saliency detection steps in an embodiment of the present invention;
Fig. 3 is a schematic diagram of the horizontal neighborhood-region binary filter in an embodiment of the present invention;
Fig. 4 is a schematic diagram of the vertical neighborhood-region binary filter in an embodiment of the present invention;
Fig. 5 is a schematic diagram of the diagonal neighborhood-region binary filter in an embodiment of the present invention;
Fig. 6 is a schematic diagram of the horizontal center-point binary filter in an embodiment of the present invention;
Fig. 7 is a schematic diagram of the vertical center-point binary filter in an embodiment of the present invention;
Fig. 8 is a schematic diagram of the diagonal center-point binary filter in an embodiment of the present invention;
Fig. 9 is a schematic diagram of the pixel-wise multiplication of the original image and the saliency gray-level image in an embodiment of the present invention;
Fig. 10 is a schematic diagram of the neural network structure in an embodiment of the present invention.
Specific embodiments
The present invention is further explained and illustrated below through specific embodiments with reference to the accompanying drawings. The step numbers in the embodiments are provided only for ease of explanation and impose no restriction on the order of the steps; the execution order of the steps in an embodiment can be adaptively adjusted according to the understanding of those skilled in the art.
Referring to Fig. 1, the vehicle re-identification method combining image saliency detection and a neural network of the present invention comprises the following steps:
performing saliency detection on the original image of a vehicle using the SIM algorithm to obtain the salient appearance feature image of the vehicle;
inputting the salient appearance feature image and the original image of the vehicle together into a neural network for training;
according to the training result of the neural network, extracting vehicle features and re-identifying the vehicle.
As a further preferred embodiment, the step of performing saliency detection on the original image of the vehicle using the SIM algorithm to obtain the salient appearance feature image of the vehicle comprises the following steps:
applying a wavelet transform to each of the three color channels of the original image;
according to the wavelet transform results of the three color channels, using binary filters to compute, for each color channel, the center-point activity coefficients and the neighborhood-region activity coefficients of the wavelet transform result;
according to the center-point and neighborhood-region activity coefficients, computing the activity contrast between the center point and the neighborhood region for each color channel;
adjusting the weight of the center-surround activity contrast of each color channel with an extended contrast sensitivity function;
applying an inverse wavelet transform to the weight-adjusted result to obtain the salient appearance feature image of the vehicle.
As a further preferred embodiment, the step of applying an inverse wavelet transform to the weight-adjusted result to obtain the salient appearance feature image of the vehicle comprises the following steps:
applying an inverse wavelet transform to the weight-adjusted result of each color channel to obtain the saliency detection result of each color channel;
normalizing the saliency detection results of the color channels to obtain a saliency gray-level image;
multiplying the saliency gray-level image and the original image of the vehicle pixel by pixel to obtain the salient appearance feature image of the vehicle.
As a further preferred embodiment, the step of inputting the salient appearance feature image and the original image of the vehicle together into a neural network for training comprises the following steps:
classifying vehicles according to their license-plate information and assigning a vehicle ID to each class;
obtaining the salient appearance feature image and the original image of the vehicle, and generating from them a tensor with six channels;
using a non-linear mapping, converting the salient appearance feature image of the vehicle into a semantic feature image of the vehicle through convolutional layers;
using down-sampling, processing the semantic feature image of the vehicle through max-pooling layers so that it keeps geometric and translation invariance;
performing feature extraction and feature combination through fully connected layers to obtain the feature vector of the vehicle.
As a further preferred embodiment, the step of extracting vehicle features and re-identifying the vehicle according to the training result of the neural network comprises the following steps:
obtaining the training result of the neural network and removing the Softmax layer;
inputting the images of the query set and the candidate set into the neural network to obtain the feature vector of the target vehicle and the feature vectors of the vehicles to be identified, wherein the query set stores the original images of the target vehicle and the candidate set stores the original images of the vehicles to be identified;
matching the feature vector of the target vehicle against the feature vectors of the vehicles to be identified to re-identify the vehicle.
As a further preferred embodiment, the feature vector is a 1024-dimensional vector.
As a further preferred embodiment, the step of matching the feature vector of the target vehicle against the feature vectors of the vehicles to be identified to re-identify the vehicle comprises the following steps:
computing the Euclidean distance between the feature vector of the target vehicle and the feature vector of each vehicle to be identified;
sorting the computed Euclidean distances in ascending order;
according to the ascending sort result, matching the vehicle ID of the corresponding vehicle to be identified with that of the target vehicle to obtain the vehicle re-identification result.
Corresponding to the method of Fig. 1, the vehicle re-identification system combining image saliency detection and a neural network of the present invention comprises:
a saliency detection module for performing saliency detection on the original image of a vehicle using the SIM algorithm to obtain the salient appearance feature image of the vehicle;
a training module for inputting the salient appearance feature image and the original image of the vehicle together into a neural network for training;
a re-identification module for extracting vehicle features and re-identifying the vehicle according to the training result of the neural network.
As a further preferred embodiment, the saliency detection module comprises:
a wavelet transform unit for applying a wavelet transform to each of the three color channels of the original image;
an activity coefficient calculation unit for computing, for each color channel, the center-point and neighborhood-region activity coefficients of the wavelet transform result using binary filters;
an activity contrast calculation unit for computing, for each color channel, the activity contrast between the center point and the neighborhood region from the activity coefficients;
a weight adjustment unit for adjusting the weight of the center-surround activity contrast of each color channel with an extended contrast sensitivity function;
an inverse wavelet transform unit for applying an inverse wavelet transform to the weight-adjusted result to obtain the salient appearance feature image of the vehicle.
Corresponding to the method of Fig. 1, the vehicle re-identification system combining image saliency detection and a neural network of the present invention comprises:
a memory for storing a program;
a processor for loading the program to execute the vehicle re-identification method combining image saliency detection and a neural network of the present invention.
Taking vehicle images captured by the camera of a public-security checkpoint as an example, the specific implementation steps of the vehicle re-identification method combining image saliency detection and a neural network of the present invention are described in detail below:
S1. Photograph passing vehicles with a camera mounted at a public-security checkpoint to obtain the original image of a vehicle.
S2. Perform saliency detection on the original image of the vehicle using the SIM algorithm to obtain the salient appearance feature image of the vehicle.
As shown in Fig. 2, step S2 specifically comprises the following steps:
S21. Apply a wavelet transform to each of the three color channels of the original image. Specifically, each input color vehicle image contains three color channels (RGB); this embodiment denotes each channel c_i (i = 1, 2, 3) and applies a wavelet transform to each color channel separately, computed as:
{w_{s,o}} = WT(c), s = 1, 2, ..., n; o = h, v, d,
where c is any one color channel of the image; WT(*) denotes the wavelet transform; w_{s,o} is the wavelet transform result in direction o at decomposition level s; h, v and d denote the horizontal, vertical and diagonal directions; and s is the level, i.e., the scale, of the wavelet transform. The total number of wavelet transform levels is computed as n = log2 min(W, H), where W × H is the resolution of the image.
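The level-count formula can be checked with a few lines. Flooring the logarithm to an integer for sizes that are not powers of two is our assumption; the patent states only n = log2 min(W, H).

```python
import math

def wavelet_levels(width, height):
    # n = log2(min(W, H)), floored to an integer level count
    # (flooring is an assumption for non-power-of-two sizes).
    return int(math.log2(min(width, height)))

n = wavelet_levels(1920, 1080)  # a full-HD checkpoint frame
```

For a 1920×1080 frame this gives 10 decomposition levels, driven by the shorter side of the image.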
S22. According to the wavelet transform results of the three color channels, use binary filters to compute, for each color channel, the center-point activity coefficients and the neighborhood-region activity coefficients of the wavelet transform result. The center-point activity coefficient and the neighborhood-region activity coefficient are obtained by convolving the wavelet transform result with, respectively, the center-point binary filter and the neighborhood-region binary filter for direction o, where ⊛ denotes the convolution operation. The six binary filters of the invention — the horizontal neighborhood-region binary filter, the vertical neighborhood-region binary filter, the diagonal neighborhood-region binary filter, the horizontal center-point binary filter, the vertical center-point binary filter and the diagonal center-point binary filter — are shown in Figs. 3, 4, 5, 6, 7 and 8, respectively.
S23. According to the center-point and neighborhood-region activity coefficients, compute the activity contrast between the center point and the neighborhood region for each color channel. Here z_{s,o} denotes the center-surround activity contrast in direction o at scale s (i.e., the activity contrast between the center point and the neighborhood region), and r_{s,o} denotes the ratio of the center activity coefficient to the neighborhood activity coefficient. The contrast reflects how a region of the image relates to its surroundings: the larger the value of z_{s,o}, the higher the activity of the central area relative to the surrounding area — that is, for that image, the central area can be considered to have higher saliency.
S24: Adjust the weight of the activity contrast between the center point and the neighborhood region of each color channel by means of an extended contrast sensitivity function. Step S24 is specifically as follows: after z_{s,o} has been computed, it must be further adjusted by a weighting function. The SIM algorithm uses the Extended Contrast Sensitivity Function (ECSF) to adjust the center-neighborhood activity contrast. The ECSF is a simple linear function whose coefficients vary with the level of the wavelet transform; its calculation formula is: ECSF(z_{s,o}, s) = z_{s,o} · g(s) + k(s), where g(s) and k(s) are the coefficients of the ECSF function; both are variables that decay as the scale s varies. In the present embodiment, the result of the weight adjustment, α_{s,o}, is computed as α_{s,o} = ECSF(z_{s,o}, s).
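The ECSF can be sketched as below. The patent states only that g(s) and k(s) decay with the scale; the exponential forms used here are placeholders, not the patent's actual coefficient formulas:

```python
import math

def ecsf(z, s, g=lambda s: math.exp(-s / 2.0), k=lambda s: math.exp(-s / 4.0)):
    """Extended Contrast Sensitivity Function: ECSF(z, s) = z * g(s) + k(s).

    g(s) and k(s) here are assumed exponential decays; the patent only
    says they decay as the scale s grows.
    """
    return z * g(s) + k(s)

# alpha_{s,o} = ECSF(z_{s,o}, s): coarser scales receive less weight
print(ecsf(1.0, 1))  # weight at a fine scale
print(ecsf(1.0, 4))  # smaller weight at a coarse scale
```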
S25: Apply an inverse wavelet transform to the weight-adjusted result to obtain the salient appearance feature image of the vehicle. The present invention first obtains α_{s,o} from the weight adjustment and then applies the inverse wavelet transform to α_{s,o}; the calculation formula of the inverse wavelet transform is: S_c = WT⁻¹{α_{s,o}}, s = 1, 2, ..., n, o = h, v, d, where S_c is the saliency detection result of channel c and WT⁻¹ denotes the inverse wavelet transform.
The present invention applies the same saliency detection operation to all channels (i.e., the three RGB channels) to obtain the final saliency gray-level image S_map, where normalize(*) denotes a normalization operation that maps the computed saliency image into a gray-level image whose pixel values lie in the interval [0, 1].
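The normalization step can be sketched as follows. The patent only requires that normalize(*) map the saliency image into [0, 1]; min-max scaling is one common way to do this and is an assumption here:

```python
def normalize(img):
    """Min-max normalize a 2D saliency map into [0, 1].

    Min-max scaling is an assumed choice; the patent does not specify
    which normalization is used.
    """
    flat = [v for row in img for v in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:  # flat image: return all zeros to avoid division by zero
        return [[0.0] * len(row) for row in img]
    return [[(v - lo) / (hi - lo) for v in row] for row in img]

s_map = normalize([[2.0, 4.0], [6.0, 10.0]])
print(s_map)  # -> [[0.0, 0.25], [0.5, 1.0]]
```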
As shown in Fig. 9, after obtaining the saliency gray-level image, the present invention performs a pixel-wise multiplication between the saliency gray-level image and the original image of the vehicle to obtain the salient appearance feature image of the vehicle, where I denotes the original image of the vehicle; S_map denotes the saliency gray-level image of the image; the multiplication operator denotes element-wise multiplication of the image matrices at corresponding positions; and I_sal denotes the result of the multiplication (i.e., the salient appearance feature image of the vehicle). Through the salient appearance feature image of the vehicle, the present invention obtains the salient regions of the original image; that is, each pixel of the original image is multiplied by a specific weight: a pixel belonging to a salient region is multiplied by a large weight, and any other pixel by a small weight.
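The pixel-wise weighting described above can be sketched as follows (plain nested lists stand in for the image arrays):

```python
def salient_appearance(original, s_map):
    """I_sal = S_map * I: weight each pixel of the original image by the
    corresponding saliency value (element-wise multiplication).

    `original` is an H x W x 3 nested list (RGB); `s_map` is an H x W
    saliency gray-level image with values in [0, 1].
    """
    return [
        [[channel * s_map[i][j] for channel in original[i][j]]
         for j in range(len(original[i]))]
        for i in range(len(original))
    ]

image = [[[100, 150, 200], [100, 150, 200]]]   # 1 x 2 RGB image
saliency = [[1.0, 0.5]]                        # salient vs. less-salient pixel
print(salient_appearance(image, saliency))
# the salient pixel keeps its values; the other pixel is attenuated by half
```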
As shown in Fig. 10, S3: input the salient appearance feature image and the original image of the vehicle together into a neural network for training. The present invention extracts image features using a convolutional neural network; the base network of the neural network of the present invention is VGG16, whose structure is shown in Table 1. The convolutional neural network shown in Table 1 comprises convolutional layers (convolution), max pooling layers (max pooling), and fully connected layers (fully connected).
Table 1: VGG16 convolutional neural network structure
The biggest difference between the convolutional neural network architecture of the present invention and a traditional convolutional neural network is that the proposed network model has two input parts: the first is the original image of the vehicle, and the second is the salient appearance feature image corresponding to the original image. The present invention concatenates the salient appearance feature image, as auxiliary information, with the original image to form a tensor with 6 channels, which is then fed into the subsequent network layers. In this way, the input simultaneously contains the saliency information of the image, and this saliency information reinforces the features, enabling the neural network to extract robust features more reliably and effectively.
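The channel concatenation described above can be sketched as follows (plain nested lists stand in for the image tensors; a real implementation would use a tensor library):

```python
def make_six_channel_tensor(original, salient):
    """Concatenate the 3-channel original image and the 3-channel salient
    appearance feature image along the channel axis, yielding an H x W x 6
    input tensor as described in the patent.
    """
    h, w = len(original), len(original[0])
    return [[original[i][j] + salient[i][j] for j in range(w)]
            for i in range(h)]

orig = [[[1, 2, 3]]]      # 1 x 1 RGB original image
sal = [[[4, 5, 6]]]       # matching salient appearance feature image
tensor = make_six_channel_tensor(orig, sal)
print(len(tensor[0][0]))  # -> 6 channels
```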
In addition, the present invention changes the number of neurons in the fully connected layer of the neural network to 1024 and trains the network on a classification task; the network obtained after training is able to extract features, so the output of the fully connected layer is the image feature extracted by the neural network. In the original VGG16 neural network, the fully connected layer has 4096 neurons, which yields a very high-dimensional vector; such a high-dimensional vector not only makes the extracted features abnormally sparse but also reduces the efficiency of the subsequent feature matching. The present invention therefore uses 1024 dimensions for neural network training, which improves the robustness of the feature extraction.
Specifically, step S3 comprises the following steps:
S31: Classify the vehicles according to their license plate information, and assign a vehicle ID to each class. Step S31 is specifically as follows: the present invention takes the 6-channel tensor formed from the original image and the salient appearance feature image produced by the saliency detection module as the input of the neural network for training. The vehicles under each license plate are assigned distinct IDs (identities), each vehicle ID is treated as one class, and the vehicle re-identification problem is cast as a classification task for training the network.
S32: Obtain the salient appearance feature image and the original image of the vehicle, and generate a tensor from the salient appearance feature image and the original image; the tensor has six channels.
S33: Using a nonlinear mapping, convert the salient appearance feature image of the vehicle into a semantic feature image of the vehicle through the convolutional layers. Step S33 is specifically as follows: the role of a convolutional layer is to convert low-level image features into high-level semantic features through a nonlinear mapping. The input of a convolutional layer of the present invention is a three-dimensional matrix X of size s1 × s2 × s3, where s3 is the number of input two-dimensional feature maps and s1 × s2 is the size of each two-dimensional feature map x_i. The output of the convolutional layer is a three-dimensional matrix Y of size t1 × t2 × t3, where t3 is the number of output two-dimensional feature maps and t1 × t2 is the size of each output two-dimensional feature map y_j. The calculation formula for y_j is y_j = f(Σ_i x_i ⊛ k_ij), where x_i is the i-th input two-dimensional feature map of the convolutional layer; y_j is the j-th output two-dimensional feature map; ⊛ denotes the convolution operation; k_ij denotes the two-dimensional convolution kernel connecting the j-th output feature map to the i-th input feature map, whose parameters are obtained through network training; and f(x) is the activation function, defined as f(x) = max(0, x).
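The convolution-plus-ReLU computation described above can be sketched in plain Python as follows. This is a toy 'valid' convolution (no padding, implemented as cross-correlation, which matches convolution for symmetric kernels); the real network uses VGG16's layer configuration:

```python
def relu(x):
    return max(0.0, x)

def conv_layer(inputs, kernels, biases):
    """y_j = f(sum_i k_ij (*) x_i + b_j) with f(x) = max(0, x).

    `inputs`  : list of input feature maps (each a nested list)
    `kernels` : kernels[j][i] is the 2D kernel linking input map i to
                output map j
    `biases`  : one bias per output map (the patent does not state a bias
                for the convolutional layer; adding one is a common
                convention and an assumption here)
    """
    outputs = []
    for j, kset in enumerate(kernels):
        kh, kw = len(kset[0]), len(kset[0][0])
        h, w = len(inputs[0]) - kh + 1, len(inputs[0][0]) - kw + 1
        ymap = [[0.0] * w for _ in range(h)]
        for i, x in enumerate(inputs):
            k = kset[i]
            for r in range(h):
                for c in range(w):
                    ymap[r][c] += sum(x[r + u][c + v] * k[u][v]
                                      for u in range(kh) for v in range(kw))
        outputs.append([[relu(v + biases[j]) for v in row] for row in ymap])
    return outputs

# one 3x3 input map, one output map with a 2x2 averaging kernel
x = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
k = [[[[0.25, 0.25], [0.25, 0.25]]]]  # kernels[j][i]
y = conv_layer([x], k, biases=[0.0])
print(y)  # -> [[[3.0, 4.0], [6.0, 7.0]]]
```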
S34: Using downsampling, process the semantic feature image of the vehicle through the max pooling layer, to ensure that the semantic feature image of the vehicle remains invariant to geometric transformations and translations. Step S34 is specifically as follows: the role of the max pooling layer is to give the features geometric and translation invariance through downsampling. The calculation formula of the max pooling layer is as follows:
y_{i,j,k} = max(b_{i-p,j-q,k}, b_{i-p+1,j-q+1,k}, ..., b_{i+p,j+q,k}),
where y_{i,j,k} is the pixel value at coordinate (i, j) of the k-th output feature map, and b_{i+p,j+q,k} is the pixel value at coordinate (i+p, j+q) of the k-th input feature map.
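The max pooling formula above (shown here for a single channel) can be sketched as:

```python
def max_pool(feature_map, size=2, stride=2):
    """Max pooling: each output pixel is the maximum over a size x size
    window of the input feature map (single channel shown; window size
    and stride are common defaults, not specified in the patent).
    """
    h, w = len(feature_map), len(feature_map[0])
    out = []
    for i in range(0, h - size + 1, stride):
        row = []
        for j in range(0, w - size + 1, stride):
            row.append(max(feature_map[i + u][j + v]
                           for u in range(size) for v in range(size)))
        out.append(row)
    return out

fm = [[1, 3, 2, 4],
      [5, 6, 1, 2],
      [0, 2, 9, 8],
      [1, 1, 7, 3]]
print(max_pool(fm))  # -> [[6, 4], [2, 9]]
```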
S35: Perform feature extraction and feature combination through the fully connected layer to obtain the feature vector of the vehicle. Step S35 is specifically as follows: after the alternating processing of multiple convolutional and pooling layers, the neural network of the present invention can include, according to practical needs, one or more fully connected layers to combine the features and output the final extracted feature. Every neuron in a fully connected layer is connected to all the neurons of the input layer; that is, each neuron performs a weighted summation over all the features of the input layer. This weighted summation can be written as h_j^l = Σ_i w_{ij}^l · x_i^{l-1} + b_j^l, where l denotes the fully connected layer; h_j^l is the j-th neuron of layer l; w_{ij}^l is the parameter connecting the j-th neuron of layer l to all the neurons of the i-th input feature map of layer l−1; and b_j^l is the bias term.
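A single fully connected neuron, as described above, reduces to a weighted sum plus a bias:

```python
def fully_connected(features, weights, bias):
    """One fully connected neuron: a weighted sum over all input
    features plus a bias term, as in the weighted-summation formula.
    """
    return sum(f * w for f, w in zip(features, weights)) + bias

# flatten all feature maps into one vector, then apply the neuron
flat = [1.0, 2.0, 3.0]
print(fully_connected(flat, [0.5, 0.5, 0.5], bias=1.0))  # -> 4.0
```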
S4: According to the training result of the neural network, extract vehicle features and re-identify the vehicle.
The step S4 comprises the following steps:
S41: Obtain the training result of the neural network and remove the Softmax layer. After completing the training of the neural network, the present invention removes the Softmax layer and takes the output of the last fully connected layer as the extracted feature; this feature is a 1024-dimensional vector.
S42: Input the images in the query set and the candidate set into the neural network to obtain the feature vector of the target vehicle and the feature vectors of the vehicles to be identified; the query set stores the original images of the target vehicles, and the candidate set stores the original images of the vehicles to be identified.
S43: Perform query matching between the feature vector of the target vehicle and the feature vectors of the vehicles to be identified, thereby re-identifying the vehicle.
The step S43 specifically comprises the following steps:
S431: Compute the Euclidean distance between the feature vector of the target vehicle and the feature vector of each vehicle to be identified. The calculation formula of the Euclidean distance is:
dist = ||feature_query − feature_gallery||,
where dist denotes the distance between the feature vectors; feature_query denotes the feature vector of the image of the target vehicle; feature_gallery denotes the feature vector of the image of the vehicle to be identified; and ||·|| denotes the norm of a vector.
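The Euclidean distance formula above can be sketched as:

```python
import math

def euclidean_distance(feature_query, feature_gallery):
    """dist = ||feature_query - feature_gallery|| (Euclidean norm)."""
    return math.sqrt(sum((q - g) ** 2
                         for q, g in zip(feature_query, feature_gallery)))

# in the patent the features are 1024-dimensional; short vectors shown here
print(euclidean_distance([1.0, 2.0, 2.0], [0.0, 0.0, 0.0]))  # -> 3.0
```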
S432: Sort the computed Euclidean distances in ascending order; the Euclidean distance between the feature vector of the target vehicle and the feature vector of a vehicle to be identified that shares the same vehicle ID is smaller.
S433: According to the result of the ascending sort, match the vehicle IDs of the vehicles to be identified against the target vehicle to obtain the vehicle re-identification result.
After computing the Euclidean distances between the feature vectors, the present invention sorts them in ascending order. According to the sort result, it checks whether the vehicles to be identified that rank near the front (i.e., with smaller Euclidean distances) share the same vehicle ID as the corresponding target vehicle, and then computes the vehicle recognition rate and obtains the vehicle re-identification result.
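The ascending-sort matching described above can be sketched as follows (the names `car_A`/`car_B` and the pair layout are illustrative, not from the patent):

```python
import math

def rank_gallery(query_feature, gallery):
    """Sort gallery vehicles by ascending Euclidean distance to the query;
    the top-ranked candidates are the most likely re-identifications.

    `gallery` is a list of (vehicle_id, feature_vector) pairs.
    """
    def dist(feature):
        return math.sqrt(sum((q - g) ** 2
                             for q, g in zip(query_feature, feature)))
    return sorted(((dist(f), vid) for vid, f in gallery))

query = [1.0, 0.0]
gallery = [("car_A", [0.9, 0.1]),   # nearly identical appearance
           ("car_B", [0.0, 1.0])]   # very different appearance
ranking = rank_gallery(query, gallery)
print([vid for _, vid in ranking])  # -> ['car_A', 'car_B']
```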
In summary, the vehicle re-identification method and system combining image saliency detection and a neural network of the present invention have the following advantages:
1) The present invention realizes saliency detection of vehicles based on the SIM algorithm; when the appearance differences between vehicles are small, vehicles can still be re-identified according to their salient appearance feature images, which enhances the robustness of the extracted features.
2) The present invention inputs the salient appearance feature image and the original image of the vehicle together into the neural network for training, so that the neural network performs feature learning specifically on the salient regions of the vehicle image, with higher efficiency.
3) The present invention performs vehicle re-identification using a method that combines saliency detection with a convolutional neural network, so checkpoint vehicle images can be re-identified accurately.
4) When the appearance differences between vehicles are small, the present invention can re-identify vehicles from their unique appearance features without relying on license plate information.
5) The present invention incorporates neural network methods, requires no manual feature engineering, and is more efficient.
The above is a description of the preferred embodiments of the present invention, but the present invention is not limited to the above embodiments. Those skilled in the art can also make various equivalent modifications or substitutions without departing from the spirit of the present invention, and these equivalent modifications or substitutions are all included within the scope defined by the claims of the present application.
Claims (10)
1. A vehicle re-identification method combining image saliency detection and a neural network, characterized by comprising the following steps:
performing saliency detection on an original image of a vehicle using the SIM algorithm to obtain a salient appearance feature image of the vehicle;
inputting the salient appearance feature image and the original image of the vehicle together into a neural network for training;
according to the training result of the neural network, extracting vehicle features and re-identifying the vehicle.
2. The vehicle re-identification method combining image saliency detection and a neural network according to claim 1, characterized in that the step of performing saliency detection on the original image of the vehicle using the SIM algorithm to obtain the salient appearance feature image of the vehicle comprises the following steps:
applying a wavelet transform to each of the three color channels of the original image;
according to the wavelet transform results of the three color channels, using binary filters to compute, for each color channel, the center-point activity coefficient and the neighborhood-region activity coefficient of the wavelet transform result;
according to the center-point activity coefficient and the neighborhood-region activity coefficient, computing, for each color channel, the activity contrast between the center point and the neighborhood region;
adjusting the weight of the activity contrast between the center point and the neighborhood region of each color channel by means of an extended contrast sensitivity function;
applying an inverse wavelet transform to the weight-adjusted result to obtain the salient appearance feature image of the vehicle.
3. The vehicle re-identification method combining image saliency detection and a neural network according to claim 2, characterized in that the step of applying the inverse wavelet transform to the weight-adjusted result to obtain the salient appearance feature image of the vehicle comprises the following steps:
applying the inverse wavelet transform to the weight-adjusted result of each color channel to obtain the saliency detection result of each color channel;
normalizing the saliency detection result of each color channel to obtain a saliency gray-level image;
performing a pixel-wise multiplication between the saliency gray-level image and the original image of the vehicle to obtain the salient appearance feature image of the vehicle.
4. The vehicle re-identification method combining image saliency detection and a neural network according to claim 1, characterized in that the step of inputting the salient appearance feature image and the original image of the vehicle together into the neural network for training comprises the following steps:
classifying the vehicles according to their license plate information, and assigning a vehicle ID to each class;
obtaining the salient appearance feature image and the original image of the vehicle, and generating a tensor from the salient appearance feature image and the original image, the tensor having six channels;
using a nonlinear mapping, converting the salient appearance feature image of the vehicle into a semantic feature image of the vehicle through the convolutional layers;
using downsampling, processing the semantic feature image of the vehicle through the max pooling layer, to ensure that the semantic feature image of the vehicle remains invariant to geometric transformations and translations;
performing feature extraction and feature combination through the fully connected layer to obtain the feature vector of the vehicle.
5. The vehicle re-identification method combining image saliency detection and a neural network according to claim 4, characterized in that the step of extracting vehicle features and re-identifying the vehicle according to the training result of the neural network comprises the following steps:
obtaining the training result of the neural network and removing the Softmax layer;
inputting the images in the query set and the candidate set into the neural network to obtain the feature vector of the target vehicle and the feature vectors of the vehicles to be identified, wherein the query set stores the original images of the target vehicles and the candidate set stores the original images of the vehicles to be identified;
performing query matching between the feature vector of the target vehicle and the feature vectors of the vehicles to be identified, thereby re-identifying the vehicle.
6. The vehicle re-identification method combining image saliency detection and a neural network according to claim 5, characterized in that the feature vector is a 1024-dimensional vector.
7. The vehicle re-identification method combining image saliency detection and a neural network according to claim 5, characterized in that the step of performing query matching between the feature vector of the target vehicle and the feature vectors of the vehicles to be identified, thereby re-identifying the vehicle, comprises the following steps:
computing the Euclidean distance between the feature vector of the target vehicle and the feature vector of each vehicle to be identified;
sorting the computed Euclidean distances in ascending order;
according to the result of the ascending sort, matching the vehicle IDs of the vehicles to be identified against the target vehicle to obtain the vehicle re-identification result.
8. A vehicle re-identification system combining image saliency detection and a neural network, characterized by comprising:
a saliency detection module, configured to perform saliency detection on an original image of a vehicle using the SIM algorithm to obtain a salient appearance feature image of the vehicle;
a training module, configured to input the salient appearance feature image and the original image of the vehicle together into a neural network for training;
a re-identification module, configured to extract vehicle features and re-identify the vehicle according to the training result of the neural network.
9. The vehicle re-identification system combining image saliency detection and a neural network according to claim 8, characterized in that the saliency detection module comprises:
a wavelet transform unit, configured to apply a wavelet transform to each of the three color channels of the original image;
an activity coefficient calculation unit, configured to use binary filters to compute, for each color channel, the center-point activity coefficient and the neighborhood-region activity coefficient of the wavelet transform result according to the wavelet transform results of the three color channels;
an activity contrast calculation unit, configured to compute, for each color channel, the activity contrast between the center point and the neighborhood region according to the center-point activity coefficient and the neighborhood-region activity coefficient;
a weight adjustment unit, configured to adjust the weight of the activity contrast between the center point and the neighborhood region of each color channel by means of an extended contrast sensitivity function;
an inverse wavelet transform unit, configured to apply an inverse wavelet transform to the weight-adjusted result to obtain the salient appearance feature image of the vehicle.
10. A vehicle re-identification system combining image saliency detection and a neural network, characterized by comprising:
a memory, configured to store a program;
a processor, configured to load the program to execute the vehicle re-identification method combining image saliency detection and a neural network according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810921051.6A CN109190513A (en) | 2018-08-14 | 2018-08-14 | In conjunction with the vehicle of saliency detection and neural network again recognition methods and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109190513A true CN109190513A (en) | 2019-01-11 |
Family
ID=64921408
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110321949A (en) * | 2019-06-29 | 2019-10-11 | 天津大学 | A kind of distributed car tracing method and system based on observed terminals network |
CN110619280A (en) * | 2019-08-23 | 2019-12-27 | 长沙千视通智能科技有限公司 | Vehicle heavy identification method and device based on deep joint discrimination learning |
CN110909785A (en) * | 2019-11-18 | 2020-03-24 | 西北工业大学 | Multitask Triplet loss function learning method based on semantic hierarchy |
CN111292530A (en) * | 2020-02-04 | 2020-06-16 | 浙江大华技术股份有限公司 | Method, device, server and storage medium for processing violation pictures |
CN111429484A (en) * | 2020-03-31 | 2020-07-17 | 电子科技大学 | Multi-target vehicle track real-time construction method based on traffic monitoring video |
CN111428688A (en) * | 2020-04-16 | 2020-07-17 | 成都旸谷信息技术有限公司 | Intelligent vehicle driving lane identification method and system based on mask matrix |
CN111540217A (en) * | 2020-04-16 | 2020-08-14 | 成都旸谷信息技术有限公司 | Mask matrix-based intelligent average vehicle speed monitoring method and system |
CN111738048A (en) * | 2020-03-10 | 2020-10-02 | 重庆大学 | Pedestrian re-identification method |
CN111738362A (en) * | 2020-08-03 | 2020-10-02 | 成都睿沿科技有限公司 | Object recognition method and device, storage medium and electronic equipment |
CN111881922A (en) * | 2020-07-28 | 2020-11-03 | 成都工业学院 | Insulator image identification method and system based on significance characteristics |
CN113723232A (en) * | 2021-08-16 | 2021-11-30 | 绍兴市北大信息技术科创中心 | Vehicle weight recognition method based on channel cooperative attention |
CN116503914A (en) * | 2023-06-27 | 2023-07-28 | 华东交通大学 | Pedestrian re-recognition method, system, readable storage medium and computer equipment |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105023008A (en) * | 2015-08-10 | 2015-11-04 | 河海大学常州校区 | Visual saliency and multiple characteristics-based pedestrian re-recognition method |
CN106529578A (en) * | 2016-10-20 | 2017-03-22 | 中山大学 | Vehicle brand model fine identification method and system based on depth learning |
Non-Patent Citations (4)
Title |
---|
NAILA MURRAY ET AL: "Low-Level Spatiochromatic Grouping for Saliency Estimation", IEEE Transactions on Pattern Analysis and Machine Intelligence |
NAILA MURRAY ET AL: "Saliency Estimation Using a Non-Parametric Low-Level Vision Model", CVPR 2011 |
NIKI MARTINEL ET AL: "Kernelized Saliency-Based Person Re-Identification Through Multiple Metric Learning", IEEE Transactions on Image Processing |
XIYING LI ET AL: "VRID-1: A Basic Vehicle Re-identification Dataset for Similar Vehicles", 2017 ITSC |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109190513A (en) | In conjunction with the vehicle of saliency detection and neural network again recognition methods and system | |
Zhang et al. | Deep-IRTarget: An automatic target detector in infrared imagery using dual-domain feature extraction and allocation | |
CN110378381B (en) | Object detection method, device and computer storage medium | |
US10198689B2 (en) | Method for object detection in digital image and video using spiking neural networks | |
CN111274916B (en) | Face recognition method and face recognition device | |
US7853072B2 (en) | System and method for detecting still objects in images | |
US10445602B2 (en) | Apparatus and method for recognizing traffic signs | |
Daniel Costea et al. | Word channel based multiscale pedestrian detection without image resizing and using only one classifier | |
CN107016357A (en) | A kind of video pedestrian detection method based on time-domain convolutional neural networks | |
CN105404886A (en) | Feature model generating method and feature model generating device | |
CN111709313B (en) | Pedestrian re-identification method based on local and channel combination characteristics | |
CN108764096B (en) | Pedestrian re-identification system and method | |
CN112580480B (en) | Hyperspectral remote sensing image classification method and device | |
CN107818299A (en) | Face recognition algorithms based on fusion HOG features and depth belief network | |
Yoo et al. | Fast training of convolutional neural network classifiers through extreme learning machines | |
CN108229434A (en) | A kind of vehicle identification and the method for careful reconstruct | |
Prasad et al. | Passive copy-move forgery detection using SIFT, HOG and SURF features | |
CN111639697B (en) | Hyperspectral image classification method based on non-repeated sampling and prototype network | |
Li et al. | Multi-view vehicle detection based on fusion part model with active learning | |
CN110348434A (en) | Camera source discrimination method, system, storage medium and calculating equipment | |
CN112396036A (en) | Method for re-identifying blocked pedestrians by combining space transformation network and multi-scale feature extraction | |
CN117157679A (en) | Perception network, training method of perception network, object recognition method and device | |
Mao et al. | Semi-dense stereo matching using dual CNNs | |
Ye et al. | Robust optical and SAR image matching using attention-enhanced structural features | |
CN117456325A (en) | Rice disease and pest detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20190111 |