CN108805016A - Head-and-shoulder region detection method and device - Google Patents
Head-and-shoulder region detection method and device
- Publication number: CN108805016A
- Application number: CN201810391398.4A
- Authority
- CN
- China
- Prior art keywords
- image
- detected
- training
- network layer
- head
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
- G06V20/597—Recognising the driver's state or behaviour, e.g. attention or drowsiness
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/29—Graphical models, e.g. Bayesian networks
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The present invention relates to the technical field of image detection, and in particular to a head-and-shoulder region detection method and device. The method comprises: obtaining an image to be detected; detecting the image to be detected with a trained network model comprising a feature extraction network layer, a candidate-box generation network layer, and a target detection network layer, to obtain a corresponding detection result, wherein the feature extraction network layer has the functions of extracting fused features, retaining original feature information, and adjusting the model size; and when it is determined from the detection result that a head-and-shoulder region exists in the image to be detected, determining the position of the head-and-shoulder region in the image to be detected. With this method, feature extraction from the original image to be detected is more comprehensive, so that the candidate-box generation network layer can generate candidate boxes from multi-scale features, ensuring the accuracy of detection results even for original images of poor shooting quality or blurred images.
Description
Technical field
The present invention relates to the technical field of image detection, and in particular to a head-and-shoulder region detection method and device.
Background art
With the rapid development of computer technology and big-data artificial intelligence, using image detection technology to achieve safe and efficient management has become a main direction of intelligent transportation. By detecting captured vehicle images, the driver's head-and-shoulder region is detected, and the driver's driving behavior is further judged from the detected head-and-shoulder region, so as to effectively supervise driving behavior, encourage drivers to form the conscious habit of safe and civilized driving, and ultimately reduce the probability of traffic accidents caused by unsafe and non-standard driving behavior.
At present, driver head-and-shoulder region detection uses traditional image detection methods: for example, the ACF algorithm or the DPM algorithm is used to determine the driver's head-and-shoulder region, and manual verification is then required to calibrate the determined region and to record its coordinates and dimensions in the original image.
However, current driver head-and-shoulder detection methods only perform well on clear, high-quality captured images; for captured images of poor quality or blurred images, the detection effect is poor and the detection result is inaccurate.
Summary of the invention
The purpose of the embodiments of the present invention is to provide a head-and-shoulder region detection method and device, so as to solve the prior-art problem that, for captured images of poor quality or blurred images, the detection effect is poor and the detection result is inaccurate.
The specific technical solutions provided in the embodiments of the present invention are as follows:
In a first aspect, the present invention provides a head-and-shoulder region detection method, comprising: obtaining an image to be detected; detecting the image to be detected with a trained network model comprising a feature extraction network layer, a candidate-box generation network layer, and a target detection network layer, to obtain a corresponding detection result, wherein the feature extraction network layer has the functions of extracting fused features, retaining original feature information, and adjusting the model size; and when it is determined from the detection result that a head-and-shoulder region exists in the image to be detected, determining the position of the head-and-shoulder region in the image to be detected.
With the head-and-shoulder region detection method provided by the present invention, an image to be detected is acquired and input into a network model comprising a feature extraction network layer, a candidate-box generation network layer, and a target detection network layer. The feature extraction network layer of the network model has the functions of extracting fused features, retaining original feature information, and adjusting the model size; the feature points it extracts from the image to be detected, after feature-fusion processing, are more comprehensive, so that the detection rate of the head-and-shoulder region in the image to be detected is higher, the detection result is more accurate, and the head-and-shoulder detection precision is improved.
Optionally, the feature extraction network layer is designed based on the characteristics of the ION network, the C-ReLU activation function, and the Inception-ResNet network.
Optionally, the candidate-box generation network is an RPN network, which is used to extract 15-20 anchor points from the feature map corresponding to the image to be detected and to perform convolution processing; the target detection network is an RCNN, which uses a global-pooling layer to pool the feature map output by the convolutional layer, so as to reduce over-fitting.
Optionally, before the obtaining of the image to be detected, the method further comprises:
training a pre-training network with a public dataset to obtain a corresponding pre-model, wherein the pre-model is used to initialize the parameters of the feature extraction network layer; and training the pre-model with a preset set of image samples to obtain the network model.
This optional mode characterizes that the pre-training network first needs to be trained with a public dataset (e.g., the ImageNet dataset) to obtain a corresponding pre-model, and that the pre-model is then trained with a preset set of image samples to obtain the finally trained network model. Training the pre-training network with public data accelerates its convergence and improves its generalization ability.
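The handover from pre-model to network model can be sketched as follows; this is a minimal illustration, not the patent's code — the dictionary representation of parameters and the `feature_extraction` name prefix are assumptions:

```python
def init_from_premodel(model_params, premodel_params, prefix="feature_extraction"):
    """Copy parameters that belong to the feature extraction network layer
    from a pre-model trained on a public dataset into the network model,
    leaving the remaining layers untouched."""
    initialized = []
    for name, value in premodel_params.items():
        if name.startswith(prefix) and name in model_params:
            model_params[name] = value
            initialized.append(name)
    return initialized
```

Only the feature-extraction parameters are carried over; the candidate-box generation and target detection layers still start from scratch and are learned during fine-tuning on the image sample set.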
Optionally, the training of the pre-training network with a public dataset comprises:
generating a corresponding pre-training network from the feature extraction network layer and the target detection network layer; and training the pre-training network with the public dataset.
This optional mode characterizes a specific way of training the pre-training network with a public dataset: first, the feature extraction network layer and the target detection network layer are combined to constitute the pre-training network; then the pre-training network is trained with the public dataset, which is equivalent to initializing the parameters of the feature extraction network layer.
Optionally, the training of the pre-model with a preset set of image samples comprises:
inputting the sample images in the image sample set into the pre-model in turn; training on each sample image in the image sample set in turn according to a preset first training count; when it is determined that the total training count reaches a first set threshold, training on each sample image in the image sample set in turn according to a preset second training count, wherein the preset first training count equals the sum of the second training count and a constant N, N being a positive integer greater than or equal to 1; and, when the current preset training count is less than or equal to N, training on each sample image in the image sample set in turn according to the current preset training count, thereby completing the training of the pre-model.
This optional mode characterizes that the pre-model is trained with multiple image samples under a corresponding training rule: each image sample is first trained with a larger loop count; specifically, once the predetermined number of passes at the initial stage of training is determined to be complete, the loop count is reduced, and each image sample is trained with the reduced count, until the loop count meets the stopping condition and the pre-model training is complete. Naturally, during training the parameters of each network layer of the pre-model are adaptively adjusted according to the training results.
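The staged schedule characterized above — a larger loop count at the initial stage, reduced by the constant N after each full pass until the count drops to N or below — can be sketched as follows; the function names and the `train_step` callback are illustrative assumptions, not the patent's code:

```python
def staged_training(samples, first_count, step_n, train_step):
    """Train each sample with a per-sample loop count that starts at
    first_count and is reduced by the constant N (step_n) after each
    full pass, until the current count drops to N or below."""
    count = first_count
    history = []
    while count > step_n:
        for sample in samples:
            for _ in range(count):
                train_step(sample)
        history.append(count)
        count -= step_n          # second count = first count - N, and so on
    # final pass with the remaining (<= N) loop count completes training
    for sample in samples:
        for _ in range(count):
            train_step(sample)
    history.append(count)
    return history
```

With a first count of 5 and N = 2, for example, the schedule runs 5, then 3, then 1 passes per sample, front-loading the training effort on the early stages.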
Optionally, the detecting of the image to be detected with the trained network model comprising a feature extraction network layer, a candidate-box generation network layer, and a target detection network layer to obtain a corresponding detection result comprises:
inputting the image to be detected into the network model; extracting, by the feature extraction network layer, the feature points of the image to be detected, and inputting the feature map containing the feature points into the candidate-box generation network layer; and performing, by the candidate-box generation network layer, convolution processing on the feature map with a convolution kernel of a preset size to obtain a corresponding multi-dimensional vector, generating multiple candidate boxes of different sizes containing classification weights and/or region position information, and mapping them into the image to be detected.
This optional mode characterizes that, in the actual process of head-and-shoulder region detection with the trained network model, the feature extraction network layer designed based on the ION network, the C-ReLU activation function, and the Inception-ResNet network extracts the feature points of the image to be detected; the feature map containing the feature points is input into the candidate-box generation network layer, which first performs convolution processing on the feature map with a 1 x 1 convolution kernel to obtain a corresponding multi-dimensional vector, and then generates candidate boxes of multiple different scales containing classification and/or position information and maps them into the original image. Generating multiple candidate boxes over multiple scales and aspect ratios meets the multi-scale needs of head-and-shoulder images.
Optionally, the determining, based on the detection result, of the position of the head-and-shoulder region in the image to be detected comprises:
computing, for each of the multiple candidate boxes of different sizes containing classification weights and/or region position information, the overlap between the head-and-shoulder region in the candidate box and the candidate box itself; and taking the region position of the candidate box whose overlap with the head-and-shoulder region exceeds a preset value as the position of the head-and-shoulder region in the image to be detected.
This optional mode characterizes that a candidate box whose overlap between the actual head-and-shoulder region and the box itself exceeds the set value is selected from the multiple candidate boxes, and the region position of the selected candidate box is taken as the region position, in the original image, of the detected head-and-shoulder region. Naturally, if two or more boxes meet the condition, the region position of the candidate box with the largest overlap is taken as the region position of the detected head-and-shoulder region in the original image.
Optionally, the head-and-shoulder region detection method further comprises:
marking the category and/or position of the head-and-shoulder region in the image to be detected.
This optional mode characterizes that, according to the detection result, the category and coordinates of the head-and-shoulder region can also be marked in the original image with a tool.
In a second aspect, the present invention provides a head-and-shoulder region detection device, comprising: an acquiring unit for obtaining an image to be detected; a detection unit for detecting the image to be detected with a trained network model comprising a feature extraction network layer, a candidate-box generation network layer, and a target detection network layer, to obtain a corresponding detection result, wherein the feature extraction network layer has the functions of extracting fused features, retaining original feature information, and adjusting the model size; and a determination unit for, when it is determined from the detection result that a head-and-shoulder region exists in the image to be detected, determining the position of the head-and-shoulder region in the image to be detected.
Optionally, the feature extraction network layer is designed based on the characteristics of the ION network, the C-ReLU activation function, and the Inception-ResNet network.
Optionally, the candidate-box generation network is an RPN network, which is used to extract 15-20 anchor points from the feature map corresponding to the image to be detected and to perform convolution processing; the target detection network is an RCNN, which uses a global-pooling layer to pool the feature map output by the convolutional layer, so as to reduce over-fitting.
Optionally, the head-and-shoulder region detection device further comprises:
a training unit for training a pre-training network with a public dataset to obtain a corresponding pre-model, wherein the pre-model is used to initialize the parameters of the feature extraction network layer; and for training the pre-model with a preset set of image samples to obtain the network model.
Optionally, when the pre-training network is trained with a public dataset, the training unit is used for:
generating a corresponding pre-training network from the feature extraction network layer and the target detection network layer; and training the pre-training network with the public dataset.
Optionally, when the pre-model is trained with a preset set of image samples, the training unit is used for:
inputting the sample images in the image sample set into the pre-model in turn; training on each sample image in the image sample set in turn according to a preset first training count; when it is determined that the total training count reaches a first set threshold, training on each sample image in the image sample set in turn according to a preset second training count, wherein the preset first training count equals the sum of the second training count and a constant N, N being a positive integer greater than or equal to 1; and, when the current preset training count is less than or equal to N, training on each sample image in the image sample set in turn according to the current preset training count, thereby completing the training of the pre-model.
Optionally, when the image to be detected is detected with the trained network model comprising a feature extraction network layer, a candidate-box generation network layer, and a target detection network layer to obtain a corresponding detection result, the detection unit is used for:
inputting the image to be detected into the network model; extracting, by the feature extraction network layer, the feature points of the image to be detected, and inputting the feature map containing the feature points into the candidate-box generation network layer; and performing, by the candidate-box generation network layer, convolution processing on the feature map with a convolution kernel of a preset size to obtain a corresponding multi-dimensional vector, generating multiple candidate boxes of different sizes containing classification weights and/or region position information, and mapping them into the image to be detected.
Optionally, when the position of the head-and-shoulder region in the image to be detected is determined based on the detection result, the determination unit is used for:
computing, for each of the multiple candidate boxes of different sizes containing classification weights and/or region position information, the overlap between the head-and-shoulder region in the candidate box and the candidate box itself; and taking the region position of the candidate box whose overlap with the head-and-shoulder region exceeds a preset value as the position of the head-and-shoulder region in the image to be detected.
Optionally, the head-and-shoulder region detection device further comprises:
a marking unit for marking the category and/or position of the head-and-shoulder region in the image to be detected.
In a third aspect, the present invention provides a computing device, comprising: a memory for storing program instructions; and a processor for calling the program instructions stored in the memory and executing, according to the obtained program, any one of the methods of the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium storing computer-executable instructions for causing a computer to execute any one of the methods of the first aspect.
The beneficial effects of the present invention are as follows:
In summary, in the embodiments of the present invention, an image to be detected is obtained; the image to be detected is detected with a trained network model comprising a feature extraction network layer, a candidate-box generation network layer, and a target detection network layer, to obtain a corresponding detection result, wherein the feature extraction network layer has the functions of extracting fused features, retaining original feature information, and adjusting the model size; and when it is determined from the detection result that a head-and-shoulder region exists in the image to be detected, the position of the head-and-shoulder region in the image to be detected is determined.
With the above method, the feature extraction network layer of the network model has the functions of extracting fused features, retaining original feature information, and adjusting the model size. When feature extraction is performed on the image to be detected, the extraction is more comprehensive, so that the candidate-box generation network layer can generate candidate boxes from multi-scale features. This ensures the accuracy of detection results for original images of poor shooting quality or blurred images, and avoids the problem of a low detection rate and inaccurate detection results caused by poor shooting quality of the original image itself.
Description of the drawings
Fig. 1 is a schematic flowchart of a head-and-shoulder region detection method in an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a network model training method in an embodiment of the present invention;
Fig. 3 is a schematic diagram of multiple candidate boxes generated by the network model in an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a head-and-shoulder region detection device in an embodiment of the present invention.
Detailed description of the embodiments
To facilitate understanding of the technical solutions introduced in the embodiments of the present invention, definitions of some terms are given first:
1. An image to be detected refers to an original video frame or image shot directly by a camera. In practical applications, factors such as weather, environment, and illumination may make the shooting quality of the original image shot directly by the camera unstable.
2. A head-and-shoulder region refers to the region, in the original video frame or image shot by the camera, where the head and shoulders of the vehicle driver are located. In practical applications, whether the driver has an illegal driving behavior can be judged from the driver's head-and-shoulder region.
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
First, the term "and/or" in the embodiments of the present invention merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B can indicate three cases: A exists alone, both A and B exist, and B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects before and after it.
The solutions of the present invention will be described in detail below through specific embodiments; of course, the present invention is not limited to the following embodiments.
As shown in Fig. 1, in an embodiment of the present invention, the detailed flow of a head-and-shoulder region detection method is as follows:
Step 100: Obtain an image to be detected.
In practical applications, in order to monitor vehicle behavior and record drivers' illegal driving behavior, cameras are installed at many places on existing roads to shoot video or images when vehicles pass. In the embodiments of the present invention, the video frames or images collected by road cameras can then be used as images to be detected. Since the image to be detected shot by the camera contains a vehicle, in order to determine whether the driver of the vehicle has an illegal driving behavior (e.g., smoking, not wearing a seat belt), the position of the driver's head-and-shoulder region needs to be determined from the image to be detected, and whether the driver has an illegal driving behavior is then judged from the determined position of the driver's head-and-shoulder region.
Step 110: Detect the image to be detected with a trained network model comprising a feature extraction network layer, a candidate-box generation network layer, and a target detection network layer, to obtain a corresponding detection result, wherein the feature extraction network layer has the functions of extracting fused features, retaining original feature information, and adjusting the model size.
Specifically, in the embodiments of the present invention, a video frame or image shot by a road camera serves as the image to be detected; after the image to be detected is obtained, it is input into the trained detection network model, which detects the image and obtains a corresponding detection result.
In an embodiment of the present invention, when step 110 is executed, the image to be detected is input into the network model; the feature extraction network layer extracts the feature points of the image to be detected, and the feature map containing the feature points is input into the candidate-box generation network layer; the candidate-box generation network layer performs convolution processing on the feature map with a convolution kernel of a preset size to obtain a corresponding multi-dimensional vector, generates multiple candidate boxes of different sizes containing classification weights and/or region position information, and maps them into the image to be detected.
For example, suppose an image to be detected 1 is input into the network model. The feature extraction network layer in the network model extracts the feature points of image to be detected 1 and outputs a feature map 1 containing these feature points and corresponding to image to be detected 1. Feature map 1 serves as the input of the candidate-box generation network layer in the network model, which, based on the feature points in feature map 1, generates candidate boxes of multiple different sizes, e.g., candidate boxes with aspect ratios 1:2, 1:1, and 2:1 (candidate boxes of other aspect ratios are of course also possible). Further, the generated candidate boxes of multiple different sizes are mapped into image to be detected 1.
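The generation of candidate boxes over several sizes and the aspect ratios 1:2, 1:1, and 2:1 can be sketched as below; the base size, the scale set, and the centering convention are illustrative assumptions, not values from the patent:

```python
import math

def generate_anchors(center_x, center_y, base_size=64,
                     scales=(1.0, 2.0), aspect_ratios=(0.5, 1.0, 2.0)):
    """Generate candidate boxes of several sizes and aspect ratios
    (height:width = 1:2, 1:1, 2:1) around one feature-map point,
    mapped back to original-image coordinates as (x1, y1, x2, y2).
    Each box keeps the area of its scale while varying the ratio."""
    boxes = []
    for s in scales:
        area = (base_size * s) ** 2
        for ar in aspect_ratios:
            w = math.sqrt(area / ar)   # ar = height / width
            h = w * ar
            boxes.append((center_x - w / 2, center_y - h / 2,
                          center_x + w / 2, center_y + h / 2))
    return boxes
```

With two scales and three ratios this yields six boxes per position; repeating it at every feature-map position gives the multi-scale, multi-aspect-ratio coverage the example describes.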
As can be seen from the above description, in the embodiments of the present invention, a corresponding detection network model needs to be designed in advance. In the embodiments of the present invention, the detection network model comprises at least a feature extraction network layer, a candidate-box generation network layer, and a target detection network layer, wherein the feature extraction network layer is used to extract the feature points in the image to be detected, the candidate-box generation network is used to generate candidate boxes, and the target detection network is used to determine, from the generated candidate boxes, the qualifying candidate box where the head-and-shoulder region is located.
Further, in the embodiments of the present invention, the feature extraction network layer of the detection network model is designed by combining a fused-feature extraction module, an original-feature retention module, and a model-size adjustment module. In the embodiments of the present invention, the fused-feature extraction module can be the ION network and the Inception-ResNet network, the original-feature retention module can be the C-ReLU activation function, and the model-size adjustment module can be depth-wise and point-wise modules that reduce the model size. The so-called ION network, i.e., Inside-Outside Net, is also based on region proposals. On the basis of obtaining candidate regions, in order to further improve the prediction precision in each candidate region of interest (ROI), ION combines the information inside the ROI with the information outside the ROI, and has two innovations: first, a spatial recurrent neural network is used to combine context features, rather than using only the local features inside the ROI for prediction; second, the features obtained by the convolutions of different convolutional layers are connected as a multi-scale feature. In ION, RNNs are applied independently in the four directions up, down, left, and right, and their outputs are connected and combined into one feature output; the features obtained by performing this process twice serve as context features, which are then connected with the output features of several preceding convolutional layers, yielding features that contain both contextual information and multi-scale information.
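The ION-style fusion described above — four directional passes whose outputs are combined as context features and concatenated with convolutional features — can be sketched with NumPy as below. The cumulative sums only stand in for the learned spatial RNNs (a real ION uses recurrent units with trained weights); the single-channel maps are also an illustrative simplification:

```python
import numpy as np

def directional_context(feat):
    """Stand-in for ION's spatial RNN: accumulate the feature map in
    the four directions (down, up, right, left) and stack the four
    outputs as context channels. Cumulative sums only illustrate the
    data flow of the directional passes, not the learned recurrence."""
    down  = np.cumsum(feat, axis=0)
    up    = np.cumsum(feat[::-1], axis=0)[::-1]
    right = np.cumsum(feat, axis=1)
    left  = np.cumsum(feat[:, ::-1], axis=1)[:, ::-1]
    return np.stack([down, up, right, left])

def fuse_features(conv_maps, feat):
    """Connect the multi-scale convolutional feature maps with the
    four-direction context features along the channel axis, yielding
    a feature that carries both contextual and multi-scale information."""
    return np.concatenate([np.stack(conv_maps), directional_context(feat)])
```

Because every position in a directional pass has seen the whole row or column behind it, the fused feature at each location depends on context well outside any single ROI, which is the point of the inside-outside design.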
The candidate-frame generation network layer of the detection network model is a Region Proposal Network (RPN). Its input is the output of the feature extraction network layer (i.e. the feature map corresponding to the image to be detected). In the embodiment of the present invention, it is used to extract 15-20 anchor points from the feature map corresponding to the image to be detected and to perform convolution processing. Specifically, in order to reduce the amount of computation, the convolution kernel size of the candidate-frame generation network layer is set to 1 × 1; convolution is performed on the feature map containing the feature points to obtain vectors of the corresponding dimensions, and the classification weights and region position information of multiple candidate frames are output.
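A 1 × 1 convolution of this kind reduces to a per-pixel matrix multiply over the feature map. A minimal NumPy sketch, with random weights standing in for the trained kernel and k = 18 anchors chosen from the 15-20 range mentioned above:

```python
import numpy as np

def rpn_head(feature_map, w_cls, w_reg):
    """A 1 x 1 convolution is a per-pixel matrix multiply: at every feature
    point it predicts, for each of k anchors, 2 class weights
    (head-shoulder / background) and 4 region position values."""
    H, W, C = feature_map.shape
    flat = feature_map.reshape(-1, C)        # (H*W, C)
    cls = (flat @ w_cls).reshape(H, W, -1)   # classification weights
    reg = (flat @ w_reg).reshape(H, W, -1)   # region position information
    return cls, reg

k, C = 18, 64                    # k anchors per point, within the 15-20 range
fmap = np.random.rand(10, 10, C)
cls, reg = rpn_head(fmap, np.random.rand(C, 2 * k), np.random.rand(C, 4 * k))
# cls has 2k = 36 channels, reg has 4k = 72 channels
```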
The target detection network layer of the detection network model may be an R-CNN network. The input of the target detection network is the convolutional-layer information of each candidate frame after it is mapped back into the original image to be detected, and the detection result is obtained using one convolutional layer and one global-pooling layer.
In practical applications, in order to reduce the number of network training parameters and the degree of over-fitting of the trained model, pooling needs to be applied to the feature map output by the convolutional layer. Common pooling modes are max pooling and average pooling: max pooling takes the maximum value within the pooling window as the pooled value, while average pooling takes the average value over the pooling region as the pooled value. In the embodiment of the present invention, after processing by one convolutional layer and one global-pooling layer, the R-CNN network outputs the detection result of the head and shoulder region in the image to be detected.
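The two pooling modes can be illustrated directly. `pool2d` below is a toy non-overlapping implementation, and global average pooling (as used after the convolutional layer here) reduces each feature map to a single value:

```python
import numpy as np

def pool2d(x, size, mode="max"):
    """Non-overlapping pooling with a `size` x `size` window over a 2-D map."""
    H, W = x.shape
    x = x[:H - H % size, :W - W % size]   # drop ragged edges
    blocks = x.reshape(H // size, size, W // size, size).swapaxes(1, 2)
    return blocks.max(axis=(2, 3)) if mode == "max" else blocks.mean(axis=(2, 3))

x = np.array([[1., 2., 5., 6.],
              [3., 4., 7., 8.],
              [0., 0., 1., 1.],
              [0., 4., 1., 1.]])
mx = pool2d(x, 2, "max")    # [[4., 8.], [4., 1.]]
av = pool2d(x, 2, "avg")    # [[2.5, 6.5], [1., 1.]]
gap = x.mean()              # global average pooling: 2.75
```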
In the embodiment of the present invention, after the pre-training network has been designed, it needs to be trained. Specifically, referring to Fig. 2, in the embodiment of the present invention, the detailed process of training the pre-training network is as follows:
Step 200: Train the pre-training network using a common data set to obtain a corresponding pre-model.
Specifically, in the embodiment of the present invention, when executing step 200, a corresponding pre-training network is generated from the above-mentioned feature extraction network layer and target detection network layer, and is trained using a common data set to obtain the corresponding pre-model.
In practical applications, the common data set is the ImageNet data set. ImageNet is one of the largest data sets currently used in the field of deep learning for images, and most research work on image classification, localization, detection and the like is based on this data set.
In the embodiment of the present invention, the ImageNet data set is used to train the pre-training network composed of the feature extraction network layer and the R-CNN network. The input is a training set containing several labels, and the output is a pre-model with a classification function. When the pre-model is later used in the overall training of the network model, it initializes the parameters of the feature extraction network layer; such fine-tuning accelerates network convergence and improves the generalization ability of the network.
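A sketch of what this initialization step amounts to: only the parameters whose names belong to the feature extraction network layer are copied from the pre-model, while the detection-specific layers keep their fresh initialization. The parameter names and the flat dict representation are hypothetical; real frameworks do this through their own state-dict mechanisms:

```python
import numpy as np

# Hypothetical flat parameter dicts standing in for framework state-dicts.
pretrained = {"feature.conv1": np.ones(3),   # from ImageNet pre-training
              "feature.conv2": np.ones(3),
              "classifier.fc": np.ones(2)}   # classification head, not reused
model = {"feature.conv1": np.zeros(3),       # detection model, fresh init
         "feature.conv2": np.zeros(3),
         "rpn.conv": np.zeros(4),
         "rcnn.fc": np.zeros(2)}

def init_from_pretrained(model_params, pretrained_params, prefix="feature."):
    """Copy only the feature-extraction weights from the pre-model; the
    RPN / R-CNN layers keep their fresh initialization and are learned
    during fine-tuning."""
    copied = []
    for name in model_params:
        if name.startswith(prefix) and name in pretrained_params:
            model_params[name] = pretrained_params[name]
            copied.append(name)
    return copied

copied = init_from_pretrained(model, pretrained)
# copied == ["feature.conv1", "feature.conv2"]; rpn/rcnn params untouched
```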
Step 210: Train the above-mentioned pre-model using a preset image sample set to obtain the above-mentioned network model.
Specifically, in the embodiment of the present invention, when executing step 210, the sample images in the image sample set are input into the pre-model one by one. Each sample image in the image sample set is trained in turn according to a preset first training count; when it is determined that the total number of training iterations reaches a first set threshold, each sample image in the image sample set is trained in turn according to a preset second training count, wherein the preset first training count equals the sum of the second training count and a constant N, N being a positive integer greater than or equal to 1. This continues until the currently preset training count is less than or equal to N, at which point each sample image in the image sample set is trained in turn according to the currently preset training count, completing the training of the pre-model.
In the embodiment of the present invention, in a preferred implementation, when the pre-model is trained using the preset image sample set, each image sample is trained 20-30 times, and every time the number of iterations reaches 5000-7000, the learning rate is reduced by an order of magnitude, until the learning rate reaches a set threshold; the training of the pre-model is then complete and the trained network model is obtained.
For example, suppose the image sample set contains 1000 image samples (the sizes of the image samples may be identical or different). The training rules can then be set as follows: the first training count is 10, the second training count is 6, the third training count is 2, the first set threshold is 10000 iterations, the second set threshold is 20000 iterations, and the third set threshold is 30000 iterations. First, each image sample is trained 10 times; then, when it is determined that the total number of iterations has reached 10000, each image sample is trained 6 times; finally, when the total number of iterations has reached 20000, each image sample is trained 2 times, until the total reaches 30000 iterations, completing the training of the network model.
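The pass-count schedule of this example can be simulated as follows (the learning-rate decay is omitted; note that when a threshold is crossed mid-sample the running total may slightly overshoot it):

```python
def training_schedule(num_samples=1000, passes=(10, 6, 2),
                      thresholds=(10000, 20000, 30000)):
    """Simulate the example schedule: 10 passes per sample until 10000
    total iterations, then 6 passes until 20000, then 2 passes until
    30000 total iterations."""
    total = 0
    for n_passes, limit in zip(passes, thresholds):
        while total < limit:
            for _ in range(num_samples):   # one sweep over the sample set
                total += n_passes          # this sample trained n_passes times
                if total >= limit:
                    break
    return total

training_schedule()   # returns 30000
```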
Of course, the above training method is only one specific implementation. The training method of the present invention gradually decreases the number of training passes over the same sample as the total number of iterations accumulates, which ensures that the network model converges well. In the embodiment of the present invention, the way in which the network model is trained with the preset image sample set includes, but is not limited to, the above.
Step 120: When it is judged, based on the above detection result, that a head and shoulder region exists in the image to be detected, determine the position of the head and shoulder region in the image to be detected.
Specifically, in the embodiment of the present invention, when executing step 120, according to the above-mentioned candidate frames of multiple different sizes containing classification weights and/or region position information, the overlap between the head and shoulder region and the corresponding candidate frame is calculated separately for each candidate frame; the region position of a candidate frame whose overlap with the head and shoulder region exceeds a preset value is taken as the position of the head and shoulder region in the image to be detected.
In practical applications, since the candidate frames of multiple different sizes contain classification weights and region position information, the candidate frame that locates the head and shoulder region most accurately can be selected from the multiple candidate frames whose classification weights characterize them as head and shoulder regions. Specifically, the overlap between the head and shoulder region and each corresponding candidate frame can be calculated separately, and the precise position information of the head and shoulder region determined from these overlaps.
In the embodiment of the present invention, in a preferred implementation, after the candidate-frame generation network layer generates multiple candidate frames based on the feature map, the several candidate frames corresponding to the head and shoulder region are determined from the classification weights contained in the candidate frames; the overlap between the head and shoulder region and each of these candidate frames is calculated separately; and the position information of a candidate frame whose overlap with the head and shoulder region is greater than or equal to a preset value (for example, 0.8) determines the specific position of the head and shoulder region.
For example, referring to Fig. 3, assume that the candidate frames whose classification weights characterize them as head and shoulder regions are candidate frame 1, candidate frame 2 and candidate frame 3, with aspect ratios of 1:2, 1:1 and 2:1 respectively. With the head and shoulder region as shown, it follows that the overlap between the head and shoulder region and candidate frame 1 is 0.5, the overlap with candidate frame 2 is 0.8, and the overlap with candidate frame 3 is 0.5; it can therefore be determined that the position information of candidate frame 2 is the position information of the head and shoulder region.
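The overlap used here is the standard intersection-over-union. A plain-Python sketch of the selection step, with hypothetical box coordinates in (x1, y1, x2, y2) form and the 0.8 threshold from the example:

```python
def iou(a, b):
    """Intersection-over-union (overlap) of two (x1, y1, x2, y2) boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def locate_head_shoulder(region, candidates, threshold=0.8):
    """Keep candidate frames whose overlap with the head and shoulder
    region meets the threshold; their positions give the region's position."""
    return [c for c in candidates if iou(region, c) >= threshold]

region = (10, 10, 50, 90)            # hypothetical head and shoulder region
candidates = [(0, 0, 40, 80),        # aspect ratio 1:2, low overlap
              (12, 12, 52, 92),      # near-exact match, overlap ~0.86
              (10, 30, 90, 70)]      # aspect ratio 2:1, low overlap
locate_head_shoulder(region, candidates)   # keeps only the second frame
```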
Further, in the embodiment of the present invention, the class and/or position of the head and shoulder region are marked in the image to be detected.
In practical applications, after the position information of the head and shoulder region has been determined in the image to be detected, an annotation tool can also be used to mark the class and coordinate information of the head and shoulder region in the image to be detected.
Based on the above embodiments, referring to Fig. 4, in the embodiment of the present invention, a head and shoulder region detection device includes at least an acquiring unit 40, a detection unit 41 and a determination unit 42, wherein:
the acquiring unit 40 is configured to obtain an image to be detected;
the detection unit 41 is configured to detect the image to be detected based on a trained network model comprising a feature extraction network layer, a candidate-frame generation network layer and a target detection network layer, to obtain a corresponding detection result, wherein the feature extraction network layer has the functions of extracting fused features, retaining original feature information and adjusting the model size;
the determination unit 42 is configured to determine, when it is judged based on the detection result that a head and shoulder region exists in the image to be detected, the position of the head and shoulder region in the image to be detected.
Optionally, the feature extraction network layer is designed based on the characteristics of the ION network, the C-ReLU activation function and the Inception-ResNet network.
Optionally, the candidate-frame generation network is an RPN network, used to extract 15-20 anchor points from the feature map corresponding to the image to be detected and to perform convolution processing; the target detection network is an R-CNN, which applies global pooling to the feature map output by the convolutional layer in order to reduce over-fitting.
Optionally, the head and shoulder region detection device further comprises:
a training unit, configured to train a pre-training network using a common data set to obtain a corresponding pre-model, wherein the pre-model is used to initialize the parameters of the feature extraction network layer; and to train the pre-model using a preset image sample set to obtain the network model.
Optionally, when training the pre-training network using the common data set, the training unit is configured to: generate a corresponding pre-training network from the feature extraction network layer and the target detection network layer, and train the pre-training network using the common data set.
Optionally, when training the pre-model using the preset image sample set, the training unit is configured to:
input the sample images in the image sample set into the pre-model one by one; train each sample image in the image sample set in turn according to a preset first training count, and, when it is determined that the total number of training iterations reaches a first set threshold, train each sample image in the image sample set in turn according to a preset second training count, wherein the preset first training count equals the sum of the second training count and a constant N, N being a positive integer greater than or equal to 1; until the currently preset training count is less than or equal to N, train each sample image in the image sample set in turn according to the currently preset training count, completing the training of the pre-model.
Optionally, when detecting the image to be detected based on the trained network model comprising the feature extraction network layer, the candidate-frame generation network layer and the target detection network layer to obtain the corresponding detection result, the detection unit 41 is configured to:
input the image to be detected into the network model; the feature extraction network layer extracts the feature points of the image to be detected and inputs the feature map containing the feature points into the candidate-frame generation network layer; the candidate-frame generation network layer performs convolution on the feature map with a convolution kernel of a preset size to obtain corresponding multi-dimensional vectors, generates multiple candidate frames of different sizes containing classification weights and/or region position information, and maps them into the image to be detected.
Optionally, when determining the position of the head and shoulder region in the image to be detected based on the detection result, the determination unit 42 is configured to:
according to the multiple candidate frames of different sizes containing classification weights and/or region position information, calculate separately the overlap between the head and shoulder region and the corresponding candidate frame for each candidate frame; and take the region position of a candidate frame whose overlap with the head and shoulder region exceeds a preset value as the position of the head and shoulder region in the image to be detected.
Optionally, the head and shoulder region detection device further comprises:
a marking unit, configured to mark the class and/or position of the head and shoulder region in the image to be detected.
In conclusion in the embodiment of the present invention, image to be detected is obtained;It is completed based on training, includes feature extraction
Network layer, candidate frame generates network layer and the network model of target detection network layer is detected above-mentioned image to be detected, obtains
To corresponding testing result, wherein features described above, which extracts network layer, has extraction fusion feature, retains primitive character information and tune
The function of integral mould size;Judge to determine above-mentioned there are when head and shoulder region in above-mentioned figure to be detected based on above-mentioned testing result
Position of the head and shoulder region in above-mentioned image to be detected.
Using the above method, the feature extraction network layer of the feature extraction network layer network model in network model, which has, to be carried
Fusion feature is taken, primitive character information is retained and adjusts the function of model size, when carrying out feature extraction to image to be detected,
Feature extraction more comprehensively, so that candidate frame generates network layer can carry out the generation of candidate frame from Analysis On Multi-scale Features, is protected
It is poor to shooting quality to have demonstrate,proved, the accuracy of the testing result of the original image of fogging image, avoids since original image itself is clapped
It is bad and cause verification and measurement ratio not high to take the photograph quality, the problem of testing result inaccuracy.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical memory, etc.) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system) and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce a device for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, such that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, those skilled in the art, once they learn of the basic inventive concept, may make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be interpreted as covering the preferred embodiments and all changes and modifications that fall within the scope of the present invention.
Obviously, those skilled in the art can make various modifications and variations to the embodiments of the present invention without departing from the spirit and scope of the embodiments of the present invention. Thus, if these modifications and variations of the embodiments of the present invention fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include these modifications and variations.
Claims (12)
1. A head and shoulder region detection method, characterized by comprising:
obtaining an image to be detected;
detecting the image to be detected based on a trained network model comprising a feature extraction network layer, a candidate-frame generation network layer and a target detection network layer, to obtain a corresponding detection result, wherein the feature extraction network layer has the functions of extracting fused features, retaining original feature information and adjusting the model size;
when it is judged, based on the detection result, that a head and shoulder region exists in the image to be detected, determining the position of the head and shoulder region in the image to be detected.
2. The method according to claim 1, characterized in that the feature extraction network layer is designed based on the characteristics of the ION network, the C-ReLU activation function and the Inception-ResNet network.
3. The method according to claim 1, characterized in that the candidate-frame generation network is an RPN network, used to extract 15-20 anchor points from the feature map corresponding to the image to be detected and to perform convolution processing;
the target detection network is an R-CNN, which applies global pooling to the feature map output by the convolutional layer in order to reduce over-fitting.
4. The method according to claim 2, characterized in that, before obtaining the image to be detected, the method further comprises:
training a pre-training network using a common data set to obtain a corresponding pre-model, wherein the pre-model is used to initialize the parameters of the feature extraction network layer;
training the pre-model using a preset image sample set to obtain the network model.
5. The method according to claim 4, characterized in that training the pre-training network using the common data set comprises:
generating a corresponding pre-training network from the feature extraction network layer and the target detection network layer; and
training the pre-training network using the common data set.
6. The method according to claim 5, characterized in that training the pre-model using the preset image sample set comprises:
inputting the sample images in the image sample set into the pre-model one by one;
training each sample image in the image sample set in turn according to a preset first training count, and, when it is determined that the total number of training iterations reaches a first set threshold, training each sample image in the image sample set in turn according to a preset second training count, wherein the preset first training count equals the sum of the second training count and a constant N, N being a positive integer greater than or equal to 1;
until the currently preset training count is less than or equal to N, training each sample image in the image sample set in turn according to the currently preset training count, completing the training of the pre-model.
7. The method according to any one of claims 1 to 6, characterized in that detecting the image to be detected based on the trained network model comprising the feature extraction network layer, the candidate-frame generation network layer and the target detection network layer, to obtain the corresponding detection result, comprises:
inputting the image to be detected into the network model;
the feature extraction network layer extracting the feature points of the image to be detected, and inputting the feature map containing the feature points into the candidate-frame generation network layer;
the candidate-frame generation network layer performing convolution on the feature map with a convolution kernel of a preset size to obtain corresponding multi-dimensional vectors, generating multiple candidate frames of different sizes containing classification weights and/or region position information, and mapping them into the image to be detected.
8. The method according to claim 7, characterized in that determining the position of the head and shoulder region in the image to be detected based on the detection result comprises:
according to the multiple candidate frames of different sizes containing classification weights and/or region position information, calculating separately the overlap between the head and shoulder region and the corresponding candidate frame for each candidate frame;
taking the region position of a candidate frame whose overlap with the head and shoulder region exceeds a preset value as the position of the head and shoulder region in the image to be detected.
9. The method according to claim 1, characterized by further comprising:
marking the class and/or position of the head and shoulder region in the image to be detected.
10. A head and shoulder region detection device, characterized by comprising:
an acquiring unit, configured to obtain an image to be detected;
a detection unit, configured to detect the image to be detected based on a trained network model comprising a feature extraction network layer, a candidate-frame generation network layer and a target detection network layer, to obtain a corresponding detection result, wherein the feature extraction network layer has the functions of extracting fused features, retaining original feature information and adjusting the model size;
a determination unit, configured to determine, when it is judged based on the detection result that a head and shoulder region exists in the image to be detected, the position of the head and shoulder region in the image to be detected.
11. A computing device, characterized by comprising:
a memory, configured to store program instructions;
a processor, configured to call the program instructions stored in the memory and to execute, according to the obtained program, the method according to any one of claims 1 to 9.
12. A computer-readable storage medium, characterized in that the computer-readable storage medium stores computer-executable instructions, the computer-executable instructions being used to cause a computer to execute the method according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810391398.4A CN108805016B (en) | 2018-04-27 | 2018-04-27 | Head and shoulder area detection method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108805016A true CN108805016A (en) | 2018-11-13 |
CN108805016B CN108805016B (en) | 2022-02-08 |
Family
ID=64094003
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810391398.4A Active CN108805016B (en) | 2018-04-27 | 2018-04-27 | Head and shoulder area detection method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108805016B (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109902610A (en) * | 2019-02-22 | 2019-06-18 | 杭州飞步科技有限公司 | Traffic sign recognition method and device |
CN110084184A (en) * | 2019-04-25 | 2019-08-02 | 浙江吉利控股集团有限公司 | Safety belt unfastening detection system and method based on image processing technology |
CN111007734A (en) * | 2019-12-12 | 2020-04-14 | 广东美的白色家电技术创新中心有限公司 | Control method of household appliance, control device and storage device |
CN111062249A (en) * | 2019-11-11 | 2020-04-24 | 北京百度网讯科技有限公司 | Vehicle information acquisition method and device, electronic equipment and storage medium |
CN111191501A (en) * | 2019-11-20 | 2020-05-22 | 恒大智慧科技有限公司 | Automatic early warning method, device and medium for tourist gathering behavior in intelligent scenic spot |
CN111353342A (en) * | 2018-12-21 | 2020-06-30 | 浙江宇视科技有限公司 | Shoulder recognition model training method and device, and people counting method and device |
CN111428875A (en) * | 2020-03-11 | 2020-07-17 | 北京三快在线科技有限公司 | Image recognition method and device and corresponding model training method and device |
CN112784244A (en) * | 2019-11-11 | 2021-05-11 | 北京君正集成电路股份有限公司 | Method for improving overall efficiency of target detection by utilizing target verification |
CN113297910A (en) * | 2021-04-25 | 2021-08-24 | 云南电网有限责任公司信息中心 | Distribution network field operation safety belt identification method |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2924611A1 (en) * | 2014-03-28 | 2015-09-30 | Xerox Corporation | Extending data-driven detection to the prediction of object part locations |
CN105844234A (en) * | 2016-03-21 | 2016-08-10 | 商汤集团有限公司 | People counting method and device based on head shoulder detection |
CN106845406A (en) * | 2017-01-20 | 2017-06-13 | 深圳英飞拓科技股份有限公司 | Head and shoulder detection method and device based on multitask concatenated convolutional neutral net |
CN106874894A (en) * | 2017-03-28 | 2017-06-20 | 电子科技大学 | A kind of human body target detection method based on the full convolutional neural networks in region |
CN107145845A (en) * | 2017-04-26 | 2017-09-08 | 中山大学 | The pedestrian detection method merged based on deep learning and multi-characteristic points |
2018-04-27: Application CN201810391398.4A filed in China; patent CN108805016B (en) granted, status active.
Also Published As
Publication number | Publication date |
---|---|
CN108805016B (en) | 2022-02-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108805016A (en) | Head and shoulder region detection method and device | |
CN104166841B (en) | Rapid detection and recognition method for specified pedestrians or vehicles in a video surveillance network | |
US9754192B2 (en) | Object detection utilizing geometric information fused with image data | |
CN106874894A (en) | Human body target detection method based on region-based fully convolutional networks | |
CN103390164B (en) | Object detection method based on depth images and implementation device thereof | |
CN109102547A (en) | Robot grasping pose estimation method based on object recognition deep learning model | |
CN108830188A (en) | Vehicle detection method based on deep learning | |
CN110738101A (en) | Behavior recognition method and device and computer readable storage medium | |
Šegvić et al. | A computer vision assisted geoinformation inventory for traffic infrastructure | |
CN106295601A (en) | Improved seat belt detection method | |
CN109214366A (en) | Partial target re-identification method, apparatus and system | |
CN104794737B (en) | Depth-information-assisted particle filter tracking method | |
CN106934795A (en) | Automatic detection and prediction method for cracks in glued concrete beams | |
CN107153819A (en) | Automatic queue length detection method and queue length control method | |
CN113822247B (en) | Method and system for identifying illegal building based on aerial image | |
WO2020134102A1 (en) | Article recognition method and device, vending system, and storage medium | |
EP3702957A1 (en) | Target detection method and apparatus, and computer device | |
CN109325456A (en) | Target identification method, device, target identification equipment and storage medium | |
CN109800682A (en) | Driver attribute recognition method and related product | |
CN109506628A (en) | Deep-learning-based object distance measurement method in a truck environment | |
CN107368790B (en) | Pedestrian detection method, system, computer-readable storage medium and electronic device | |
CN107180228A (en) | Gradient enhancement conversion method and system for lane detection | |
CN109902576B (en) | Training method and application of head and shoulder image classifier | |
CN109670517A (en) | Object detection method, device, electronic equipment and target detection model | |
CN114926747A (en) | Remote sensing image directional target detection method based on multi-feature aggregation and interaction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||