CN110287782A - Pedestrian segmentation model training method and device - Google Patents
Pedestrian segmentation model training method and device
- Publication number
- CN110287782A (application CN201910414408.6A / CN201910414408A)
- Authority
- CN
- China
- Prior art keywords
- pedestrian
- convolutional neural
- neural networks
- training
- segmentation model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/53—Recognition of crowd images, e.g. recognition of crowd congestion
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Artificial Intelligence (AREA)
- Multimedia (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Molecular Biology (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- Computing Systems (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Biomedical Technology (AREA)
- Image Analysis (AREA)
Abstract
The present disclosure provides a pedestrian segmentation model training method and device, an electronic device, and a computer-readable storage medium. The method includes: obtaining a training sample set consisting of multiple sample images in which segmentation regions have been annotated, where the regions annotated with a first-class label comprise N classes in total and the regions annotated with a second-class label comprise M classes in total; inputting the training sample set into a first convolutional neural network that contains at least one sub convolutional neural network, each sub convolutional neural network corresponding to one pedestrian segmentation branch model; and training the pedestrian segmentation branch models in parallel to obtain a pedestrian segmentation model. By annotating sample images with different classes of segmentation regions and training on these annotated images, the resulting pedestrian segmentation model can distinguish different classes of segmentation regions, can be used to separate regions that are otherwise difficult to split apart, and improves pedestrian segmentation accuracy.
Description
Technical field
This disclosure relates to the field of pedestrian detection technology, and in particular to a pedestrian segmentation model training method and device, an electronic device, and a computer-readable storage medium.
Background
In many video-structuring applications, pedestrian analysis is essential; person identification in particular plays a central role in fields such as security and video retrieval.
Pedestrian segmentation refers to a class of techniques that separate regions of the human body such as tops, bottoms, shoes, hats, hair, bags, skin, and background. In many practical scenes the image quality of pedestrians is poor, the resolution is low, and the images are strongly affected by lighting, pose, and similar factors, so pedestrian segmentation remains a difficult problem in video structuring. This is especially true for hard samples, i.e. regions that are adjacent and hard to separate, such as a hat and hair: the two are adjacent and difficult to split apart.
Summary of the invention
To overcome the problems in the related art, the present disclosure provides a pedestrian segmentation model training method and device, an electronic device, and a computer-readable storage medium.
According to a first aspect of the embodiments of the present disclosure, a pedestrian segmentation model training method is provided, comprising:
Obtaining a training sample set; wherein the training sample set consists of multiple sample images in which segmentation regions have been annotated, the regions annotated with a first-class label comprising N classes in total and the regions annotated with a second-class label comprising M classes in total, where M and N are positive integers;
Inputting the training sample set into a first convolutional neural network; wherein the first convolutional neural network includes at least one sub convolutional neural network, each sub convolutional neural network corresponding to one pedestrian segmentation branch model; and
Training each pedestrian segmentation branch model in parallel on the training sample set until a preset convergence condition is met, to obtain a pedestrian segmentation model comprising at least one pedestrian segmentation branch model; wherein the pedestrian segmentation model is used to segment the regions in a pedestrian image.
Further, the convolutional neural network includes a first sub convolutional neural network, wherein the first sub convolutional neural network corresponds to a first pedestrian segmentation branch model.
Correspondingly, the step of training each pedestrian segmentation branch model in parallel on the training sample set until the preset convergence condition is met, to obtain a pedestrian segmentation model comprising at least one pedestrian segmentation branch model, comprises:
The first pedestrian segmentation branch model treats the M classes of segmentation regions as M classes of training samples and merges the N classes of segmentation regions into a single class of training samples, and the first sub convolutional neural network is trained on the resulting M+1 classes of training samples until the preset convergence condition is met.
Further, the convolutional neural network also includes a second sub convolutional neural network parallel to the first sub convolutional neural network, wherein the second sub convolutional neural network corresponds to a second pedestrian segmentation branch model.
Correspondingly, the step of obtaining the pedestrian segmentation model further includes:
The second pedestrian segmentation branch model merges the M classes of segmentation regions into a single class of training samples and treats the N classes of segmentation regions as N classes of training samples, and the second sub convolutional neural network is trained on the resulting N+1 classes of training samples until the preset convergence condition is met.
Further, the convolutional neural network also includes a third sub convolutional neural network parallel to the first and second sub convolutional neural networks, wherein the third sub convolutional neural network corresponds to a third pedestrian segmentation branch model.
Correspondingly, the step of obtaining the pedestrian segmentation model further includes:
The third pedestrian segmentation branch model treats the N+M classes of segmentation regions as N+M classes of training samples, and the third sub convolutional neural network is trained on these N+M classes of training samples until the preset convergence condition is met.
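As a minimal sketch of how the three branches differ, the following remaps annotated region labels before training. The class names and indices are illustrative assumptions (the patent does not fix them); N = 4 first-class regions and M = 2 second-class regions follow the example given later in the description.

```python
# Hypothetical label remapping for the three parallel branches.
FIRST_CLASS = ["top", "bottom", "shoes", "bag"]     # N "easy" classes
SECOND_CLASS = ["hat", "hair"]                      # M "hard" classes

def branch1_label(region):
    # M real classes + all first-class regions merged into one -> M+1 classes
    if region in SECOND_CLASS:
        return SECOND_CLASS.index(region)           # 0..M-1
    return len(SECOND_CLASS)                        # merged class, index M

def branch2_label(region):
    # N real classes + all second-class regions merged into one -> N+1 classes
    if region in FIRST_CLASS:
        return FIRST_CLASS.index(region)            # 0..N-1
    return len(FIRST_CLASS)                         # merged class, index N

def branch3_label(region):
    # full N+M classes, no merging
    return (SECOND_CLASS + FIRST_CLASS).index(region)

print(branch1_label("hat"), branch1_label("shoes"))   # 0 2
print(branch2_label("shoes"), branch2_label("hair"))  # 2 4
print(branch3_label("bag"))                           # 5
```

Only the label remapping differs between the branches; all three can therefore be trained in parallel on the same annotated sample set.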
Further, the method also includes:
Computing, during training of the first pedestrian segmentation branch model, the loss function of the first pedestrian segmentation branch model;
Computing, during training of the second pedestrian segmentation branch model, the loss function of the second pedestrian segmentation branch model;
Computing, during training of the third pedestrian segmentation branch model, the loss function of the third pedestrian segmentation branch model;
Weighting the loss function of the first pedestrian segmentation branch model, the loss function of the second pedestrian segmentation branch model, and the loss function of the third pedestrian segmentation branch model; and
Taking the weighted loss function as the loss function of the pedestrian segmentation model, and taking the convergence condition of the loss function of the pedestrian segmentation model as the preset convergence condition.
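The weighted combination of the three per-branch losses can be sketched as follows; the weight values and the convergence threshold below are illustrative assumptions, as the patent does not specify them:

```python
# Hypothetical weighted joint loss for the three segmentation branches.
def total_loss(loss1, loss2, loss3, weights=(1.0, 1.0, 1.0)):
    """Weighted sum of the per-branch losses; the result serves as the
    loss of the overall pedestrian segmentation model."""
    w1, w2, w3 = weights
    return w1 * loss1 + w2 * loss2 + w3 * loss3

def converged(loss_history, eps=1e-4):
    """One possible preset convergence condition: the joint loss changes
    by less than eps between consecutive training steps."""
    return len(loss_history) >= 2 and abs(loss_history[-1] - loss_history[-2]) < eps

print(total_loss(1.0, 2.0, 3.0))       # 6.0
print(converged([1.0, 0.5, 0.49995]))  # True
```

In practice the weights could be tuned to emphasize the hard-region branch, but that is a design choice outside what the patent states.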
Further, inputting the training sample set into the first convolutional neural network comprises:
Inputting the training sample set into a second convolutional neural network, which performs feature extraction on each sample image in the training sample set to obtain a feature map set containing the feature information; and
Inputting the feature map set into the first convolutional neural network.
According to a second aspect of the embodiments of the present disclosure, a pedestrian segmentation method is provided, comprising:
Obtaining a pedestrian image;
Inputting the pedestrian image into a pedestrian segmentation model trained with any of the pedestrian segmentation model training methods described above; and
Segmenting the pedestrian image with the pedestrian segmentation model to obtain the segmentation regions.
Further, segmenting the pedestrian image with the pedestrian segmentation model to obtain the segmentation regions comprises:
Segmenting the pedestrian image with the third pedestrian segmentation branch model of the pedestrian segmentation model to obtain the segmentation regions.
Further, segmenting the pedestrian image with the pedestrian segmentation model to obtain the segmentation regions comprises:
Inputting the pedestrian image into the second convolutional neural network, which performs image feature extraction to obtain a feature map containing the feature information; and
Inputting the feature map into the pedestrian segmentation model to obtain the segmentation regions.
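The two-stage inference path (shared feature extractor, then segmentation model) can be sketched as below; `extract_features` and `segment` are hypothetical stand-ins for the second convolutional neural network and the chosen segmentation branch model, not real implementations:

```python
# Hypothetical two-stage pedestrian segmentation pipeline.
def extract_features(image):
    # Stand-in for the second convolutional neural network: here it just
    # tags the input as a feature map instead of running a real backbone.
    return {"features_of": image}

def segment(feature_map):
    # Stand-in for the pedestrian segmentation model (e.g. its third
    # branch); a real model would return a per-pixel label map.
    return {"regions": ["hat", "hair", "top"], "from": feature_map}

def segment_pedestrian(image):
    feature_map = extract_features(image)   # step 1: feature extraction
    return segment(feature_map)             # step 2: region segmentation

result = segment_pedestrian("pedestrian.jpg")
print(result["regions"])   # ['hat', 'hair', 'top']
```

The point of the split is that the feature extractor is shared: the same feature map can feed any of the parallel branch models.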
According to a third aspect of the embodiments of the present disclosure, a pedestrian segmentation model training device is provided, comprising:
A sample acquisition module, configured to obtain a training sample set; wherein the training sample set consists of multiple sample images in which segmentation regions have been annotated, the regions annotated with a first-class label comprising N classes in total and the regions annotated with a second-class label comprising M classes in total, where M and N are positive integers;
A sample input module, configured to input the training sample set into a first convolutional neural network; wherein the first convolutional neural network includes at least one sub convolutional neural network, each sub convolutional neural network corresponding to one pedestrian segmentation branch model; and
A model training module, configured to train each pedestrian segmentation branch model in parallel on the training sample set until a preset convergence condition is met, to obtain a pedestrian segmentation model comprising at least one pedestrian segmentation branch model; wherein the pedestrian segmentation model is used to segment the regions in a pedestrian image.
Further, the convolutional neural network includes a first sub convolutional neural network, wherein the first sub convolutional neural network corresponds to a first pedestrian segmentation branch model.
Correspondingly, the model training module is specifically configured such that the first pedestrian segmentation branch model treats the M classes of segmentation regions as M classes of training samples and merges the N classes of segmentation regions into a single class of training samples, and the first sub convolutional neural network is trained on the M+1 classes of training samples until the preset convergence condition is met.
Further, the convolutional neural network also includes a second sub convolutional neural network parallel to the first sub convolutional neural network, wherein the second sub convolutional neural network corresponds to a second pedestrian segmentation branch model.
Correspondingly, the model training module is specifically configured such that the second pedestrian segmentation branch model merges the M classes of segmentation regions into a single class of training samples and treats the N classes of segmentation regions as N classes of training samples, and the second sub convolutional neural network is trained on the N+1 classes of training samples until the preset convergence condition is met.
Further, the convolutional neural network also includes a third sub convolutional neural network parallel to the first and second sub convolutional neural networks, wherein the third sub convolutional neural network corresponds to a third pedestrian segmentation branch model.
Correspondingly, the model training module is specifically configured such that the third pedestrian segmentation branch model treats the N+M classes of segmentation regions as N+M classes of training samples, and the third sub convolutional neural network is trained on these N+M classes of training samples until the preset convergence condition is met.
Further, the device also includes:
A loss function computing module, configured to compute, during training of the first pedestrian segmentation branch model, the loss function of the first pedestrian segmentation branch model; compute, during training of the second pedestrian segmentation branch model, the loss function of the second pedestrian segmentation branch model; compute, during training of the third pedestrian segmentation branch model, the loss function of the third pedestrian segmentation branch model; weight the three loss functions; and take the weighted loss function as the loss function of the pedestrian segmentation model and the convergence condition of that loss function as the preset convergence condition.
Further, the sample input module is specifically configured to: input the training sample set into a second convolutional neural network, which performs feature extraction on each sample image in the training sample set to obtain a feature map set containing the feature information; and input the feature map set into the first convolutional neural network.
According to a fourth aspect of the embodiments of the present disclosure, a pedestrian segmentation device is provided, comprising:
An image acquisition module, configured to obtain a pedestrian image;
An image input module, configured to input the pedestrian image into a pedestrian segmentation model trained with any of the pedestrian segmentation model training methods described above; and
An image segmentation module, configured to segment the pedestrian image with the pedestrian segmentation model to obtain the segmentation regions.
Further, the image segmentation module is specifically configured to: segment the pedestrian image with the third pedestrian segmentation branch model of the pedestrian segmentation model to obtain the segmentation regions.
Further, the image segmentation module is specifically configured to: input the pedestrian image into the second convolutional neural network, which performs image feature extraction to obtain a feature map containing the feature information; and input the feature map into the pedestrian segmentation model to obtain the segmentation regions.
According to a fifth aspect of the embodiments of the present disclosure, an electronic device is provided, comprising:
A processor; and
A memory for storing processor-executable instructions; wherein the processor is configured to execute any of the pedestrian segmentation model training methods described above, or to execute any of the pedestrian segmentation methods described above.
According to a sixth aspect of the embodiments of the present disclosure, a non-transitory computer-readable storage medium is provided; when the instructions in the storage medium are executed by the processor of an electronic device, the electronic device is enabled to execute any of the pedestrian segmentation model training methods described above, or any of the pedestrian segmentation methods described above.
The technical solutions provided by the embodiments of the present disclosure can bring the following benefits: by annotating sample images with different classes of segmentation regions and training on these annotated images, the pedestrian segmentation model can distinguish different classes of segmentation regions, can be used to separate regions that are difficult to split apart, and improves pedestrian segmentation accuracy.
It should be understood that the general description above and the detailed description below are merely exemplary and explanatory, and do not limit the present disclosure.
Brief description of the drawings
The drawings herein are incorporated into and form a part of this specification, show embodiments consistent with the present disclosure, and together with the specification serve to explain the principles of the present disclosure.
Fig. 1a is a flow chart of a pedestrian segmentation model training method provided by embodiment one of the present disclosure.
Fig. 1b is a schematic diagram of the convolution process of a convolutional layer in the pedestrian segmentation model training method provided by embodiment one of the present disclosure.
Fig. 1c is a schematic diagram of the convolution result of a convolutional layer in the pedestrian segmentation model training method provided by embodiment one of the present disclosure.
Fig. 2 is a flow chart of a pedestrian segmentation model training method provided by embodiment two of the present disclosure.
Fig. 3 is a structural block diagram of a pedestrian segmentation model training device provided by embodiment three of the present disclosure.
Fig. 4 is a structural block diagram of a pedestrian segmentation model training device provided by embodiment four of the present disclosure.
Fig. 5 is a structural block diagram of an electronic device provided by embodiment five of the present disclosure.
Detailed description of the embodiments
Example embodiments are described in detail here, and examples thereof are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following example embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of devices and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
Embodiment one
Fig. 1a is a flow chart of a pedestrian segmentation model training method provided by embodiment one of the present disclosure. The executing subject of the method may be the pedestrian segmentation model training device provided by the embodiments of the present disclosure, which may be integrated in a mobile terminal (for example, a smart phone or tablet computer), a notebook, or a fixed terminal (a desktop computer), and which may be implemented in hardware or software. As shown in Fig. 1a, the method comprises the following steps:
Step S11: obtain a training sample set; wherein the training sample set consists of multiple sample images in which segmentation regions have been annotated, the regions annotated with a first-class label comprising N classes in total and the regions annotated with a second-class label comprising M classes in total, where M and N are positive integers.
The labels of the segmentation regions include, but are not limited to, at least one of: top, bottom, shoes, hat, hair, bag, skin, and background.
A sample image may be a pedestrian image. The segmentation regions with the first-class label may be image regions that are relatively easy to separate, such as the pedestrian and the background; the segmentation regions with the second-class label may be image regions that are adjacent and difficult to separate, such as hair and a hat.
The values of N and M are determined by the segmentation regions contained in all sample images: each sample image is divided into different regions, the regions annotated with the first label on any one image belong to at most N classes, the regions annotated with the second label on any one image belong to at most M classes, and across the whole set there are N and M classes respectively.
For example, suppose sample image 1 contains a hat, hair, and a top, sample image 2 contains a bottom, shoes, and a bag, and sample image 3 contains a top and a bag. Counting over all the sample images gives 6 classes of segmentation regions in total: hat, hair, top, bottom, shoes, and bag. According to the difficulty of segmentation, the top, bottom, shoes, and bag can be grouped as one kind of region, i.e. the segmentation regions with the first-class label, while the hat and hair are grouped as the other kind, i.e. the segmentation regions with the second-class label. In this case the value of N is 4 and the value of M is 2.
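The counting in this example can be sketched as follows; the region names mirror the example above, and the grouping into easy and hard labels is given by the annotator, not learned:

```python
# Count N (first-class label) and M (second-class label) region classes
# across the annotated sample images from the example above.
sample_images = [
    {"hat", "hair", "top"},          # sample image 1
    {"bottom", "shoes", "bag"},      # sample image 2
    {"top", "bag"},                  # sample image 3
]
HARD = {"hat", "hair"}               # adjacent, hard-to-separate regions

all_regions = set().union(*sample_images)
second_class = all_regions & HARD    # M classes (second-class label)
first_class = all_regions - HARD     # N classes (first-class label)

print(len(first_class), len(second_class))   # 4 2
```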
Step S12: input the training sample set into the first convolutional neural network; wherein the first convolutional neural network includes at least one sub convolutional neural network, each sub convolutional neural network corresponding to one pedestrian segmentation branch model.
A convolutional neural network (CNN) is a feed-forward neural network that performs convolution computations and has a deep structure; it mainly comprises an input layer, convolutional layers, pooling layers, fully connected layers, and an output layer, and a single convolutional neural network may contain multiple convolutional layers. Here, the convolutional neural network may be a plain straight-through convolutional neural network or a deep-learning convolutional neural network; this is not specifically limited.
A convolutional layer contains a convolution kernel, which may be a matrix used to convolve the input image. The concrete computation multiplies, element by element, each local patch of the input image with the corresponding position of the kernel matrix and then sums the products. Here, each training channel corresponds to a different convolution kernel.
For example, as shown in Fig. 1b, suppose the input is a two-dimensional 3*4 matrix, the convolution kernel is a 2*2 matrix, and the convolution window moves one pixel at a time. First, the upper-left 2*2 part of the input is multiplied element-wise with the kernel and summed, giving the element S00 of the output matrix S, with value aw+bx+ey+fz. The window is then translated one pixel to the right, so that the 2*2 sub-matrix formed by the four elements (b, c, f, g) is convolved with the kernel, giving the element S01 of the output matrix S. The same method yields the elements S02, S10, S11, and S12 of the output matrix S. As shown in Fig. 1c, the convolution output is finally a 2*3 matrix S.
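This sliding-window computation can be reproduced directly. The sketch below uses NumPy, and the numeric values are arbitrary stand-ins for the symbolic entries a..l and w..z of Fig. 1b:

```python
import numpy as np

def conv2d_valid(inp, kernel):
    """'Valid' 2-D convolution with stride 1: multiply each local patch of
    the input element-wise with the kernel and sum, as in Fig. 1b."""
    kh, kw = kernel.shape
    oh, ow = inp.shape[0] - kh + 1, inp.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(inp[i:i + kh, j:j + kw] * kernel)
    return out

# 3*4 input (rows a..d, e..h, i..l) and 2*2 kernel (w, x, y, z)
inp = np.arange(12, dtype=float).reshape(3, 4)       # a..l -> 0..11
kernel = np.array([[1.0, 2.0], [3.0, 4.0]])          # w, x, y, z

out = conv2d_valid(inp, kernel)
print(out.shape)   # (2, 3) -- a 2*3 output, matching Fig. 1c
print(out[0, 0])   # 0*1 + 1*2 + 4*3 + 5*4 = 34.0, i.e. aw+bx+ey+fz
```

A 3*4 input convolved with a 2*2 kernel at stride 1 indeed yields a (3-2+1)*(4-2+1) = 2*3 output, as the description states.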
The parameters include the parameters corresponding to the convolution kernels of the convolutional layers, such as the size of the convolution matrix (for example a 3*3 matrix), and different convolutional layers may be given different convolution kernels. They may also include the parameters of the pooling layers, such as the size of the pooling matrix (for example a 3*3 pooling matrix), and the parameters of the output layer, such as the linear coefficient matrix and the bias vector.
Here, the first convolutional neural network is the overall network, which contains at least one sub convolutional neural network; each sub convolutional neural network is a branch of the overall network.
Step S13: train each pedestrian segmentation branch model in parallel on the training sample set until a preset convergence condition is met, to obtain a pedestrian segmentation model comprising at least one pedestrian segmentation branch model; wherein the pedestrian segmentation model is used to segment the regions in a pedestrian image.
The pedestrian segmentation branch models are processed in parallel, each terminating its training process on its own convergence condition or on a shared convergence condition, yielding at least one pedestrian segmentation branch model; together these branch models form the pedestrian segmentation model. When performing pedestrian segmentation, one of the pedestrian segmentation branch models can be chosen to carry out the segmentation.
In this embodiment, by annotating sample images with different classes of segmentation regions and training on the annotated images to obtain a pedestrian segmentation model, the model can distinguish different classes of segmentation regions, can be used to separate regions that are difficult to split apart, and improves pedestrian segmentation accuracy.
In an alternative embodiment, the convolutional neural network includes a first sub convolutional neural network, wherein the first sub convolutional neural network corresponds to a first pedestrian segmentation branch model.
Correspondingly, step S13 includes:
The first pedestrian segmentation branch model treats the M classes of segmentation regions as M classes of training samples and merges the N classes of segmentation regions into a single class of training samples, and the first sub convolutional neural network is trained on the M+1 classes of training samples until the preset convergence condition is met.
In this embodiment, treating the M classes of segmentation regions as M classes of training samples and merging the N classes of segmentation regions into a single class lets the first pedestrian segmentation branch model ignore the distinctions among the N classes of regions and focus on learning how best to separate the M classes of regions from one another.
For example, suppose the M classes of segmentation regions are two adjacent, hard-to-separate regions, such as hair and a hat. This embodiment treats hair and hat as two classes of training samples and merges the remaining N classes of regions into a single class of training samples, so that the first pedestrian segmentation branch model focuses on learning how best to separate hair from hat. The first pedestrian segmentation branch model can thus better handle the more complex, hard-to-segment regions, i.e. the difficult segmentation regions.
In an alternative embodiment, the convolutional neural network also includes a second sub convolutional neural network parallel to the first sub convolutional neural network, wherein the second sub convolutional neural network corresponds to a second pedestrian segmentation branch model.
Correspondingly, step S13 further includes:
The second pedestrian segmentation branch model merges the M classes of segmentation regions into a single class of training samples and treats the N classes of segmentation regions as N classes of training samples, and the second sub convolutional neural network is trained on the N+1 classes of training samples until the preset convergence condition is met.
In this embodiment, training the M classes and the N classes of segmentation regions separately means the second pedestrian segmentation branch model no longer needs to consider the relationships within the M classes: it treats the M classes as one large class and focuses on learning the segmentation among the N classes. Combined with the first pedestrian segmentation branch model, both the segmentation among the M classes and the segmentation among the N classes are accounted for; each branch model has low complexity and a small computational load, and branch models with better segmentation performance can be obtained.
For example, suppose M is 2 and the corresponding 2 classes of segmentation regions are two adjacent, hard-to-separate difficult regions such as hair and a hat. This embodiment merges hair and hat into a single class of training samples and treats the remaining N classes of regions as N separate classes of training samples, so the model no longer needs to consider the relationship between hair and hat: it first treats hair-and-hat as one large class and focuses on learning the segmentation among the N classes. Combined with the first pedestrian segmentation branch model, both the segmentation among the M classes and the segmentation among the N classes are accounted for, each branch model has low complexity and a small computational load, and the more complex, hard-to-segment regions, i.e. the difficult segmentation regions, can be handled better.
In an alternative embodiment, the convolutional neural network also includes a third sub convolutional neural network parallel to the first and second sub convolutional neural networks, wherein the third sub convolutional neural network corresponds to a third pedestrian segmentation branch model.
Correspondingly, step S13 further includes:
The third pedestrian segmentation branch model treats the N+M classes of segmentation regions as N+M classes of training samples, and the third sub convolutional neural network is trained on these N+M classes of training samples until the preset convergence condition is met.
In this embodiment, through the first pedestrian segmentation branch model, the M classes of segmentation regions are taken as M classes of training samples while the N classes of segmentation regions are merged into one class of training samples, and the branches are trained separately. This lets the first pedestrian segmentation branch model focus on learning how to segment among the M classes, so that it can better handle the more complex, hard-to-separate segmentation regions, i.e., the difficult segmentation regions. In combination with the second pedestrian segmentation branch model, the M classes of segmentation regions are merged into one class of training samples while the N classes of segmentation regions are taken as N classes of training samples, and the branches are trained separately; the second pedestrian segmentation branch model thus need not consider the relationships among the M classes, first trains on the M-class super-class, and focuses on learning the segmentation among the N classes. In this way both the segmentation among the M classes and the segmentation among the N classes are considered, each pedestrian segmentation branch model is low in complexity and computation, and branch models with better segmentation performance can be obtained. Further, in combination with a third pedestrian segmentation branch model trained on all the classes, a pedestrian segmentation model comprising the above three branches is obtained by training the super-classes together, which improves the precision with which the pedestrian segmentation model segments the difficult classes.
In an alternative embodiment, the method further includes:
calculating the loss function of the first pedestrian segmentation branch model during training of the first pedestrian segmentation branch model;
calculating the loss function of the second pedestrian segmentation branch model during training of the second pedestrian segmentation branch model;
calculating the loss function of the third pedestrian segmentation branch model during training of the third pedestrian segmentation branch model;
weighting the loss function of the first pedestrian segmentation branch model, the loss function of the second pedestrian segmentation branch model, and the loss function of the third pedestrian segmentation branch model; and
taking the weighted loss function as the loss function of the pedestrian segmentation model, and taking the convergence condition of the loss function of the pedestrian segmentation model as the preset convergence condition.
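A minimal sketch of this weighted combination, together with one possible convergence test, is given below. The equal weights and the loss-change threshold are assumptions for illustration; the disclosure does not fix the weighting coefficients or the exact form of the preset convergence condition:

```python
def model_loss(l1, l2, l3, weights=(1.0, 1.0, 1.0)):
    """Weighted sum of the three branch losses, used as the loss function
    of the whole pedestrian segmentation model (equal weights assumed)."""
    w1, w2, w3 = weights
    return w1 * l1 + w2 * l2 + w3 * l3

def converged(loss_history, eps=1e-4):
    """One possible preset convergence condition (assumed here): the
    combined loss changed by less than eps between the last two steps."""
    return len(loss_history) >= 2 and abs(loss_history[-1] - loss_history[-2]) < eps
```

In a training loop, `model_loss` would be appended to `loss_history` each step and `converged` checked to decide when to stop all three branches together.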
In an alternative embodiment, step S12 includes:
Step S121: inputting the training sample set into a second convolutional neural network, and performing feature extraction on each sample image in the training sample set through the second convolutional neural network to obtain a feature map set containing feature information;
wherein the second convolutional neural network is a convolutional neural network shared by the first pedestrian segmentation branch model, the second pedestrian segmentation branch model, and the third pedestrian segmentation branch model described above.
A classical network structure (such as GoogLeNet, VGG, or ResNet) may be used as the basic convolutional neural network for the second convolutional neural network, and the parameters of the basic convolutional neural network may be initialized from a pre-trained basic convolutional neural network model.
Step S122: inputting the feature map set into the first convolutional neural networks.
Specifically, the training sample set may be input into the second convolutional neural network to extract the corresponding feature information, so that the features of the segmentation regions in the sample images become more salient; the resulting feature map set is then input into the first convolutional neural networks for training, making the obtained pedestrian segmentation model more accurate and further improving segmentation precision.
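The data flow of one shared second (backbone) network feeding three parallel branch sub-networks can be sketched as follows. The toy "backbone" and per-pixel "heads" here are stand-ins for real networks (e.g. a pre-trained ResNet and convolutional branches); only the shapes and the once-per-image feature extraction are meant to illustrate the structure:

```python
import numpy as np

N, M = 5, 2  # N ordinary classes, M difficult classes, as in the embodiments

def backbone(image):
    """Stand-in for the shared second CNN: a fake 8-channel feature map
    with the same spatial size as the input image."""
    h, w, _ = image.shape
    rng = np.random.default_rng(0)
    return rng.standard_normal((h, w, 8))

def branch_head(features, num_classes):
    """Stand-in for one sub-CNN branch: a per-pixel linear map (like a
    1x1 convolution) producing one score per class."""
    rng = np.random.default_rng(num_classes)  # fixed toy weights per head
    weights = rng.standard_normal((features.shape[-1], num_classes))
    return features @ weights  # (h, w, num_classes) score map

image = np.zeros((4, 4, 3))
feats = backbone(image)              # shared features, extracted once
scores1 = branch_head(feats, M + 1)  # branch 1: M classes + merged N
scores2 = branch_head(feats, N + 1)  # branch 2: merged M + N classes
scores3 = branch_head(feats, N + M)  # branch 3: all N+M classes
```

The design point illustrated is that the expensive feature extraction runs once per image, while each branch only adds a lightweight head with its own class count.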
Embodiment two
Fig. 2 is a flowchart of a pedestrian segmentation method provided by embodiment two of the present disclosure. The execution subject of the method provided by this embodiment may be the pedestrian segmentation device provided by the embodiments of the present disclosure, which may be integrated in a mobile terminal (for example, a smartphone or a tablet computer), a notebook, or a fixed terminal (a desktop computer), and may be implemented in hardware or software. As shown in Fig. 2, the method specifically includes:
Step S21: obtaining a pedestrian image.
Specifically, the pedestrian image may be acquired in real time by a camera, or a pre-saved pedestrian image may be obtained from a local database.
Step S22: inputting the pedestrian image into a pedestrian segmentation model.
The pedestrian segmentation model is obtained by training with the pedestrian segmentation model training method described in embodiment one above.
Step S23: segmenting the pedestrian image through the pedestrian segmentation model to obtain segmentation regions.
Since the pedestrian segmentation model is trained on sample images annotated with different classes of segmentation regions, it can distinguish different classes of segmentation regions, can be used to segment regions that are difficult to separate, and improves pedestrian segmentation precision.
In an alternative embodiment, step S23 specifically includes:
segmenting the pedestrian image through the third pedestrian segmentation branch model of the pedestrian segmentation model to obtain the segmentation regions.
The pedestrian segmentation model comprises three pedestrian segmentation branch models, namely the first pedestrian segmentation branch model, the second pedestrian segmentation branch model, and the third pedestrian segmentation branch model; the definitions of these three branch models are detailed in embodiment one above and are not repeated here.
This embodiment only needs to take out the third pedestrian segmentation branch model and use it for segmentation, so the method improves precision without increasing the complexity of the pedestrian segmentation model.
In an alternative embodiment, step S23 specifically includes:
inputting the pedestrian image into the second convolutional neural network, and performing feature extraction on the image through the second convolutional neural network to obtain a feature map containing feature information; and inputting the feature map into the pedestrian segmentation model to obtain the segmentation regions.
Specifically, extracting the corresponding feature information through the second convolutional neural network makes the features of the segmentation regions in the pedestrian image more salient, further improving segmentation precision. Through this embodiment, the pedestrian segmentation model can obtain both the segmentation regions bearing the first-class annotation and the adjacent, hard-to-separate segmentation regions bearing the second-class annotation.
Embodiment three
Fig. 3 is a block diagram of a pedestrian segmentation model training device provided by embodiment three of the present disclosure. The device may be integrated in a mobile terminal (for example, a smartphone or a tablet computer), a notebook, or a fixed terminal (a desktop computer), and may be implemented in hardware or software. Referring to Fig. 3, the device includes a sample acquisition module 31, a sample input module 32, and a model training module 33, wherein:
the sample acquisition module 31 is configured to obtain a training sample set; the training sample set is composed of multiple sample images annotated with segmentation regions, wherein the segmentation regions bearing the first-class annotation in the multiple sample images total N classes and the segmentation regions bearing the second-class annotation total M classes, M and N being positive integers;
the sample input module 32 is configured to input the training sample set into first convolutional neural networks, wherein the first convolutional neural networks comprise at least one sub-convolutional neural network, each sub-convolutional neural network corresponding to one pedestrian segmentation branch model; and
the model training module 33 is configured to train each pedestrian segmentation branch model in parallel according to the training sample set until a preset convergence condition is met, to obtain a pedestrian segmentation model comprising at least one pedestrian segmentation branch model, wherein the pedestrian segmentation model is used to segment segmentation regions in a pedestrian image.
Further, the convolutional neural networks comprise a first sub-convolutional neural network, wherein the first sub-convolutional neural network corresponds to a first pedestrian segmentation branch model;
correspondingly, the model training module 33 is specifically configured such that the first pedestrian segmentation branch model takes the M classes of segmentation regions as M separate classes of training samples, merges the N classes of segmentation regions into one class of training samples, and trains on the M+1 classes of training samples using the first sub-convolutional neural network until the preset convergence condition is met.
Further, the convolutional neural networks further comprise a second sub-convolutional neural network parallel to the first sub-convolutional neural network, wherein the second sub-convolutional neural network corresponds to a second pedestrian segmentation branch model;
correspondingly, the model training module 33 is specifically configured such that the second pedestrian segmentation branch model merges the M classes of segmentation regions into one class of training samples, takes the N classes of segmentation regions as N classes of training samples, and trains on the N+1 classes of training samples using the second sub-convolutional neural network until the preset convergence condition is met.
Further, the convolutional neural networks further comprise a third sub-convolutional neural network parallel to the first sub-convolutional neural network and the second sub-convolutional neural network, wherein the third sub-convolutional neural network corresponds to a third pedestrian segmentation branch model;
correspondingly, the model training module 33 is specifically configured such that the third pedestrian segmentation branch model takes the N+M classes of segmentation regions as N+M separate classes of training samples, and trains on the N+M classes of training samples using the third sub-convolutional neural network until the preset convergence condition is met.
Further, the device also includes a loss function computing module 34, wherein:
the loss function computing module 34 is configured to calculate the loss function of the first pedestrian segmentation branch model during training of the first pedestrian segmentation branch model; calculate the loss function of the second pedestrian segmentation branch model during training of the second pedestrian segmentation branch model; calculate the loss function of the third pedestrian segmentation branch model during training of the third pedestrian segmentation branch model; weight the loss functions of the first, second, and third pedestrian segmentation branch models; take the weighted loss function as the loss function of the pedestrian segmentation model; and take the convergence condition of the loss function of the pedestrian segmentation model as the preset convergence condition.
Further, the sample input module 32 is specifically configured to: input the training sample set into a second convolutional neural network, perform feature extraction on each sample image in the training sample set through the second convolutional neural network to obtain a feature map set containing feature information, and input the feature map set into the first convolutional neural networks.
Regarding the devices in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the related methods, and is not elaborated here.
Embodiment four
Fig. 4 is a block diagram of a pedestrian segmentation device provided by embodiment four of the present disclosure. The device may be integrated in a mobile terminal (for example, a smartphone or a tablet computer), a notebook, or a fixed terminal (a desktop computer), and may be implemented in hardware or software. Referring to Fig. 4, the device includes an image acquisition module 41, an image input module 42, and an image segmentation module 43, wherein:
the image acquisition module 41 is configured to obtain a pedestrian image;
the image input module 42 is configured to input the pedestrian image into a pedestrian segmentation model obtained by training with the pedestrian segmentation model training method described in any of the above embodiments; and
the image segmentation module 43 is configured to segment the pedestrian image through the pedestrian segmentation model to obtain segmentation regions.
Further, the image segmentation module 43 is specifically configured to: segment the pedestrian image through the third pedestrian segmentation branch model of the pedestrian segmentation model to obtain the segmentation regions.
Further, the image segmentation module 43 is specifically configured to: input the pedestrian image into a second convolutional neural network, perform feature extraction on the image through the second convolutional neural network to obtain a feature map containing feature information, and input the feature map into the pedestrian segmentation model to obtain the segmentation regions.
Regarding the devices in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the related methods, and is not elaborated here.
Embodiment five
An embodiment of the present disclosure provides an electronic device, comprising:
a processor; and
a memory for storing instructions executable by the processor; wherein the processor is configured to:
obtain a training sample set, the training sample set being composed of multiple sample images annotated with segmentation regions, wherein the segmentation regions bearing the first-class annotation in the multiple sample images total N classes and the segmentation regions bearing the second-class annotation total M classes, M and N being positive integers;
input the training sample set into first convolutional neural networks, wherein the first convolutional neural networks comprise at least one sub-convolutional neural network, each sub-convolutional neural network corresponding to one pedestrian segmentation branch model; and
train each pedestrian segmentation branch model in parallel according to the training sample set until a preset convergence condition is met, to obtain a pedestrian segmentation model comprising at least one pedestrian segmentation branch model, wherein the pedestrian segmentation model is used to segment segmentation regions in a pedestrian image.
Further, the convolutional neural networks comprise a first sub-convolutional neural network, wherein the first sub-convolutional neural network corresponds to a first pedestrian segmentation branch model;
correspondingly, training each pedestrian segmentation branch model in parallel according to the training sample set until the preset convergence condition is met, to obtain a pedestrian segmentation model comprising at least one pedestrian segmentation branch model, includes:
the first pedestrian segmentation branch model takes the M classes of segmentation regions as M classes of training samples, merges the N classes of segmentation regions into one class of training samples, and trains on the M+1 classes of training samples using the first sub-convolutional neural network until the preset convergence condition is met.
Further, the convolutional neural networks further comprise a second sub-convolutional neural network parallel to the first sub-convolutional neural network, wherein the second sub-convolutional neural network corresponds to a second pedestrian segmentation branch model;
correspondingly, the step of obtaining the pedestrian segmentation model further includes:
the second pedestrian segmentation branch model merges the M classes of segmentation regions into one class of training samples, takes the N classes of segmentation regions as N classes of training samples, and trains on the N+1 classes of training samples using the second sub-convolutional neural network until the preset convergence condition is met.
Further, the convolutional neural networks further comprise a third sub-convolutional neural network parallel to the first sub-convolutional neural network and the second sub-convolutional neural network, wherein the third sub-convolutional neural network corresponds to a third pedestrian segmentation branch model;
correspondingly, the step of obtaining the pedestrian segmentation model further includes:
the third pedestrian segmentation branch model takes the N+M classes of segmentation regions as N+M classes of training samples, and trains on the N+M classes of training samples using the third sub-convolutional neural network until the preset convergence condition is met.
Further, the method also includes:
calculating the loss function of the first pedestrian segmentation branch model during training of the first pedestrian segmentation branch model;
calculating the loss function of the second pedestrian segmentation branch model during training of the second pedestrian segmentation branch model;
calculating the loss function of the third pedestrian segmentation branch model during training of the third pedestrian segmentation branch model;
weighting the loss functions of the first, second, and third pedestrian segmentation branch models; and
taking the weighted loss function as the loss function of the pedestrian segmentation model, and taking the convergence condition of the loss function of the pedestrian segmentation model as the preset convergence condition.
Further, inputting the training sample set into the first convolutional neural networks includes:
inputting the training sample set into a second convolutional neural network, and performing feature extraction on each sample image in the training sample set through the second convolutional neural network to obtain a feature map set containing feature information; and
inputting the feature map set into the first convolutional neural networks.
Fig. 5 is a block diagram of an electronic device provided by an embodiment of the present disclosure. For example, the electronic device may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, or the like. Referring to Fig. 5, the electronic device may include one or more of the following components: a processing component 502, a memory 504, a power component 506, a multimedia component 508, an audio component 510, an input/output (I/O) interface 512, a sensor component 514, and a communication component 516.
The processing component 502 typically controls the overall operation of the electronic device, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 502 may include one or more processors 520 to execute instructions so as to perform all or part of the steps of the methods described above. In addition, the processing component 502 may include one or more modules to facilitate interaction between the processing component 502 and other components. For example, the processing component 502 may include a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.
The memory 504 is configured to store various types of data to support operation on the electronic device. Examples of such data include instructions for any application or method operated on the electronic device, contact data, phonebook data, messages, pictures, videos, and the like. The memory 504 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
The power component 506 provides power for the various components of the electronic device. The power component 506 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device.
The multimedia component 508 includes a screen providing an output interface between the electronic device and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 508 includes a front camera and/or a rear camera. When the electronic device is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 510 is configured to output and/or input audio signals. For example, the audio component 510 includes a microphone (MIC), which is configured to receive external audio signals when the electronic device is in an operation mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signals may be further stored in the memory 504 or sent via the communication component 516. In some embodiments, the audio component 510 further includes a speaker for outputting audio signals.
The I/O interface 512 provides an interface between the processing component 502 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 514 includes one or more sensors for providing state assessments of various aspects of the electronic device. For example, the sensor component 514 can detect the open/closed state of the electronic device and the relative positioning of components, such as the display and keypad of the electronic device; the sensor component 514 can also detect a change in position of the electronic device or of one of its components, the presence or absence of user contact with the electronic device, the orientation or acceleration/deceleration of the electronic device, and a change in temperature of the electronic device. The sensor component 514 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 514 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 514 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 516 is configured to facilitate wired or wireless communication between the electronic device and other devices. The electronic device can access a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In one exemplary embodiment, the communication component 516 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 516 further includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio-frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for executing the methods described above.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions, such as the memory 504 including instructions, is also provided; the instructions can be executed by the processor 520 of the electronic device to complete the methods described above. For example, the non-transitory computer-readable storage medium may be a ROM, a random-access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, an application program is also provided, for example including instructions in the memory 504, which can be executed by the processor 520 of the electronic device to complete the methods described above.
Embodiment six
An embodiment of the present disclosure provides an electronic device, comprising:
a processor; and
a memory for storing instructions executable by the processor; wherein the processor is configured to:
obtain a pedestrian image;
input the pedestrian image into a pedestrian segmentation model obtained by training with the pedestrian segmentation model training method described in any of the above embodiments; and
segment the pedestrian image through the pedestrian segmentation model to obtain segmentation regions.
Further, segmenting the pedestrian image through the pedestrian segmentation model to obtain the segmentation regions includes:
segmenting the pedestrian image through the third pedestrian segmentation branch model of the pedestrian segmentation model to obtain the segmentation regions.
Further, segmenting the pedestrian image through the pedestrian segmentation model to obtain the segmentation regions includes:
inputting the pedestrian image into a second convolutional neural network, performing feature extraction on the image through the second convolutional neural network to obtain a feature map containing feature information; and inputting the feature map into the pedestrian segmentation model to obtain the segmentation regions.
For the structural block diagram of the electronic device of this embodiment, refer to embodiment five above; it is not repeated here.
Those skilled in the art, after considering the specification and practicing the invention disclosed here, will readily think of other embodiments of the present disclosure. This application is intended to cover any variations, uses, or adaptations of the present disclosure that follow its general principles and include common knowledge or conventional techniques in the art not disclosed herein. The specification and examples are to be regarded as illustrative only, with the true scope and spirit of the present disclosure being indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.
Claims (13)
1. a kind of pedestrian's parted pattern training method characterized by comprising
Obtain training sample set;Wherein, the training sample set is made of the multiple sample images for being marked cut zone,
Wherein, the cut zone that first kind label is marked in the multiple sample image shares N class and is marked the second class label
Cut zone shares M class, wherein the M and N is positive integer;
The training sample set is inputted into the first convolutional neural networks;Wherein, first convolutional neural networks include at least
One sub- convolutional neural networks, the corresponding pedestrian of a sub- convolutional neural networks divide branch model;
Each pedestrian divides branch model according to training sample set merging rows training until meeting the preset condition of convergence, obtains
To the pedestrian's parted pattern for dividing branch model comprising at least one pedestrian;Wherein, pedestrian's parted pattern is for dividing row
Cut zone in people's image.
2. pedestrian's parted pattern training method according to claim 1, which is characterized in that the convolutional neural networks include
First sub- convolutional neural networks, wherein corresponding first pedestrian of the first sub- convolutional neural networks divides branch model;
Correspondingly, each pedestrian divides branch model according to training sample set merging rows training until meeting preset
The condition of convergence obtains the pedestrian's parted pattern for dividing branch model comprising at least one pedestrian, comprising:
First pedestrian divides branch model using the M class cut zone as M class training sample, and by the N class
Cut zone is classified as a kind of training sample, is trained directly using the described first sub- convolutional neural networks to M+1 class training sample
To meeting the preset condition of convergence.
3. pedestrian's parted pattern training method according to claim 2, which is characterized in that the convolutional neural networks also wrap
Containing the second sub- convolutional neural networks parallel with the described first sub- convolutional neural networks, wherein the second sub- convolutional Neural net
Corresponding second pedestrian of network divides branch model;
Correspondingly, described the step of obtaining pedestrian's parted pattern, further includes:
Second pedestrian divides branch model and the M class cut zone is classified as a kind of training sample, and by N class cut zone
Respectively as N class training sample, N+1 class training sample is trained using the described second sub- convolutional neural networks until meeting
The preset condition of convergence.
4. The pedestrian segmentation model training method according to claim 3, characterized in that the convolutional neural network further comprises a third sub-convolutional neural network in parallel with the first sub-convolutional neural network and the second sub-convolutional neural network, the third sub-convolutional neural network corresponding to a third pedestrian segmentation branch model;
correspondingly, the step of obtaining the pedestrian segmentation model further comprises:
for the third pedestrian segmentation branch model, taking the N+M classes of segmented regions as N+M classes of training samples, and training on the N+M classes of training samples using the third sub-convolutional neural network until the preset convergence condition is met.
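Taken together, claims 2-4 give each parallel branch a different granularity of the same label set. A minimal sketch of the resulting class counts (function and key names are assumptions; the background class is not counted):

```python
def branch_output_classes(m, n):
    """Number of training classes seen by each parallel branch (claims 2-4)."""
    return {
        "branch1": m + 1,  # M distinct classes, all N classes merged into one
        "branch2": n + 1,  # N distinct classes, all M classes merged into one
        "branch3": m + n,  # all M+N classes kept distinct
    }
```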
5. The pedestrian segmentation model training method according to claim 4, characterized in that the method further comprises:
calculating the loss function of the first pedestrian segmentation branch model during the training of the first pedestrian segmentation branch model;
calculating the loss function of the second pedestrian segmentation branch model during the training of the second pedestrian segmentation branch model;
calculating the loss function of the third pedestrian segmentation branch model during the training of the third pedestrian segmentation branch model;
weighting the loss functions of the first, second and third pedestrian segmentation branch models;
taking the weighted loss function as the loss function of the pedestrian segmentation model, and taking the convergence condition of the loss function of the pedestrian segmentation model as the preset convergence condition.
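The loss combination of claim 5 reduces to a weighted sum of the three branch losses. A sketch follows; the weight values are placeholders, since the claim does not fix them:

```python
def combined_loss(loss1, loss2, loss3, w1=1.0, w2=1.0, w3=1.0):
    """Weighted sum of the three branch losses; the weighted total serves
    as the loss of the overall pedestrian segmentation model (claim 5)."""
    return w1 * loss1 + w2 * loss2 + w3 * loss3
```

Training would stop once this combined loss satisfies the preset convergence condition, for example when its change between iterations falls below a threshold.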
6. The pedestrian segmentation model training method according to any one of claims 1-5, characterized in that inputting the training sample set into the first convolutional neural network comprises:
inputting the training sample set into a second convolutional neural network, the second convolutional neural network performing feature extraction on each sample image in the training sample set to obtain a set of feature maps containing feature information;
inputting the set of feature maps into the first convolutional neural network.
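Claim 6 describes a two-stage pipeline: a shared second network extracts feature maps, which are then fed to the first (branched) network. A schematic sketch, with plain functions standing in for the two convolutional networks (all names and the toy "feature" computation are assumptions):

```python
def extract_features(sample_image):
    # stand-in for the second convolutional neural network: each image row
    # is collapsed to one summary value to produce a toy "feature map"
    return [[sum(row)] for row in sample_image]

def feed_first_network(feature_maps):
    # stand-in for inputting the feature-map set into the first network;
    # here it simply reports how many feature maps were passed on
    return len(feature_maps)

def input_training_samples(sample_images):
    # the two-stage pipeline of claim 6: extract features, then feed them
    feature_maps = [extract_features(img) for img in sample_images]
    return feed_first_network(feature_maps)
```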
7. A pedestrian segmentation method, characterized by comprising:
obtaining a pedestrian image;
inputting the pedestrian image into a pedestrian segmentation model trained using the pedestrian segmentation model training method according to any one of claims 1-6;
segmenting the pedestrian image by the pedestrian segmentation model to obtain segmented regions.
8. The pedestrian segmentation method according to claim 7, characterized in that segmenting the pedestrian image by the pedestrian segmentation model to obtain segmented regions comprises:
segmenting the pedestrian image by the third pedestrian segmentation branch model of the pedestrian segmentation model to obtain the segmented regions.
9. The pedestrian segmentation method according to claim 7 or 8, characterized in that segmenting the pedestrian image by the pedestrian segmentation model to obtain segmented regions comprises:
inputting the pedestrian image into a second convolutional neural network, the second convolutional neural network performing feature extraction on the image to obtain a feature map containing feature information;
inputting the feature map into the pedestrian segmentation model to obtain the segmented regions.
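At inference time (claims 7-9) the trained model assigns each pixel to a region class. One common realization, shown here as an assumed sketch rather than the patent's prescribed method, is a per-pixel argmax over the model's class scores:

```python
def segment_regions(per_pixel_scores):
    """per_pixel_scores: one list of per-class scores per pixel (assumed
    format). Returns the highest-scoring class index for each pixel."""
    return [max(range(len(scores)), key=lambda c: scores[c])
            for scores in per_pixel_scores]
```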
10. A pedestrian segmentation model training device, characterized by comprising:
a sample acquisition module for obtaining a training sample set, wherein the training sample set consists of multiple sample images with annotated segmented regions, the segmented regions annotated with first-class labels in the multiple sample images comprising N classes in total and the segmented regions annotated with second-class labels comprising M classes in total, M and N being positive integers;
a sample input module for inputting the training sample set into a first convolutional neural network, wherein the first convolutional neural network comprises at least one sub-convolutional neural network, each sub-convolutional neural network corresponding to one pedestrian segmentation branch model;
a model training module for training each pedestrian segmentation branch model according to the training sample set until a preset convergence condition is met, to obtain a pedestrian segmentation model comprising at least one pedestrian segmentation branch model, wherein the pedestrian segmentation model is used to segment the segmented regions in a pedestrian image.
11. A pedestrian segmentation device, characterized by comprising:
an image acquisition module for obtaining a pedestrian image;
an image input module for inputting the pedestrian image into a pedestrian segmentation model trained using the pedestrian segmentation model training method according to any one of claims 1-6;
an image segmentation module for segmenting the pedestrian image by the pedestrian segmentation model to obtain segmented regions.
12. An electronic device, characterized by comprising:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the pedestrian segmentation model training method according to any one of claims 1-6, or the pedestrian segmentation method according to any one of claims 7-9.
13. A non-transitory computer-readable storage medium, wherein, when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the pedestrian segmentation model training method according to any one of claims 1-6, or the pedestrian segmentation method according to any one of claims 7-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910414408.6A CN110287782A (en) | 2019-05-17 | 2019-05-17 | Pedestrian segmentation model training method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110287782A true CN110287782A (en) | 2019-09-27 |
Family
ID=68002126
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910414408.6A Pending CN110287782A (en) | Pedestrian segmentation model training method and device | 2019-05-17 | 2019-05-17 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110287782A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112541928A (en) * | 2020-12-18 | 2021-03-23 | Shanghai SenseTime Intelligent Technology Co., Ltd. | Network training method and device, image segmentation method and device and electronic equipment |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107784282A (en) * | 2017-10-24 | 2018-03-09 | Beijing Kuangshi Technology Co., Ltd. | Object attribute recognition method, apparatus and system |
CN107909580A (en) * | 2017-11-01 | 2018-04-13 | Shenzhen Shenwang Shijie Technology Co., Ltd. | Pedestrian clothing color recognition method, electronic device and storage medium |
CN108764065A (en) * | 2018-05-04 | 2018-11-06 | Huazhong University of Science and Technology | Pedestrian re-identification method with feature-fusion-assisted learning |
CN108921054A (en) * | 2018-06-15 | 2018-11-30 | Huazhong University of Science and Technology | Semantic-segmentation-based pedestrian multi-attribute recognition method |
CN109598184A (en) * | 2017-09-30 | 2019-04-09 | Beijing Tusen Weilai Technology Co., Ltd. | Processing method and apparatus for multi-segmentation tasks |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108121952B (en) | Face key point positioning method, device, equipment and storage medium | |
CN106295566B (en) | Facial expression recognizing method and device | |
CN105809704B (en) | Identify the method and device of image definition | |
CN108399409B (en) | Image classification method, device and terminal | |
CN106339680B (en) | Face key point positioning method and device | |
CN104408402B (en) | Face identification method and device | |
CN106295511B (en) | Face tracking method and device | |
CN104572905B (en) | Print reference creation method, photo searching method and device | |
CN105631403B (en) | Face identification method and device | |
CN104850828B (en) | Character recognition method and device | |
CN109670397A (en) | Detection method, device, electronic equipment and the storage medium of skeleton key point | |
CN105447864B (en) | Processing method, device and the terminal of image | |
CN104700353B (en) | Image filters generation method and device | |
CN110517185A (en) | Image processing method, device, electronic equipment and storage medium | |
CN107492115A (en) | The detection method and device of destination object | |
CN105512605A (en) | Face image processing method and device | |
CN106296690A (en) | The method for evaluating quality of picture material and device | |
CN107463903A (en) | Face key point positioning method and device | |
CN108010060A (en) | Object detection method and device | |
CN105528078B (en) | The method and device of controlling electronic devices | |
CN109543066A (en) | Video recommendation method, device and computer readable storage medium | |
CN106295515A (en) | Determine the method and device of human face region in image | |
CN109871843A (en) | Character identifying method and device, the device for character recognition | |
CN109784147A (en) | Critical point detection method, apparatus, electronic equipment and storage medium | |
CN110366050A (en) | Processing method, device, electronic equipment and the storage medium of video data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20190927 |