Convolutional neural network structure, face attribute recognition method, apparatus, and terminal device
Technical field
The present application belongs to the technical field of biometric identification, and in particular relates to a convolutional neural network structure, a face attribute recognition method, an apparatus, and a terminal device.
Background technique
With the development of science and technology, personal information about a person can be obtained from information sources such as images, audio, and video by means of biometric identification technology. Face attribute recognition is one such biometric identification technology: through face attribute recognition, personal information such as a person's gender, age, and race can be obtained from an image or a video.
At present, face attribute recognition is mainly performed by convolutional neural networks. In the process of designing a convolutional neural network structure, convolution kernels of the same size are often used within a given convolutional layer, which makes it difficult to adapt to variations in the size of the input image. Meanwhile, current convolutional neural networks are composed entirely of convolutional layers, and each convolutional layer learns fewer features than the convolutional layer before it, so richer features cannot be learned.
In summary, existing convolutional neural networks have the problems that the same convolutional layer uses convolution kernels of identical size, making it difficult to adapt to variations in input image size, and that the convolutional layers cannot learn richer features.
Summary of the invention
In view of this, the embodiments of the present application provide a convolutional neural network structure, a face attribute recognition method, an apparatus, and a terminal device, so as to solve the prior-art problems that the same convolutional layer of a convolutional neural network uses convolution kernels of identical size, making it difficult to adapt to variations in input image size, and that the convolutional layers cannot learn richer features.
A first aspect of the embodiments of the present application provides a convolutional neural network structure for face attribute recognition, wherein at least one deconvolution layer and at least one Inception layer are provided between the first convolutional layer and the last convolutional layer of the convolutional neural network, and the Inception layer uses convolution kernels of multiple different sizes when performing convolution.
A second aspect of the embodiments of the present application provides a face attribute recognition method, comprising:
obtaining an image to be detected, and performing face detection and face alignment on the image to be detected to obtain a face image to be recognized;
inputting the face image to be recognized into a trained convolutional neural network to obtain a face attribute recognition result for the face image to be recognized.
A third aspect of the embodiments of the present application provides a face attribute recognition apparatus, comprising:
a face detection module, configured to obtain an image to be detected, and to perform face detection and face alignment on the image to be detected to obtain a face image to be recognized;
an attribute recognition module, configured to input the face image to be recognized into a trained convolutional neural network to obtain a face attribute recognition result for the face image to be recognized.
A fourth aspect of the embodiments of the present application provides a terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the above method when executing the computer program.
A fifth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the above method.
Compared with the prior art, the embodiments of the present application have the following beneficial effects:
The present application provides a convolutional neural network structure for face attribute recognition, in which at least one deconvolution layer and at least one Inception layer are provided between the first convolutional layer and the last convolutional layer of the convolutional neural network. The deconvolution layer can supplement and extend the feature images learned by the preceding convolutional layer, so that richer features are learned. The Inception layer uses convolution kernels of multiple different sizes when performing convolution; the different kernel sizes enable the Inception layer to learn features at different scales, allowing the convolutional neural network to better adapt to variations in input image size, and the use of multiple kernel sizes also increases the diversity of the features. This solves the prior-art problems that the same convolutional layer of a convolutional neural network uses convolution kernels of identical size, making it difficult to adapt to variations in input image size, and that the convolutional layers cannot learn richer features.
Detailed description of the invention
In order to explain the technical solutions in the embodiments of the present application more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without any creative effort.
Fig. 1 is a schematic structural diagram of the convolutional neural network structure for face attribute recognition provided by an embodiment of the present application;
Fig. 2 is a schematic flowchart of the face attribute recognition method provided by an embodiment of the present application;
Fig. 3 is a schematic diagram of the face attribute recognition apparatus provided by an embodiment of the present application;
Fig. 4 is a schematic diagram of the terminal device provided by an embodiment of the present application.
Specific embodiment
In the following description, specific details such as particular system structures and technologies are set forth for the purpose of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the present application. However, it will be clear to those skilled in the art that the present application may also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary details.
In order to illustrate the technical solutions described herein, specific embodiments are described below.
It should be understood that, when used in this specification and the appended claims, the term "include" indicates the presence of the described features, integers, steps, operations, elements, and/or components, but does not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or sets thereof.
It should also be understood that the terminology used in this specification is for the purpose of describing particular embodiments only and is not intended to limit the present application. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" used in this specification and the appended claims refers to, and includes, any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on the context, as "once it is determined", "in response to determining", "once [the described condition or event] is detected", or "in response to detecting [the described condition or event]".
In specific implementations, the mobile terminal described in the embodiments of the present application includes, but is not limited to, portable devices such as mobile phones, laptop computers, or tablet computers having a touch-sensitive surface (for example, a touch-screen display and/or a touch pad). It should also be understood that, in certain embodiments, the device is not a portable communication device but a desktop computer having a touch-sensitive surface (for example, a touch-screen display and/or a touch pad).
In the following discussion, a mobile terminal including a display and a touch-sensitive surface is described. However, it should be understood that the mobile terminal may include one or more other physical user-interface devices, such as a physical keyboard, a mouse, and/or a joystick.
The mobile terminal supports various applications, such as one or more of the following: a drawing application, a presentation application, a word-processing application, a website-creation application, a disc-burning application, a spreadsheet application, a game application, a telephone application, a video-conferencing application, an e-mail application, an instant-messaging application, a fitness application, a photo-management application, a digital-camera application, a digital-video-camera application, a web-browsing application, a digital-music-player application, and/or a video-player application.
The various applications that can be executed on the mobile terminal may use at least one common physical user-interface device, such as the touch-sensitive surface. One or more functions of the touch-sensitive surface, and the corresponding information displayed on the terminal, may be adjusted and/or changed between applications and/or within a corresponding application. In this way, a common physical architecture of the terminal (for example, the touch-sensitive surface) can support the various applications with user interfaces that are intuitive and transparent to the user.
Embodiment one:
Embodiment one of the present application provides a convolutional neural network structure for face attribute recognition, wherein at least one deconvolution layer and at least one Inception layer are provided between the first convolutional layer and the last convolutional layer of the convolutional neural network, and the Inception layer uses convolution kernels of multiple different sizes when performing convolution.
A convolutional neural network is a kind of multilayer neural network that is good at handling machine learning problems related to images, especially large images. A convolutional neural network contains multiple convolutional layers, and different types of convolutional neural network structures can be obtained through combinations of convolutional layers, pooling layers, fully connected layers, and the like.
In the present application, at least one deconvolution layer and at least one Inception layer are provided between the first convolutional layer and the last convolutional layer of the convolutional neural network.
Current convolutional neural networks often use a series of connected convolutional layers. During the learning process of each convolutional layer, the size of the feature image keeps shrinking, and each convolutional layer learns on the basis of the feature image learned by the preceding convolutional layer, so it is difficult to learn richer features.
The forward propagation process of a deconvolution layer (also called a transposed convolution layer) can be regarded as the backward propagation process of a convolutional layer. A deconvolution layer can restore the size of the feature image and supplement and extend it, so that higher-level features can be learned, and it can also reduce the model parameters of the network.
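As a rough illustration of how a transposed convolution enlarges a feature map (reversing the shrinking caused by a strided convolution), here is a minimal pure-Python 1-D sketch. The kernel values, sizes, and stride are illustrative assumptions, not taken from the application:

```python
def conv1d(x, k, stride=1):
    """Valid 1-D convolution: output length = (len(x) - len(k)) // stride + 1."""
    out = []
    for i in range(0, len(x) - len(k) + 1, stride):
        out.append(sum(x[i + j] * k[j] for j in range(len(k))))
    return out

def transposed_conv1d(x, k, stride=1):
    """1-D transposed convolution: each input value scatters a scaled copy of
    the kernel into the output; output length = (len(x) - 1) * stride + len(k)."""
    out = [0.0] * ((len(x) - 1) * stride + len(k))
    for i, v in enumerate(x):
        for j, w in enumerate(k):
            out[i * stride + j] += v * w
    return out

feat = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]        # hypothetical feature map
k = [0.5, 1.0, 0.5]                          # illustrative kernel

small = conv1d(feat, k, stride=2)            # strided convolution shrinks the map
restored = transposed_conv1d(small, k, stride=2)  # transposed conv enlarges it again
print(len(feat), len(small), len(restored))  # → 6 2 5
```

The length going from 2 back up toward 6 is the "restoring the size of the feature image" effect the paragraph describes; real networks use 2-D learned kernels, but the arithmetic is the same.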
Meanwhile in current convolutional neural networks, the same convolutional layer is provided with one or more convolution kernels, when being provided with
When multiple convolution kernels, the size of each convolution kernel is identical, and therefore, convolutional neural networks are difficult to adapt to input image size change
The case where change.
And the convolution kernel of multiple and different sizes is applied in inception layers, and the diversity of characteristic image can be increased, it is more
Scale fusion feature image, and operand is reduced, so that convolutional neural networks is preferably adapted to the variation of input image size.
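The multi-scale idea can be sketched in a few lines of plain Python: kernels of several widths are applied in parallel with "same" padding, so their outputs have equal length and can be concatenated along the channel axis. The 1-D signal and kernel values are hypothetical; a real Inception block operates on 2-D feature maps with learned kernels:

```python
def conv1d_same(x, k):
    """'Same'-padded 1-D convolution (odd kernel): output length equals len(x)."""
    pad = len(k) // 2
    xp = [0.0] * pad + list(x) + [0.0] * pad
    return [sum(xp[i + j] * k[j] for j in range(len(k))) for i in range(len(x))]

def inception_branches(x, kernels):
    """Apply kernels of several different sizes in parallel; the per-branch
    outputs all share the input's length, so they can be stacked as channels."""
    return [conv1d_same(x, k) for k in kernels]

signal = [1.0, 2.0, 0.0, 2.0, 1.0]           # hypothetical input row
branches = inception_branches(signal, [
    [1.0],                                   # 1-wide kernel: fine scale
    [1/3, 1/3, 1/3],                         # 3-wide kernel: medium scale
    [0.2, 0.2, 0.2, 0.2, 0.2],               # 5-wide kernel: coarse scale
])
assert all(len(b) == len(signal) for b in branches)  # ready to concatenate
```

Because every branch preserves the spatial size regardless of kernel width, the network captures fine and coarse structure simultaneously, which is what lets it cope with inputs whose effective scale varies.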
Further, the output layer of the convolutional neural network is provided with loss functions. The face attributes are divided in advance into semantic attributes and ordinal attributes, and different loss functions are set for the semantic attributes and the ordinal attributes, which improves the recognition accuracy of the trained convolutional neural network. For example, the semantic attributes may be set to correspond to a cross-entropy loss function, and the ordinal attributes to a SmoothL1 loss function.
The output layer of the convolutional neural network is provided with loss functions, and the face attributes include semantic attributes and ordinal attributes.
Semantic attributes include one or more of global attributes (such as race and gender), local attributes (for example, whether there is a beard), action attributes (such as expressions and movements), and wearing attributes (for example, whether glasses or a hat are worn). A semantic attribute is a face attribute that does not require a judgment of magnitude or length.
An ordinal attribute is an attribute that requires a judgment of magnitude or length, such as age and hair: age requires a judgment of magnitude, and hair requires a judgment of length.
In the embodiments of the present application, different loss functions are set for different face attributes: the semantic attributes correspond to a cross-entropy loss function, and the ordinal attributes correspond to a SmoothL1 loss function.
During the training of the convolutional neural network, the loss functions of the individual face attributes are summed according to preset weight values to obtain a total loss function, and a minimization objective function is constructed with the goal of minimizing the total loss function. The predicted values and true values of the training samples are computed through the minimization objective function, and the weight values and biases of the convolutional neural network are updated by back-propagation, so that the total loss function becomes smaller and smaller until it meets a preset requirement and training is complete. The minimization objective function can be expressed as:

$$\min_{W_g,\,W_c}\ \sum_{g=1}^{G} \lambda_g\, L_g\big(F(x;\,W_g,\,W_c),\; y_g\big) \;+\; \gamma_1 \Phi(W_g) \;+\; \gamma_2 \Phi(W_c)$$
where G is the number of face attribute categories (for example, the face attributes may be divided into age, gender, race, and so on); M_g is the number of classes of a given face attribute (for example, gender is divided into male and female); N is the number of face sample images; λ_g is a preset weight value; L_g is the loss function corresponding to each face attribute, obtained by mapping the nonlinear attribute prediction function F and the true values of the face sample images; W_g is the set of weight values of the shared-feature sub-network (such as the convolutional layers, the deconvolution layer, and the Inception layers); W_c is the set of weight values of the attribute-specific feature sub-network (such as the fully connected layers); γ_1 and γ_2 are regularization constants greater than 0; and the function Φ is a regularization term that penalizes the weight values of the network.
Using a loss function targeted to the characteristics of each face attribute — cross-entropy for the semantic attributes and SmoothL1 for the ordinal attributes — allows the convolutional neural network to be trained more accurately and improves the recognition accuracy of the trained network. The cross-entropy loss function can be expressed as:

$$L_j = -\frac{1}{N} \sum_{i=1}^{N} \sum_{k=1}^{M_j} \mathbb{1}\{y_i^j = k\}\,\log \hat{p}_k^j, \qquad \hat{p}_k^j = \mathrm{softmax}\big(F_k^j\big)$$

where \(\hat{p}_k^j\) is the Softmax prediction, whose result lies in the interval [0, 1]; \(F_k^j\) is the k-th predicted value of the j-th face attribute, computed by the attribute prediction function F; \(y_i^j\) is the true value of the j-th face attribute; and \(\mathbb{1}\{\cdot\}\) outputs 1 when the two quantities in the braces are equal, and 0 otherwise.
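A minimal sketch of the softmax cross-entropy computation for one categorical (semantic) attribute, in plain Python; the logits and labels are made up for illustration:

```python
import math

def softmax(logits):
    """Softmax over one sample's logits; each output lies in [0, 1] and sums to 1."""
    m = max(logits)                            # subtract max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(logits_batch, labels):
    """Mean cross-entropy over a batch: -(1/N) * sum of log p[true class]."""
    total = 0.0
    for logits, y in zip(logits_batch, labels):
        p = softmax(logits)
        total -= math.log(p[y])
    return total / len(labels)

# hypothetical 2-class (e.g. gender) logits for three samples
logits = [[2.0, 0.1], [0.3, 1.5], [1.0, 1.0]]
labels = [0, 1, 0]
loss = cross_entropy(logits, labels)
```

With equal logits the predicted probabilities are uniform, so the per-sample loss is log(number of classes) — a quick sanity check on the implementation.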
The SmoothL1 loss function can be expressed as:

$$L = \frac{1}{N}\sum_{i=1}^{N} \mathrm{smooth}_{L1}(x_i - y_i), \qquad \mathrm{smooth}_{L1}(z) = \begin{cases} 0.5\,z^2, & |z| < 1 \\ |z| - 0.5, & \text{otherwise} \end{cases}$$

where x_i is the ordinal-attribute predicted value of the i-th face sample image, and y_i is the ordinal-attribute true value of the i-th face sample image.
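Assuming the standard SmoothL1 definition (0.5·z² for |z| < 1, otherwise |z| − 0.5), the ordinal-attribute loss can be sketched as follows; the predicted and true ages are hypothetical:

```python
def smooth_l1(pred, target):
    """SmoothL1 loss averaged over samples: quadratic near zero (stable
    gradients for small errors), linear for large errors (robust to outliers)."""
    total = 0.0
    for x, y in zip(pred, target):
        z = abs(x - y)
        total += 0.5 * z * z if z < 1.0 else z - 0.5
    return total / len(pred)

ages_pred = [23.4, 41.0, 30.2]   # hypothetical predicted ages
ages_true = [25.0, 41.5, 30.0]
loss = smooth_l1(ages_pred, ages_true)
```

The quadratic-near-zero / linear-far-away shape is why this loss suits ordinal attributes such as age: a wildly wrong prediction does not dominate the gradient the way it would under a plain squared loss.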
The loss function of each semantic attribute and the loss function of each ordinal attribute may be substituted directly into the minimization objective function. Alternatively, the semantic attributes may be summed according to preset semantic weight values to obtain a total loss function of the semantic attributes, the ordinal attributes may be summed according to preset ordinal weight values to obtain a total loss function of the ordinal attributes, and the two total loss functions may then be substituted into the minimization objective function. The summation is:

$$L_c = \sum_{t=1}^{T} \alpha_t L_t$$

where L_c denotes the total loss function of the semantic attributes, or the total loss function of the ordinal attributes; T denotes the number of semantic attributes, or the number of ordinal attributes; and α_t denotes the preset semantic weight value of the t-th semantic attribute, or the preset ordinal weight value of the t-th ordinal attribute.
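The weighted summation of per-attribute losses amounts to a simple dot product; the attribute names, loss values, and weights below are illustrative assumptions, not values from the application:

```python
def total_loss(per_attribute_losses, weights):
    """Weighted sum of per-attribute losses: L_c = sum_t alpha_t * L_t."""
    assert len(per_attribute_losses) == len(weights)
    return sum(a * l for a, l in zip(weights, per_attribute_losses))

# hypothetical per-attribute losses and preset weights
losses  = {"gender": 0.30, "race": 0.45, "age": 1.20}
weights = {"gender": 1.0,  "race": 1.0,  "age": 0.5}

L = total_loss([losses[k] for k in losses],
               [weights[k] for k in losses])
```

Down-weighting the noisier ordinal term (age, here) is one plausible use of the preset weights: it keeps one hard attribute from dominating the gradient shared by all attribute heads.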
Take as an example an improvement of the Alex model, which contains 5 convolutional layers and 2 fully connected layers; the improved model is named the Alinc model, and its structure is shown in Fig. 1. The Alinc model has four convolutional layers. A deconvolution layer 9 is provided between the second convolutional layer 2 and the third convolutional layer 3; an Inception layer 10 is provided between the second convolutional layer 2 and the deconvolution layer 9; and an Inception layer 11 is provided between the deconvolution layer 9 and the third convolutional layer 3. The first convolutional layer 1 is connected to the input layer 7; the fourth convolutional layer 4 is connected to the first fully connected layer 5; the first fully connected layer 5 is connected to the second fully connected layer 6; and the second fully connected layer 6 is connected to the output layer 8. Loss functions are provided in the output layer: the semantic attributes correspond to a cross-entropy loss function, and the ordinal attributes correspond to a SmoothL1 loss function.
The Alinc model was trained and tested with aligned face images to measure its recognition accuracy. The size of the input face image may be set to 3*96*112, where 3 is the number of color-image channels, the width is 96, and the height is 112. After testing, its accuracy is 90.25%. The Alinc model was compared with models of a similar number of layers and similar model size; for example, the model in the document "Leveraging Mid-Level Deep Representations For Predicting Face Attributes in the Wild" has more layers than the Alinc model and a larger model size, yet its accuracy is only 89.8%. The comparison shows that the technical solution in this embodiment effectively improves the structure of the convolutional neural network and raises its recognition accuracy.
In embodiment one of the present application, at least one deconvolution layer and at least one Inception layer are provided between the first convolutional layer and the last convolutional layer of the convolutional neural network. The Inception layers enable the convolutional neural network to better adapt to variations in input image size, while increasing the diversity of the feature images, fusing feature images at multiple scales, and reducing the amount of computation. The deconvolution layer can restore the size of the feature image and supplement and extend it, so that higher-level features can be learned and the model parameters of the network can be reduced. This solves the prior-art problems that the same convolutional layer of a convolutional neural network uses convolution kernels of identical size, making it difficult to adapt to variations in input image size, and that the convolutional layers cannot learn richer features.
Meanwhile, different loss functions are set in the output layer for different face attributes — cross-entropy for the semantic attributes and SmoothL1 for the ordinal attributes — so that the convolutional neural network is trained more accurately and the recognition accuracy of the trained network is improved.
Embodiment two:
A face attribute recognition method provided by embodiment two of the present application is described below with reference to Fig. 2. The face attribute recognition method of this embodiment comprises:
Step S201: obtain an image to be detected, and perform face detection and face alignment on the image to be detected to obtain a face image to be recognized.
When the image to be detected contains multiple face images, face detection and face alignment are performed on the image to be detected to obtain multiple face images to be recognized, ensuring that only one face appears in each face image, which is convenient for the convolutional neural network to perform face attribute recognition.
Step S202: input the face image to be recognized into a trained convolutional neural network to obtain a face attribute recognition result for the face image to be recognized.
After the face image to be recognized is input into the trained convolutional neural network, the shared-feature sub-network in the convolutional neural network (such as the convolutional layers, the deconvolution layer, and the Inception layers) extracts the shared features of the face image to be recognized, and the attribute-specific feature sub-network (such as the fully connected layers) extracts the features of the specified attributes of the face image to be recognized. The output layer computes each face attribute of the face image to be recognized from the shared features and the attribute-specific features, yielding the face attribute recognition result for each face image to be recognized in the image to be detected.
In addition, before face detection is performed on the image to be detected, the image to be detected may also be screened to judge whether it meets the prediction-image requirements, for example whether the image sharpness reaches a preset sharpness threshold and whether the image brightness reaches a preset brightness threshold; images to be detected that do not meet the prediction-image requirements are screened out and rejected.
Further, the convolutional neural network is trained in the following manner:
obtaining face sample images, annotating each face attribute to be learned in the face sample images, inputting the annotated face sample images into an initial convolutional neural network, and training the initial convolutional neural network to obtain the trained convolutional neural network.
Before the initial convolutional neural network is trained, face sample images need to be obtained, and the face attributes that the convolutional neural network needs to recognize must be determined. Each face attribute to be learned is then annotated in the face sample images. For example, if the face attributes to be recognized are race, age, and gender, then the race, age, and gender of the faces in the face sample images need to be annotated, for example as (yellow, 40 years old, male).
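One possible shape for such an annotated sample, sketched as a Python dictionary; the field names and attribute set are assumptions for illustration, not a format prescribed by the application:

```python
# One hypothetical annotated training sample.
sample = {
    "image_path": "faces/000001.jpg",  # hypothetical path
    "labels": {
        "race": "yellow",    # semantic attribute
        "gender": "male",    # semantic attribute
        "age": 40,           # ordinal attribute
    },
}

def is_fully_labeled(s, required=("race", "gender", "age")):
    """Check that every attribute the network must learn has been annotated."""
    return all(k in s["labels"] for k in required)

assert is_fully_labeled(sample)
```

Validating label completeness up front matters here because the multi-attribute loss sums a term per attribute; a sample missing any annotation cannot contribute to all the terms.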
The annotated face images are input into the initial convolutional neural network, and the initial convolutional neural network is trained to obtain the trained convolutional neural network.
After the trained convolutional neural network is obtained, it may be tested and further trained. Face test images are obtained and input into the trained convolutional neural network to obtain their face attribute recognition results. The error between the face attribute results output by the trained convolutional neural network and the actual face attributes of the face test images is computed, and the face test images whose error exceeds the preset error range are used as new face sample images to continue training the convolutional neural network. The preset error range can be determined according to the actual situation; for example, when the face attributes to be recognized are age, gender, and race, the preset error range may be set to an age deviation within 5 years, with any misrecognition of gender or race — such as a male recognized as a female, or a white person recognized as a yellow person — also counting as exceeding the range.
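A sketch of this hard-example selection rule in plain Python, under the example error range above (age tolerance of 5 years; any gender or race misrecognition counts as exceeding the range); the sample values are hypothetical:

```python
def needs_retraining(pred, truth, age_tolerance=5):
    """Flag a test sample whose prediction falls outside the preset error
    range: age off by more than age_tolerance years, or any categorical
    attribute (gender, race) misclassified."""
    if abs(pred["age"] - truth["age"]) > age_tolerance:
        return True
    return pred["gender"] != truth["gender"] or pred["race"] != truth["race"]

tests = [
    ({"age": 33, "gender": "male",   "race": "white"},
     {"age": 30, "gender": "male",   "race": "white"}),    # within range
    ({"age": 52, "gender": "female", "race": "yellow"},
     {"age": 40, "gender": "female", "race": "yellow"}),   # age off by 12 years
]
# Ground-truth records of the flagged samples become new training samples.
new_samples = [truth for pred, truth in tests if needs_retraining(pred, truth)]
```

Only the second sample is flagged, so `new_samples` holds one record; feeding such out-of-range cases back into training is the "continue training" step the paragraph describes.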
Further, the face attributes include semantic attributes and/or ordinal attributes;
the semantic attributes include one or more of global attributes, local attributes, action attributes, and wearing attributes.
Semantic attributes include one or more of global attributes (such as race and gender), local attributes (for example, whether there is a beard), action attributes (such as expressions and movements), and wearing attributes (for example, whether glasses or a hat are worn); a semantic attribute is a face attribute that does not require a judgment of magnitude or length.
An ordinal attribute is an attribute that requires a judgment of magnitude or length, such as age and hair: age requires a judgment of magnitude, and hair requires a judgment of length.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
Embodiment three:
Embodiment three of the present application provides a face attribute recognition apparatus. For ease of description, only the parts related to the present application are shown. As shown in Fig. 3, the face attribute recognition apparatus comprises:
a face detection module 301, configured to obtain an image to be detected, and to perform face detection and face alignment on the image to be detected to obtain a face image to be recognized;
an attribute recognition module 302, configured to input the face image to be recognized into a trained convolutional neural network to obtain a face attribute recognition result for the face image to be recognized.
Further, the face attribute recognition apparatus further comprises:
a network training module, configured to obtain face sample images, annotate each face attribute to be learned in the face sample images, input the annotated face sample images into an initial convolutional neural network, and train the initial convolutional neural network to obtain the trained convolutional neural network.
Further, the face attributes include semantic attributes and/or ordinal attributes;
the semantic attributes include one or more of global attributes, local attributes, action attributes, and wearing attributes.
It should be noted that, since the information exchange between the above apparatus/units, the execution process, and other such matters are based on the same concept as the method embodiments of the present application, reference may be made to the method embodiment section for their specific functions and technical effects, which are not repeated here.
Example IV:
Fig. 4 is a schematic diagram of the terminal device provided by embodiment four of the present application. As shown in Fig. 4, the terminal device 40 of this embodiment comprises a processor 400, a memory 401, and a computer program 402 stored in the memory 401 and executable on the processor 400. When executing the computer program 402, the processor 400 implements the steps in the above face attribute recognition method embodiment, such as steps S201 to S202 shown in Fig. 2. Alternatively, when executing the computer program 402, the processor 400 implements the functions of the modules/units in the above apparatus embodiments, such as the functions of modules 301 to 302 shown in Fig. 3.
Illustratively, the computer program 402 may be divided into one or more modules/units, which are stored in the memory 401 and executed by the processor 400 to carry out the present application. The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, the instruction segments being used to describe the execution process of the computer program 402 in the terminal device 40. For example, the computer program 402 may be divided into a face detection module and an attribute recognition module, whose specific functions are as follows:
obtaining an image to be detected, and performing face detection and face alignment on the image to be detected to obtain a face image to be recognized;
inputting the face image to be recognized into a trained convolutional neural network to obtain a face attribute recognition result for the face image to be recognized.
The terminal device 40 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. The terminal device may include, but is not limited to, the processor 400 and the memory 401. Those skilled in the art will understand that Fig. 4 is only an example of the terminal device 40 and does not constitute a limitation on the terminal device 40; it may include more or fewer components than shown, or combine certain components, or have different components. For example, the terminal device may also include input and output devices, network access devices, a bus, and the like.
The processor 400 may be a central processing unit (Central Processing Unit, CPU), or another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 401 may be an internal storage unit of the terminal device 40, such as a hard disk or memory of the terminal device 40. The memory 401 may also be an external storage device of the terminal device 40, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card (Flash Card) equipped on the terminal device 40. Further, the memory 401 may include both an internal storage unit and an external storage device of the terminal device 40. The memory 401 is used to store the computer program and other programs and data required by the terminal device. The memory 401 may also be used to temporarily store data that has been output or will be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, the division of the above functional units and modules is illustrated only by way of example. In practical applications, the above functions may be allocated to different functional units and modules as needed; that is, the internal structure of the device may be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit; the above integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from each other and are not intended to limit the protection scope of this application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which will not be repeated here.
In the above embodiments, the description of each embodiment has its own emphasis; for parts that are not described or recorded in detail in a certain embodiment, reference may be made to the relevant descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are implemented in hardware or software depends on the specific application and design constraints of the technical solution. A skilled artisan may use different methods to implement the described functions for each specific application, but such implementation should not be considered as going beyond the scope of this application.
In the embodiments provided in this application, it should be understood that the disclosed device/terminal device and method may be implemented in other ways. For example, the device/terminal device embodiments described above are merely illustrative; for instance, the division of the modules or units is only a logical functional division, and there may be other division manners in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, this application implements all or part of the processes in the methods of the above embodiments, which may also be completed by instructing relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the above method embodiments can be implemented. The computer program includes computer program code, and the computer program code may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electric carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electric carrier signals and telecommunication signals.
The above embodiments are only used to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments, or equivalently replace some of the technical features therein; and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application, and should all be included within the protection scope of this application.