CN109165570A - Method and apparatus for generating information - Google Patents


Publication number
CN109165570A
Authority
CN
China
Prior art keywords
facial image
face
image
target
preset quantity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810878626.0A
Other languages
Chinese (zh)
Inventor
邓启力
Current Assignee
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN201810878626.0A
Publication of CN109165570A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/197 Matching; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present application disclose a method and apparatus for generating information. One specific embodiment of the method includes: acquiring a facial image from the facial image sequence corresponding to a target face video as a target facial image, and acquiring a preset number of facial images from the sequence as a preset number of candidate facial images; inputting the target facial image and the preset number of candidate facial images, respectively, into a pre-trained eye-closing action recognition model, to obtain a recognition result for the target facial image and a preset number of recognition results for the candidate facial images; and determining, based on the obtained recognition results, a target result for the target facial image, where the target result characterizes whether the face indicated by the facial image performs a blink action. The embodiment provides support for adding special effects and improves the accuracy of information generation.

Description

Method and apparatus for generating information
Technical field
Embodiments of the present application relate to the field of computer technology, and in particular to a method and apparatus for generating information.
Background
With the development of video applications (such as video processing software and video-based social software), facial special-effect functions have come into wide use.
In the prior art, adding a special effect generally requires a trigger condition, usually an action predetermined by a technician. The blink action, as a trigger condition for common facial special effects, has a wide range of applications.
Summary of the invention
Embodiments of the present application propose a method and apparatus for generating information.
In a first aspect, an embodiment of the present application provides a method for generating information, the method comprising: acquiring a facial image from the facial image sequence corresponding to a target face video as a target facial image, and acquiring a preset number of facial images from the sequence as a preset number of candidate facial images, where the candidate facial images include a facial image adjacent to the target facial image in the sequence; inputting the target facial image and the preset number of candidate facial images, respectively, into a pre-trained eye-closing action recognition model, to obtain a recognition result for the target facial image and a preset number of recognition results for the candidate facial images, where a recognition result characterizes whether the face indicated by a facial image performs an eye-closing action; and determining, based on the obtained recognition results, a target result for the target facial image, where the target result characterizes whether the face indicated by the facial image performs a blink action.
In some embodiments, determining the target result for the target facial image comprises: determining whether the recognition result for the target facial image characterizes the face indicated by the target facial image as not performing an eye-closing action; and in response to determining that it does, generating a target result characterizing the face indicated by the target facial image as not performing a blink action.
In some embodiments, determining the target result for the target facial image comprises: determining whether the recognition result for the target facial image characterizes the face indicated by the target facial image as performing an eye-closing action, and determining whether the preset number of recognition results for the candidate facial images each characterize the corresponding preset number of faces as not performing an eye-closing action; and in response to determining that both conditions hold, generating a target result characterizing the face indicated by the target facial image as performing a blink action.
In some embodiments, after determining the target result for the target facial image, the method further comprises: determining whether the target result characterizes the face indicated by the target facial image as performing a blink action; and in response to determining that it does, generating an overall recognition result for the target face video that characterizes the face indicated by the target face video as performing a blink action.
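The video-level aggregation described here can be sketched as follows. This is a minimal illustration, assuming each per-frame target result is represented as a boolean (True for "blink performed"), a representation the patent leaves open:

```python
def overall_video_result(per_frame_target_results):
    """Aggregate per-frame target results into an overall recognition
    result for the target face video: an overall 'blink performed'
    result is generated as soon as any target facial image's result
    characterizes a blink action."""
    return any(per_frame_target_results)
```

For example, a sequence whose third frame was determined to blink yields an overall "blink performed" result for the whole video.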
In some embodiments, the eye-closing action recognition model is obtained by training as follows: obtaining a training sample set, where a training sample includes a sample facial image and a sample recognition result annotated in advance for that image, the sample recognition result characterizing whether the face indicated by the sample facial image performs an eye-closing action; and, using a machine learning method, training with the sample facial images of the training samples as input and the corresponding sample recognition results as desired output, to obtain the eye-closing action recognition model.
In a second aspect, an embodiment of the present application provides an apparatus for generating information, the apparatus comprising: an image acquisition unit configured to acquire a facial image from the facial image sequence corresponding to a target face video as a target facial image, and to acquire a preset number of facial images from the sequence as a preset number of candidate facial images, where the candidate facial images include a facial image adjacent to the target facial image in the sequence; an image input unit configured to input the target facial image and the preset number of candidate facial images, respectively, into a pre-trained eye-closing action recognition model, to obtain a recognition result for the target facial image and a preset number of recognition results for the candidate facial images, where a recognition result characterizes whether the face indicated by a facial image performs an eye-closing action; and a result determination unit configured to determine, based on the obtained recognition results, a target result for the target facial image, where the target result characterizes whether the face indicated by the facial image performs a blink action.
In some embodiments, the result determination unit includes: a first determining module configured to determine whether the recognition result for the target facial image characterizes the face indicated by the target facial image as not performing an eye-closing action; and a first generation module configured to generate, in response to determining that it does, a target result characterizing the face indicated by the target facial image as not performing a blink action.
In some embodiments, the result determination unit includes: a second determining module configured to determine whether the recognition result for the target facial image characterizes the face indicated by the target facial image as performing an eye-closing action, and whether the preset number of recognition results for the candidate facial images each characterize the corresponding preset number of faces as not performing an eye-closing action; and a second generation module configured to generate, in response to determining that both conditions hold, a target result characterizing the face indicated by the target facial image as performing a blink action.
In some embodiments, the apparatus further includes: an action determination unit configured to determine whether the target result for the target facial image characterizes the face indicated by the target facial image as performing a blink action; and a result generation unit configured to generate, in response to determining that it does, an overall recognition result for the target face video that characterizes the face indicated by the target face video as performing a blink action.
In some embodiments, the eye-closing action recognition model is obtained by training as follows: obtaining a training sample set, where a training sample includes a sample facial image and a sample recognition result annotated in advance for that image, the sample recognition result characterizing whether the face indicated by the sample facial image performs an eye-closing action; and, using a machine learning method, training with the sample facial images of the training samples as input and the corresponding sample recognition results as desired output, to obtain the eye-closing action recognition model.
In a third aspect, an embodiment of the present application provides an electronic device, comprising: one or more processors; and a storage device on which one or more programs are stored, which, when executed by the one or more processors, cause the one or more processors to implement the method of any embodiment of the method for generating information described above.
In a fourth aspect, an embodiment of the present application provides a computer-readable medium on which a computer program is stored, the program, when executed by a processor, implementing the method of any embodiment of the method for generating information described above.
The method and apparatus for generating information provided by the embodiments of the present application acquire a facial image from the facial image sequence corresponding to a target face video as a target facial image and acquire a preset number of facial images from the sequence as candidate facial images; then input the target facial image and the candidate facial images, respectively, into a pre-trained eye-closing action recognition model, to obtain a recognition result for the target facial image and a preset number of recognition results for the candidate facial images, where a recognition result characterizes whether the face indicated by a facial image performs an eye-closing action; and finally determine, based on the obtained recognition results, a target result for the target facial image, where the target result characterizes whether the face indicated by the facial image performs a blink action. The eye-closing action recognition model is thus used effectively, in combination with the candidate facial images, to recognize the blink action corresponding to the target facial image, providing support for adding special effects; moreover, by combining the eye features of the candidate facial images, the blink action corresponding to the target facial image can be recognized more precisely, improving the accuracy of information generation.
Brief description of the drawings
Other features, objects, and advantages of the present application will become more apparent upon reading the following detailed description of non-restrictive embodiments with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which an embodiment of the present application can be applied;
Fig. 2 is a flowchart of one embodiment of the method for generating information according to the present application;
Fig. 3 is a schematic diagram of an application scenario of the method for generating information according to an embodiment of the present application;
Fig. 4 is a flowchart of another embodiment of the method for generating information according to the present application;
Fig. 5 is a structural schematic diagram of one embodiment of the apparatus for generating information according to the present application;
Fig. 6 is a structural schematic diagram of a computer system suitable for implementing an electronic device of an embodiment of the present application.
Detailed description of the embodiments
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It is to be understood that the specific embodiments described here are used only to explain the related invention and are not restrictions on that invention. It should also be noted that, for ease of description, only the parts relevant to the related invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments may be combined with each other. The present application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the method for generating information or the apparatus for generating information of the present application can be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102, and 103, a network 104, and a server 105. The network 104 provides the medium of communication links between the terminal devices 101, 102, and 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or fiber-optic cables.
A user may use the terminal devices 101, 102, and 103 to interact with the server 105 through the network 104, to receive or send messages and the like. Various communication client applications may be installed on the terminal devices 101, 102, and 103, such as video processing applications, video-based social applications, image processing applications, web browser applications, search applications, and social platform software.
The terminal devices 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, and 103 are hardware, they may be various electronic devices with a display screen, including but not limited to smartphones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop portable computers, desktop computers, and the like. When the terminal devices 101, 102, and 103 are software, they may be installed in the electronic devices listed above. They may be implemented as multiple pieces of software or software modules (for example, multiple pieces of software or software modules for providing distributed services), or as a single piece of software or software module. No specific limitation is made here.
The server 105 may be a server providing various services, for example an information processing server that processes face videos sent by the terminal devices 101, 102, and 103. The information processing server may analyze and otherwise process received data such as a face video, and obtain a processing result (such as the target result corresponding to the target facial image).
It should be noted that the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, multiple pieces of software or software modules for providing distributed services), or as a single piece of software or software module. No specific limitation is made here.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely illustrative. Any number of terminal devices, networks, and servers may be provided according to implementation needs. In the case where the target face video, or the data used in the process of generating the recognition result, does not need to be obtained remotely, the above system architecture may include no network and only a terminal device or a server.
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for generating information according to the present application is shown. The method for generating information comprises the following steps:
Step 201: acquire a facial image from the facial image sequence corresponding to a target face video as a target facial image, and acquire a preset number of facial images from the facial image sequence as a preset number of candidate facial images.
In this embodiment, the executing body of the method for generating information (such as the server shown in Fig. 1) may, through a wired or wireless connection, acquire a facial image from the facial image sequence corresponding to the target face video as the target facial image, and acquire a preset number of facial images from the sequence as the candidate facial images. Here, the target face video may be a face video whose corresponding facial images are to be recognized in order to determine whether the faces indicated by those facial images perform a blink action. A face video may be a video obtained by shooting a face. It is understood that a video is essentially an image sequence captured in chronological order; the target face video above therefore corresponds to a facial image sequence.
In this embodiment, the target facial image may be a facial image in the facial image sequence for which it is to be determined whether the face indicated by it performs a blink action. Here, the target facial image may be any facial image in the sequence. The preset number of candidate facial images may include facial images located before the target facial image in the sequence, and may also include facial images located after it; it should be made clear, however, that the candidate facial images include a facial image adjacent to the target facial image in the sequence. Here, the preset number of candidate facial images, and their positions in the facial image sequence, may be set in advance by a technician.
It should be particularly noted that when the target facial image is the first facial image in the facial image sequence (i.e., the facial image displayed first), the preset number of candidate facial images can only be the preset number of facial images located after the target facial image; when the target facial image is the last facial image in the sequence (i.e., the facial image displayed last), the preset number of candidate facial images can only be the preset number of facial images located before the target facial image.
Optionally, when the preset number of candidate facial images is at least two, the candidate facial images may be a preset number of candidate facial images arranged consecutively in the facial image sequence.
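The selection rules above (adjacency, consecutive ordering, and the first/last-frame edge cases) can be sketched in code. The exact index arithmetic below is an illustrative assumption; the description leaves it to the technician:

```python
def select_candidates(sequence_length, target_index, preset_quantity):
    """Choose `preset_quantity` candidate frame indices nearest to the
    target frame. For a first (last) target frame the candidates fall
    entirely after (before) it; otherwise they straddle it. The result
    always includes a frame adjacent to the target, as required."""
    if preset_quantity >= sequence_length:
        raise ValueError("sequence too short for the requested candidates")
    candidates = []
    offset = 1
    while len(candidates) < preset_quantity:
        # Walk outward from the target, keeping only in-range indices.
        for idx in (target_index - offset, target_index + offset):
            if 0 <= idx < sequence_length and len(candidates) < preset_quantity:
                candidates.append(idx)
        offset += 1
    return sorted(candidates)
```

For a 10-frame sequence, `select_candidates(10, 0, 2)` gives `[1, 2]` (first frame: all candidates after it), while `select_candidates(10, 5, 2)` gives `[4, 6]`.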
In this embodiment, the executing body may acquire the target facial image and the preset number of candidate facial images in various ways. Specifically, on the one hand, the executing body may first acquire the target face video, then acquire a facial image from the facial image sequence corresponding to the target face video as the target facial image, and acquire a preset number of facial images from the sequence as the candidate facial images. Here, the executing body may acquire a facial image from the sequence as the target facial image in various ways; for example, it may select one at random, or it may take the facial image at a preset position in the sequence (such as the second).
It should be noted that, here, the executing body may acquire a target face video pre-stored locally, or may acquire a target face video sent by an electronic device in communication connection with it (such as a terminal device shown in Fig. 1).
On the other hand, the executing body may also directly acquire the target facial image and the preset number of candidate facial images, where the target facial image is a facial image in the facial image sequence corresponding to the target face video, and the candidate facial images are a preset number of facial images in the sequence that include a facial image adjacent to the target facial image. Similarly, the executing body may here acquire a target facial image and a preset number of candidate facial images pre-stored locally, or acquire a target facial image and candidate facial images sent by an electronic device in communication connection with it (such as a terminal device shown in Fig. 1).
Step 202: input the target facial image and the preset number of candidate facial images, respectively, into a pre-trained eye-closing action recognition model, to obtain a recognition result for the target facial image and a preset number of recognition results for the candidate facial images.
In this embodiment, based on the target facial image and the preset number of candidate facial images obtained in step 201, the executing body may input the target facial image and the candidate facial images, respectively, into a pre-trained eye-closing action recognition model, to obtain a recognition result for the target facial image and a preset number of recognition results for the candidate facial images. A recognition result may include, but is not limited to, at least one of the following: a number, text, a symbol, an image, audio. A recognition result may be used to characterize whether the face indicated by a facial image performs an eye-closing action. For example, a recognition result may include the text "Yes" and the text "No", where "Yes" characterizes the face indicated by the input facial image as having performed an eye-closing action, and "No" characterizes the face indicated by the input facial image as not having performed an eye-closing action.
In this embodiment, the eye-closing action recognition model may be used to characterize the correspondence between a facial image and the recognition result for that facial image. Specifically, the eye-closing action recognition model may be a correspondence table storing correspondences of multiple facial images and recognition results, generated based on statistics over a large number of facial images and recognition results; or it may be a model obtained by training an initial model (such as a convolutional neural network (CNN) or a residual network (ResNet)) on training samples using a machine learning method.
In some optional implementations of this embodiment, the eye-closing action recognition model may be obtained by training as follows. First, a training sample set is obtained, where a training sample may include a sample facial image and a sample recognition result annotated in advance for the sample facial image; the sample recognition result may be used to characterize whether the face indicated by the sample facial image performs an eye-closing action. Then, using a machine learning method, with the sample facial images of the training samples in the set as input, and the sample recognition results corresponding to the input sample facial images as desired output, the eye-closing action recognition model is obtained by training.
Specifically, as an example, after the above training sample set is obtained, the eye-closing action recognition model may be trained, based on the training samples in the set, as follows: a training sample may be selected from the training sample set, and the following training step executed: input the sample facial image of the selected training sample into a predetermined initial model to obtain a recognition result for the sample facial image; take the sample recognition result corresponding to the input sample facial image as the desired output of the initial model, determine the loss of the obtained recognition result relative to the sample recognition result, and, based on the determined loss, adjust the parameters of the initial model using the method of backpropagation; determine whether there remain unselected training samples in the set; and, in response to determining there are none, determine the adjusted initial model to be the eye-closing action recognition model.
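The training step just described (forward pass, loss against the annotated result, backpropagation-style parameter adjustment, repeating until the samples are exhausted) can be illustrated with a deliberately tiny stand-in: a one-feature logistic classifier over a hypothetical eye-aperture measurement rather than a CNN over pixels. The feature, learning rate, and epoch count are assumptions made for illustration, not details from the patent:

```python
import math
import random

def train_eye_closing_classifier(samples, epochs=200, lr=0.5, seed=0):
    """Train a toy eyes-closed classifier by gradient descent.

    Each sample is (aperture, label), where label 1 marks a sample
    facial image annotated as eyes-closed. The loop mirrors the
    described procedure: select a sample, run the forward pass, compare
    with the desired output, and adjust parameters from the loss
    gradient (backpropagation reduced to one parameter pair)."""
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    for _ in range(epochs):
        rng.shuffle(samples)
        for x, y in samples:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # forward pass
            grad = p - y                  # log-loss gradient w.r.t. the logit
            w -= lr * grad * x            # parameter adjustment from the loss
            b -= lr * grad
    return w, b

def predicts_closed(model, aperture):
    """Apply the trained stand-in model to a new aperture value."""
    w, b = model
    return 1.0 / (1.0 + math.exp(-(w * aperture + b))) > 0.5
```

Trained on a few labelled apertures (small values closed, large values open), the stand-in separates the two classes, which is all the sketch is meant to show.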
It should be noted that the manner of selecting training samples is not limited in this application. For example, selection may be random, or training samples whose included sample recognition results characterize the face indicated by the sample facial image as having performed an eye-closing action may be selected preferentially. It should also be noted that, here, various preset loss functions may be used to determine the loss of the obtained recognition result relative to the sample recognition result; for example, the L2 norm may be used as the loss function to calculate the loss.
In this example, the following step may also be included: in response to determining that there remain unselected training samples, select a training sample again from the unselected training samples, use the most recently adjusted initial model as the new initial model, and continue to execute the training step above.
It should be noted that, in practice, the executing body of the steps for generating the eye-closing action recognition model may be the same as or different from the executing body of the method for generating information. If they are the same, the executing body of the generating steps may store the trained model locally after the eye-closing action recognition model is obtained by training. If they are different, the executing body of the generating steps may send the trained model, after the eye-closing action recognition model is obtained by training, to the executing body of the method for generating information.
Step 203: determine, based on the obtained recognition results, the target result for the target facial image.
In this embodiment, based on the recognition result for the target facial image and the preset number of recognition results for the candidate facial images obtained in step 202, the executing body may determine the target result for the target facial image. The target result may include, but is not limited to, at least one of the following: a number, text, a symbol, an image, audio; the target result may be used to characterize whether the face indicated by a facial image performs a blink action. For example, the target result may include the text "Yes" and the text "No", where "Yes" characterizes the face indicated by the facial image as having performed a blink action, and "No" characterizes the face indicated by the facial image as not having performed a blink action.
In some optional implementations of this embodiment, the above execution body may determine the target result corresponding to the target face image through the following steps. First, the execution body may determine whether the recognition result corresponding to the target face image characterizes that the face indicated by the target face image has not performed an eye-closing action. Then, in response to determining that the recognition result corresponding to the target face image characterizes that the face indicated by the target face image has not performed an eye-closing action, the execution body may generate a target result characterizing that the face indicated by the target face image has not performed a blink action.
It should be noted that the above execution body may generate the target result characterizing that the face indicated by the target face image has not performed a blink action in various ways. For example, the execution body may select, from a preset candidate result set, a candidate result characterizing that the face indicated by the face image has not performed a blink action as the target result; the candidate result set may include candidate results characterizing that the face indicated by the face image has not performed a blink action, and may also include candidate results characterizing that it has. Alternatively, the execution body may directly determine the recognition result corresponding to the target face image as the target result corresponding to the target face image.
In some optional implementations of this embodiment, the above execution body may also determine the target result corresponding to the target face image through the following steps. First, the execution body may determine whether the recognition result corresponding to the target face image characterizes that the face indicated by the target face image performs an eye-closing action, and determine whether the preset number of recognition results corresponding to the preset number of candidate face images respectively characterize that the preset number of faces indicated by the preset number of candidate face images have not performed an eye-closing action. Then, in response to determining that the recognition result corresponding to the target face image characterizes that the face indicated by the target face image performs an eye-closing action, and that the preset number of recognition results corresponding to the preset number of candidate face images respectively characterize that the preset number of faces indicated by the preset number of candidate face images have not performed an eye-closing action, the execution body generates a target result characterizing that the face indicated by the target face image performs a blink action.
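The decision rule of this implementation can be sketched as a few lines of code. The boolean encoding of recognition results (`True` = eyes closed) and the function name are illustrative assumptions, not part of the application.

```python
def is_blink(target_closed, candidates_closed):
    """Report a blink when the recognition result for the target face
    image characterizes an eye-closing action (True) while every
    candidate face image's recognition result characterizes that no
    eye-closing action was performed (False)."""
    return bool(target_closed) and not any(candidates_closed)

# Target frame eyes closed, both neighbouring frames eyes open → blink.
blink = is_blink(True, [False, False])    # → True
# A candidate frame also shows closed eyes → not treated as a blink.
no_blink = is_blink(True, [True, False])  # → False
```

The candidate frames thus act as temporal context: a sustained eye closure across neighbouring frames is not reported as a blink.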
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the method for generating information according to this embodiment. In the application scenario of Fig. 3, the server 301 first obtains the target face video 303 sent by the terminal device 302. Then, the server 301 may obtain a face image from the face-image sequence corresponding to the target face video 303 as the target face image 304, and obtain two face images from the face-image sequence as the candidate face image 305 and the candidate face image 306, where the candidate face image 305 is adjacent to the target face image 304 in the above face-image sequence. Then, the server 301 may input the target face image 304, the candidate face image 305 and the candidate face image 306 respectively into the pre-trained eye-closing action recognition model 307 to obtain the recognition result 308 "eyes closed" corresponding to the target face image 304, the recognition result 309 "eyes open" corresponding to the candidate face image 305, and the recognition result 310 "eyes open" corresponding to the candidate face image 306, where the recognition result "eyes closed" may characterize that the face indicated by the face image has performed an eye-closing action, and the recognition result "eyes open" may characterize that it has not. Finally, based on the obtained recognition results 308, 309 and 310, the server 301 may determine the target result 311 "blink" corresponding to the target face image 304, where the target result "blink" may characterize that the face indicated by the face image has performed a blink action.
The method provided by the above embodiment of this application obtains a face image from the face-image sequence corresponding to the target face video as the target face image and obtains a preset number of face images from the face-image sequence as the preset number of candidate face images; then inputs the target face image and the preset number of candidate face images respectively into the pre-trained eye-closing action recognition model to obtain the recognition result corresponding to the target face image and the preset number of recognition results corresponding to the preset number of candidate face images, where a recognition result characterizes whether the face indicated by a face image performs an eye-closing action; and finally, based on the obtained recognition results, determines the target result corresponding to the target face image, where the target result characterizes whether the face indicated by the face image performs a blink action. The eye-closing action recognition model is thereby effectively used, in combination with the candidate face images, to recognize the blink action corresponding to the target face image, providing support for adding special effects. Moreover, by combining the eye features corresponding to the candidate face images, the blink action corresponding to the target face image can be recognized more precisely, improving the accuracy of information generation.
With further reference to Fig. 4, a flow 400 of another embodiment of the method for generating information is shown. The flow 400 of the method for generating information includes the following steps:
Step 401: obtain a face image from the face-image sequence corresponding to the target face video as the target face image, and obtain a preset number of face images from the face-image sequence as a preset number of candidate face images.
In this embodiment, the execution body of the method for generating information (such as the server shown in Fig. 1) may, through a wired or wireless connection, obtain a face image from the face-image sequence corresponding to the target face video as the target face image, and obtain a preset number of face images from the face-image sequence as the preset number of candidate face images. The target face video may be a face video whose corresponding face images are to be recognized in order to determine whether the face indicated by a face image performs a blink action. A face video may be a video obtained by shooting a face. It can be understood that a video is essentially an image sequence shot in chronological order; the above target face video therefore corresponds to a face-image sequence.
In this embodiment, the target face image may be a face image in the face-image sequence for which it is to be determined whether the face it indicates performs a blink action. Here, the target face image may be any face image in the face-image sequence. The preset number of candidate face images may include face images located before the target face image in the face-image sequence and may also include face images located after it; it should be made clear, however, that the preset number of candidate face images includes a face image adjacent to the target face image in the face-image sequence. Here, the preset number of candidate face images and their positions in the face-image sequence may be preset by a technician.
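A minimal sketch of this frame-selection step, assuming for illustration that the face-image sequence is an indexable list and that candidate positions are given as index offsets (the default offsets of ±1 satisfy the requirement that at least one candidate is adjacent to the target):

```python
def select_frames(face_image_sequence, target_index, offsets=(-1, 1)):
    """Pick the target face image and the candidate face images from
    the face-image sequence by index offsets, skipping offsets that
    fall outside the sequence."""
    candidates = [
        face_image_sequence[target_index + off]
        for off in offsets
        if 0 <= target_index + off < len(face_image_sequence)
    ]
    return face_image_sequence[target_index], candidates

sequence = ["frame0", "frame1", "frame2", "frame3"]
target, candidates = select_frames(sequence, 2)  # → "frame2", ["frame1", "frame3"]
```

In practice the elements would be decoded video frames rather than strings; the selection logic is unchanged.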
Step 402: input the target face image and the preset number of candidate face images respectively into the pre-trained eye-closing action recognition model to obtain the recognition result corresponding to the target face image and the preset number of recognition results corresponding to the preset number of candidate face images.
In this embodiment, based on the target face image and the preset number of candidate face images obtained in step 401, the above execution body may input the target face image and the preset number of candidate face images respectively into the pre-trained eye-closing action recognition model to obtain the recognition result corresponding to the target face image and the preset number of recognition results corresponding to the preset number of candidate face images. A recognition result may include, but is not limited to, at least one of the following: a number, text, a symbol, an image, or audio. A recognition result may be used to characterize whether the face indicated by a face image performs an eye-closing action.
In this embodiment, the eye-closing action recognition model may characterize the correspondence between a face image and the recognition result corresponding to that face image. Specifically, the eye-closing action recognition model may be a correspondence table generated by collecting statistics on a large number of face images and recognition results, storing the correspondences between multiple face images and recognition results; it may also be a model obtained by training an initial model (such as a convolutional neural network or a residual network) on training samples using a machine-learning method.
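The correspondence-table variant of the model can be sketched as below. The class name, the string results, and the use of a hashable image key are all illustrative assumptions; in the trained-network variant, a CNN or residual network would replace the table lookup.

```python
class EyeClosingRecognitionModel:
    """Stand-in for the eye-closing action recognition model in its
    simplest described form: a correspondence table mapping a
    face-image key to its recognition result."""

    def __init__(self, correspondence_table):
        self._table = dict(correspondence_table)

    def recognize(self, face_image_key):
        # Look up the recognition result corresponding to the face image.
        return self._table[face_image_key]

model = EyeClosingRecognitionModel({"img_a": "eyes open", "img_b": "eyes closed"})
result = model.recognize("img_b")  # → "eyes closed"
```

The point of the sketch is the interface: whatever its internals, the model maps one face image to one recognition result, which is how steps 202 and 402 use it.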
Step 403: based on the obtained recognition results, determine the target result corresponding to the target face image.
In this embodiment, based on the recognition result corresponding to the target face image and the preset number of recognition results corresponding to the preset number of candidate face images obtained in step 402, the above execution body may determine the target result corresponding to the target face image. The target result may include, but is not limited to, at least one of the following: a number, text, a symbol, an image, or audio. The target result may be used to characterize whether the face indicated by the face image performs a blink action. For example, the target result may include the text "yes" and the text "no", where "yes" may characterize that the face indicated by the face image has performed a blink action, and "no" may characterize that it has not.
The above steps 401, 402 and 403 are respectively consistent with steps 201, 202 and 203 in the previous embodiment, and the descriptions of steps 201, 202 and 203 above also apply to steps 401, 402 and 403; the details are not repeated here.
Step 404: determine whether the target result corresponding to the target face image characterizes that the face indicated by the target face image performs a blink action.
In this embodiment, based on the target result obtained in step 403, the above execution body may determine whether the target result corresponding to the target face image characterizes that the face indicated by the target face image performs a blink action.
Specifically, the above execution body may determine in various ways whether the target result corresponding to the target face image characterizes that the face indicated by the target face image performs a blink action. For example, the execution body may match the target result against a preset benchmark result that characterizes that the face indicated by a face image performs a blink action (e.g., by computing the similarity between the target result and the benchmark result), and then determine from the matching result whether the target result characterizes that the face indicated by the target face image performs a blink action. Alternatively, the execution body may determine whether the target result characterizes that the face indicated by the target face image performs a blink action according to the recognition results used to determine the target result.
As an example, the above execution body may determine whether the recognition result corresponding to the target face image that was used to determine the target result characterizes that the face indicated by the target face image performs an eye-closing action, and whether the preset number of recognition results corresponding to the preset number of candidate face images that were used to determine the target result characterize that the preset number of faces indicated by the preset number of candidate face images have not performed an eye-closing action; if so, the execution body may determine that the target result characterizes that the face indicated by the target face image performs a blink action.
Step 405: in response to determining that the target result corresponding to the target face image characterizes that the face indicated by the target face image performs a blink action, generate a total recognition result corresponding to the target face video and characterizing that the face indicated by the target face video performs a blink action.
In this embodiment, in response to determining that the target result corresponding to the target face image characterizes that the face indicated by the target face image performs a blink action, the above execution body may generate a total recognition result corresponding to the target face video and characterizing that the face indicated by the target face video performs a blink action. The total recognition result may include, but is not limited to, at least one of the following: a number, text, a symbol, an image, or audio. In particular, the total recognition result here may be identical to the target result corresponding to the target face image.
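A sketch of how per-frame results could be scanned to produce the total recognition result for the whole video. The boolean per-frame encoding (True = eyes closed) and the single-closed-frame blink pattern are simplifying assumptions for illustration; the application allows other candidate-frame configurations.

```python
def total_recognition_result(frames_closed):
    """Scan per-frame eye-closing results of a face video and generate
    the total recognition result: "blink" as soon as one closed-eye
    frame is flanked by open-eye frames, mirroring the per-target-image
    decision of the embodiment; "no blink" otherwise."""
    for i in range(1, len(frames_closed) - 1):
        if frames_closed[i] and not frames_closed[i - 1] and not frames_closed[i + 1]:
            return "blink"
    return "no blink"

result = total_recognition_result([False, False, True, False, False])  # → "blink"
```

Treating each interior frame in turn as the target image, with its neighbours as candidates, is exactly the flow 400 applied frame by frame until a blink is found.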
As can be seen from Fig. 4, compared with the embodiment corresponding to Fig. 2, the flow 400 of the method for generating information in this embodiment highlights the step of determining the total recognition result corresponding to the target face video using the target result corresponding to the target face image. The scheme described in this embodiment can thereby recognize, based on the face images corresponding to a face video, whether the face indicated by the face video has performed a blink action, improving the comprehensiveness of information generation.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, this application provides an embodiment of an apparatus for generating information. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may be applied in various electronic devices.
As shown in Fig. 5, the apparatus 500 for generating information of this embodiment includes an image acquisition unit 501, an image input unit 502 and a result determination unit 503. The image acquisition unit 501 is configured to obtain a face image from the face-image sequence corresponding to the target face video as the target face image, and obtain a preset number of face images from the face-image sequence as a preset number of candidate face images, where the preset number of candidate face images includes a face image adjacent to the target face image in the face-image sequence. The image input unit 502 is configured to input the target face image and the preset number of candidate face images respectively into the pre-trained eye-closing action recognition model to obtain the recognition result corresponding to the target face image and the preset number of recognition results corresponding to the preset number of candidate face images, where a recognition result may characterize whether the face indicated by a face image performs an eye-closing action. The result determination unit 503 is configured to determine, based on the obtained recognition results, the target result corresponding to the target face image, where the target result may characterize whether the face indicated by the face image performs a blink action.
In this embodiment, the image acquisition unit 501 of the apparatus 500 for generating information may, through a wired or wireless connection, obtain a face image from the face-image sequence corresponding to the target face video as the target face image, and obtain a preset number of face images from the face-image sequence as the preset number of candidate face images. The target face video may be a face video whose corresponding face images are to be recognized in order to determine whether the face indicated by a face image performs a blink action. A face video may be a video obtained by shooting a face. It can be understood that a video is essentially an image sequence shot in chronological order; the above target face video therefore corresponds to a face-image sequence.
In this embodiment, the target face image may be a face image in the face-image sequence for which it is to be determined whether the face it indicates performs a blink action. Here, the target face image may be any face image in the face-image sequence. The preset number of candidate face images may include face images located before the target face image in the face-image sequence and may also include face images located after it; it should be made clear, however, that the preset number of candidate face images includes a face image adjacent to the target face image in the face-image sequence. Here, the preset number of candidate face images and their positions in the face-image sequence may be preset by a technician.
In this embodiment, based on the target face image and the preset number of candidate face images obtained by the image acquisition unit 501, the image input unit 502 may input the target face image and the preset number of candidate face images respectively into the pre-trained eye-closing action recognition model to obtain the recognition result corresponding to the target face image and the preset number of recognition results corresponding to the preset number of candidate face images. A recognition result may include, but is not limited to, at least one of the following: a number, text, a symbol, an image, or audio. A recognition result may be used to characterize whether the face indicated by a face image performs an eye-closing action.
In this embodiment, the eye-closing action recognition model may characterize the correspondence between a face image and the recognition result corresponding to that face image. Specifically, the eye-closing action recognition model may be a correspondence table generated by collecting statistics on a large number of face images and recognition results, storing the correspondences between multiple face images and recognition results; it may also be a model obtained by training an initial model (such as a convolutional neural network or a residual network) on training samples using a machine-learning method.
In this embodiment, based on the recognition result corresponding to the target face image and the preset number of recognition results corresponding to the preset number of candidate face images obtained by the image input unit 502, the result determination unit 503 may determine the target result corresponding to the target face image. The target result may include, but is not limited to, at least one of the following: a number, text, a symbol, an image, or audio. The target result may be used to characterize whether the face indicated by the face image performs a blink action.
In some optional implementations of this embodiment, the result determination unit 503 may include: a first determination module (not shown), configured to determine whether the recognition result corresponding to the target face image characterizes that the face indicated by the target face image has not performed an eye-closing action; and a first generation module (not shown), configured to, in response to determining that the recognition result corresponding to the target face image characterizes that the face indicated by the target face image has not performed an eye-closing action, generate a target result characterizing that the face indicated by the target face image has not performed a blink action.
In some optional implementations of this embodiment, the result determination unit 503 may include: a second determination module (not shown), configured to determine whether the recognition result corresponding to the target face image characterizes that the face indicated by the target face image performs an eye-closing action, and to determine whether the preset number of recognition results corresponding to the preset number of candidate face images respectively characterize that the preset number of faces indicated by the preset number of candidate face images have not performed an eye-closing action; and a second generation module (not shown), configured to, in response to determining that the recognition result corresponding to the target face image characterizes that the face indicated by the target face image performs an eye-closing action and that the preset number of recognition results corresponding to the preset number of candidate face images respectively characterize that the preset number of faces indicated by the preset number of candidate face images have not performed an eye-closing action, generate a target result characterizing that the face indicated by the target face image performs a blink action.
In some optional implementations of this embodiment, the apparatus 500 may further include: an action determination unit (not shown), configured to determine whether the target result corresponding to the target face image characterizes that the face indicated by the target face image performs a blink action; and a result generation unit (not shown), configured to, in response to determining that the target result corresponding to the target face image characterizes that the face indicated by the target face image performs a blink action, generate a total recognition result corresponding to the target face video and characterizing that the face indicated by the target face video performs a blink action.
In some optional implementations of this embodiment, the eye-closing action recognition model may be obtained by training through the following steps. First, a training-sample set is obtained, where a training sample may include a sample face image and a sample recognition result pre-labelled for the sample face image; the sample recognition result may characterize whether the face indicated by the sample face image performs an eye-closing action. Then, using a machine-learning method, the sample face image of a training sample in the training-sample set is taken as the input and the sample recognition result corresponding to the input sample face image is taken as the desired output, and the eye-closing action recognition model is obtained by training.
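A toy sketch of the described supervised training, under strong simplifying assumptions: each sample face image is reduced to a single "eye openness" scalar, and a one-feature logistic model stands in for the initial model (e.g. a CNN) that is iteratively adjusted toward the desired (pre-labelled) output. All names and the feature encoding are illustrative.

```python
import math

def train_eye_closing_model(training_samples, lr=0.5, epochs=200):
    """Adjust a one-feature logistic model so its output approaches
    the labelled output (label 1 = eyes closed) for every sample."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in training_samples:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # model output
            grad = p - y                               # cross-entropy gradient
            w -= lr * grad * x
            b -= lr * grad
    return w, b

# (eye-openness feature, eyes-closed label)
samples = [(0.9, 0), (0.8, 0), (0.1, 1), (0.05, 1)]
w, b = train_eye_closing_model(samples)

def predicts_closed(x):
    return 1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5
```

The loop mirrors the described procedure: sample image in, desired recognition result out, model parameters adjusted until the two agree; a real implementation would replace the scalar feature with image tensors and the logistic model with a network.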
It can be understood that all the units recorded in the apparatus 500 correspond to the steps of the method described with reference to Fig. 2. Accordingly, the operations, features and beneficial effects described above for the method also apply to the apparatus 500 and the units it contains; the details are not repeated here.
In the apparatus 500 provided by the above embodiment of this application, the image acquisition unit 501 obtains a face image from the face-image sequence corresponding to the target face video as the target face image and obtains a preset number of face images from the face-image sequence as the preset number of candidate face images; the image input unit 502 then inputs the target face image and the preset number of candidate face images respectively into the pre-trained eye-closing action recognition model to obtain the recognition result corresponding to the target face image and the preset number of recognition results corresponding to the preset number of candidate face images, where a recognition result characterizes whether the face indicated by a face image performs an eye-closing action; and finally the result determination unit 503 determines, based on the obtained recognition results, the target result corresponding to the target face image, where the target result characterizes whether the face indicated by the face image performs a blink action. The eye-closing action recognition model is thereby effectively used, in combination with the candidate face images, to recognize the blink action corresponding to the target face image, providing support for adding special effects. Moreover, by combining the eye features corresponding to the candidate face images, the blink action corresponding to the target face image can be recognized more precisely, improving the accuracy of information generation.
Referring now to Fig. 6, a structural schematic diagram of a computer system 600 of an electronic device (such as the terminal device/server shown in Fig. 1) suitable for implementing the embodiments of this application is shown. The electronic device shown in Fig. 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of this application.
As shown in Fig. 6, the computer system 600 includes a central processing unit (CPU) 601, which can execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage section 608 into a random-access memory (RAM) 603. The RAM 603 also stores the various programs and data required for the operation of the system 600. The CPU 601, the ROM 602 and the RAM 603 are connected to one another through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse and the like; an output section 607 including a cathode-ray tube (CRT), a liquid-crystal display (LCD), a loudspeaker and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A driver 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disc, a magneto-optical disc or a semiconductor memory, is mounted on the driver 610 as needed, so that a computer program read from it is installed into the storage section 608 as needed.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. When the computer program is executed by the central processing unit (CPU) 601, the above functions defined in the method of this application are executed. It should be noted that the computer-readable medium described in this application may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above.
More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random-access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this application, a computer-readable storage medium may be any tangible medium that contains or stores a program, where the program can be used by or in combination with an instruction execution system, apparatus or device. In this application, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; such a medium can send, propagate or transmit a program for use by or in combination with an instruction execution system, apparatus or device. The program code contained on a computer-readable medium may be transmitted using any suitable medium, including but not limited to wireless, wire, optical cable, RF, or any suitable combination of the above.
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions and operations of the systems, methods and computer program products according to the various embodiments of this application. In this regard, each box in a flowchart or block diagram may represent a module, a program segment, or part of code, and the module, program segment or part of code contains one or more executable instructions for implementing a specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that marked in the drawings. For example, two successively shown boxes may actually be executed substantially in parallel, and they may sometimes be executed in the opposite order, depending on the function involved. It should also be noted that each box in a block diagram and/or flowchart, and any combination of boxes in a block diagram and/or flowchart, can be implemented by a dedicated hardware-based system that executes the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of this application may be implemented in software or in hardware. The described units may also be provided in a processor; for example, it may be described as: a processor including an image acquisition unit, an image input unit and a result determination unit. The names of these units do not, under certain circumstances, constitute a limitation on the units themselves; for example, the image acquisition unit may also be described as "a unit for obtaining a target face image and candidate face images".
As another aspect, this application also provides a computer-readable medium, which may be included in the electronic device described in the above embodiments, or may exist separately without being assembled into the electronic device. The above computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to: obtain a face image from the face-image sequence corresponding to the target face video as the target face image, and obtain a preset number of face images from the face-image sequence as a preset number of candidate face images, where the preset number of candidate face images includes a face image adjacent to the target face image in the face-image sequence; input the target face image and the preset number of candidate face images respectively into the pre-trained eye-closing action recognition model to obtain the recognition result corresponding to the target face image and the preset number of recognition results corresponding to the preset number of candidate face images, where a recognition result characterizes whether the face indicated by a face image performs an eye-closing action; and, based on the obtained recognition results, determine the target result corresponding to the target face image, where the target result characterizes whether the face indicated by the face image performs a blink action.
The above description is only a preferred embodiment of the present application and an explanation of the applied technical principles. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, but also covers other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above inventive concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present application.

Claims (12)

1. A method for generating information, comprising:
acquiring a face image from a face image sequence corresponding to a target face video as a target face image, and acquiring a preset number of face images from the face image sequence as a preset number of candidate face images, wherein the preset number of candidate face images include face images adjacent to the target face image in the face image sequence;
inputting the target face image and the preset number of candidate face images respectively into a pre-trained eye-closing action recognition model to obtain a recognition result corresponding to the target face image and a preset number of recognition results corresponding to the preset number of candidate face images, wherein a recognition result is used to characterize whether a face indicated by a face image performs an eye-closing action; and
determining, based on the obtained recognition results, a target result corresponding to the target face image, wherein the target result is used to characterize whether the face indicated by the face image performs a blink action.
2. The method according to claim 1, wherein determining the target result corresponding to the target face image comprises:
determining whether the recognition result corresponding to the target face image characterizes that the face indicated by the target face image does not perform an eye-closing action; and
in response to determining that the recognition result corresponding to the target face image characterizes that the face indicated by the target face image does not perform an eye-closing action, generating a target result characterizing that the face indicated by the target face image does not perform a blink action.
3. The method according to claim 1, wherein determining the target result corresponding to the target face image comprises:
determining whether the recognition result corresponding to the target face image characterizes that the face indicated by the target face image performs an eye-closing action, and determining whether the preset number of recognition results corresponding to the preset number of candidate face images respectively characterize that the preset number of faces indicated by the preset number of candidate face images do not perform an eye-closing action; and
in response to determining that the recognition result corresponding to the target face image characterizes that the face indicated by the target face image performs an eye-closing action, and that the preset number of recognition results corresponding to the preset number of candidate face images respectively characterize that the preset number of faces indicated by the preset number of candidate face images do not perform an eye-closing action, generating a target result characterizing that the face indicated by the target face image performs a blink action.
4. The method according to claim 1, wherein after determining the target result corresponding to the target face image, the method further comprises:
determining whether the target result corresponding to the target face image characterizes that the face indicated by the target face image performs a blink action; and
in response to determining that the target result corresponding to the target face image characterizes that the face indicated by the target face image performs a blink action, generating a total recognition result corresponding to the target face video, the total recognition result characterizing that the face indicated by the target face video performs a blink action.
5. The method according to one of claims 1-4, wherein the eye-closing action recognition model is trained by the following steps:
acquiring a training sample set, wherein a training sample includes a sample face image and a sample recognition result pre-annotated for the sample face image, the sample recognition result being used to characterize whether a face indicated by the sample face image performs an eye-closing action; and
training, using a machine learning method, with the sample face images of the training samples in the training sample set as input and the sample recognition results corresponding to the input sample face images as desired output, to obtain the eye-closing action recognition model.
6. An apparatus for generating information, comprising:
an image acquisition unit, configured to acquire a face image from a face image sequence corresponding to a target face video as a target face image, and acquire a preset number of face images from the face image sequence as a preset number of candidate face images, wherein the preset number of candidate face images include face images adjacent to the target face image in the face image sequence;
an image input unit, configured to input the target face image and the preset number of candidate face images respectively into a pre-trained eye-closing action recognition model to obtain a recognition result corresponding to the target face image and a preset number of recognition results corresponding to the preset number of candidate face images, wherein a recognition result is used to characterize whether a face indicated by a face image performs an eye-closing action; and
a result determination unit, configured to determine, based on the obtained recognition results, a target result corresponding to the target face image, wherein the target result is used to characterize whether the face indicated by the face image performs a blink action.
7. The apparatus according to claim 6, wherein the result determination unit comprises:
a first determining module, configured to determine whether the recognition result corresponding to the target face image characterizes that the face indicated by the target face image does not perform an eye-closing action; and
a first generating module, configured to, in response to determining that the recognition result corresponding to the target face image characterizes that the face indicated by the target face image does not perform an eye-closing action, generate a target result characterizing that the face indicated by the target face image does not perform a blink action.
8. The apparatus according to claim 6, wherein the result determination unit comprises:
a second determining module, configured to determine whether the recognition result corresponding to the target face image characterizes that the face indicated by the target face image performs an eye-closing action, and determine whether the preset number of recognition results corresponding to the preset number of candidate face images respectively characterize that the preset number of faces indicated by the preset number of candidate face images do not perform an eye-closing action; and
a second generating module, configured to, in response to determining that the recognition result corresponding to the target face image characterizes that the face indicated by the target face image performs an eye-closing action, and that the preset number of recognition results corresponding to the preset number of candidate face images respectively characterize that the preset number of faces indicated by the preset number of candidate face images do not perform an eye-closing action, generate a target result characterizing that the face indicated by the target face image performs a blink action.
9. The apparatus according to claim 6, further comprising:
an action determination unit, configured to determine whether the target result corresponding to the target face image characterizes that the face indicated by the target face image performs a blink action; and
a result generation unit, configured to, in response to determining that the target result corresponding to the target face image characterizes that the face indicated by the target face image performs a blink action, generate a total recognition result corresponding to the target face video, the total recognition result characterizing that the face indicated by the target face video performs a blink action.
10. The apparatus according to one of claims 6-9, wherein the eye-closing action recognition model is trained by the following steps:
acquiring a training sample set, wherein a training sample includes a sample face image and a sample recognition result pre-annotated for the sample face image, the sample recognition result being used to characterize whether a face indicated by the sample face image performs an eye-closing action; and
training, using a machine learning method, with the sample face images of the training samples in the training sample set as input and the sample recognition results corresponding to the input sample face images as desired output, to obtain the eye-closing action recognition model.
11. An electronic device, comprising:
one or more processors; and
a storage device, on which one or more programs are stored,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-5.
12. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-5.
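The supervised training procedure of claims 5 and 10 can be illustrated with a minimal sketch. The synthetic arrays standing in for sample face images and the scikit-learn logistic-regression learner are assumptions for illustration only; the application does not prescribe a particular model family.

```python
# Minimal supervised-training sketch for the eye-closing action recognition
# model of claims 5 and 10. The random vectors below are stand-ins for sample
# face images; the logistic-regression learner is an illustrative choice.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Training sample set: each sample face image is paired with a pre-annotated
# sample recognition result (1 = eye-closing action performed, 0 = not).
rng = np.random.default_rng(0)
closed_eye_images = rng.normal(loc=1.0, size=(50, 16))   # stand-ins for closed-eye crops
open_eye_images = rng.normal(loc=-1.0, size=(50, 16))    # stand-ins for open-eye crops
X = np.vstack([closed_eye_images, open_eye_images])
y = np.array([1] * 50 + [0] * 50)

# Train with the sample face images as input and the annotated recognition
# results as the desired output, yielding the eye-closing action recognition model.
model = LogisticRegression().fit(X, y)
train_acc = model.score(X, y)
```

In the claimed method, the resulting model is then applied per frame: once to the target face image and once to each candidate face image, producing the recognition results that the target-result determination step consumes.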
CN201810878626.0A 2018-08-03 2018-08-03 Method and apparatus for generating information Pending CN109165570A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810878626.0A CN109165570A (en) 2018-08-03 2018-08-03 Method and apparatus for generating information

Publications (1)

Publication Number Publication Date
CN109165570A true CN109165570A (en) 2019-01-08

Family

ID=64898873

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810878626.0A Pending CN109165570A (en) 2018-08-03 2018-08-03 Method and apparatus for generating information

Country Status (1)

Country Link
CN (1) CN109165570A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111277893A (en) * 2020-02-12 2020-06-12 北京字节跳动网络技术有限公司 Video processing method and device, readable medium and electronic equipment
CN112465717A (en) * 2020-11-25 2021-03-09 北京字跳网络技术有限公司 Face image processing model training method and device, electronic equipment and medium
CN113158948A (en) * 2021-04-29 2021-07-23 宜宾中星技术智能***有限公司 Information generation method and device and terminal equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106897659A (en) * 2015-12-18 2017-06-27 腾讯科技(深圳)有限公司 The recognition methods of blink motion and device
CN106934329A (en) * 2015-12-31 2017-07-07 掌赢信息科技(上海)有限公司 One kind blink recognition methods and electronic equipment
CN107766840A (en) * 2017-11-09 2018-03-06 杭州有盾网络科技有限公司 A kind of method, apparatus of blink detection, equipment and computer-readable recording medium
CN108229376A (en) * 2017-12-29 2018-06-29 百度在线网络技术(北京)有限公司 For detecting the method and device of blink


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111277893A (en) * 2020-02-12 2020-06-12 北京字节跳动网络技术有限公司 Video processing method and device, readable medium and electronic equipment
CN111277893B (en) * 2020-02-12 2021-06-25 北京字节跳动网络技术有限公司 Video processing method and device, readable medium and electronic equipment
CN112465717A (en) * 2020-11-25 2021-03-09 北京字跳网络技术有限公司 Face image processing model training method and device, electronic equipment and medium
CN112465717B (en) * 2020-11-25 2024-05-31 北京字跳网络技术有限公司 Face image processing model training method, device, electronic equipment and medium
CN113158948A (en) * 2021-04-29 2021-07-23 宜宾中星技术智能***有限公司 Information generation method and device and terminal equipment

Similar Documents

Publication Publication Date Title
CN108805091A (en) Method and apparatus for generating model
CN108898185A (en) Method and apparatus for generating image recognition model
CN109858445A (en) Method and apparatus for generating model
CN108985257A (en) Method and apparatus for generating information
CN108537152A (en) Method and apparatus for detecting live body
CN108960316A (en) Method and apparatus for generating model
CN107578017A (en) Method and apparatus for generating image
CN109086719A (en) Method and apparatus for output data
CN109460514A (en) Method and apparatus for pushed information
CN108595628A (en) Method and apparatus for pushed information
CN109545192A (en) Method and apparatus for generating model
CN108121800A (en) Information generating method and device based on artificial intelligence
CN109993150A (en) The method and apparatus at age for identification
CN109976997A (en) Test method and device
CN108345387A (en) Method and apparatus for output information
CN109829432A (en) Method and apparatus for generating information
CN108491823A (en) Method and apparatus for generating eye recognition model
CN109214501A (en) The method and apparatus of information for identification
CN109299477A (en) Method and apparatus for generating text header
CN108509921A (en) Method and apparatus for generating information
CN108960110A (en) Method and apparatus for generating information
CN108511066A (en) information generating method and device
CN108182472A (en) For generating the method and apparatus of information
CN108446658A (en) The method and apparatus of facial image for identification
CN110059624A (en) Method and apparatus for detecting living body

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination