CN108737729A - Automatic photographing method and device - Google Patents

Info

Publication number
CN108737729A
Authority
CN
China
Prior art keywords
baby
expression
face
real
pictures
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810420214.2A
Other languages
Chinese (zh)
Inventor
张弓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810420214.2A
Publication of CN108737729A
Current legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/61: Control of cameras or camera modules based on recognised objects
    • H04N 23/611: Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161: Detection; Localisation; Normalisation
    • G06V 40/165: Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174: Facial expression recognition
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/64: Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present application proposes an automatic photographing method and device. The method includes: detecting, according to a baby face recognition model trained by deep learning, whether a baby face region exists in a captured image; if a baby face region is detected in the captured image, detecting the real-time expression of the baby's face according to a baby expression recognition model trained by deep learning; and if the comparison shows that the real-time expression belongs to a sample expression category stored in a preset capture database, shooting a baby face picture corresponding to the real-time expression. Automatic photographing of interesting baby face images is thereby achieved, improving photographing efficiency while ensuring photographing satisfaction, and solving the prior-art technical problem that capturing is difficult and photographing is inefficient because the baby is difficult to coordinate with and the user needs to maintain a high level of attention for a long time to capture the moment manually.

Description

Automatic photographing method and device
Technical field
The present application relates to the field of photographing technology, and in particular to an automatic photographing method and device.
Background
With the popularization of camera functions, users are increasingly accustomed to recording the details of daily life by taking photographs. For example, parents are keen to photograph their baby and to store or share the pictures. Baby pictures with higher photographic interest obviously better satisfy users' entertainment needs. In the related art, shooting an interesting baby picture generally requires concentrating on the changes in the baby's expression and manually capturing an interesting expression. However, with this shooting style, on the one hand, since a baby's expression is transient, a good photo opportunity may be missed when capturing manually; on the other hand, the photographing user has to keep their attention fixed on the baby's expression changes, so photographing is inefficient.
Summary of the application
The present application provides an automatic photographing method and device to solve the technical problem in the prior art that capturing is difficult and photographing is inefficient because the baby is difficult to coordinate with and the user needs to maintain a high level of attention for a long time to capture the moment manually.
A first embodiment of the present application provides an automatic photographing method, including the following steps: detecting, according to a baby face recognition model trained by deep learning, whether a baby face region exists in a captured image; if a baby face region is detected in the captured image, detecting the real-time expression of the baby's face according to a baby expression recognition model trained by deep learning; and if the comparison shows that the real-time expression belongs to a sample expression category stored in a preset capture database, shooting a baby face picture corresponding to the real-time expression.
A second embodiment of the present application provides an automatic photographing device, including: a first detection module, configured to detect, according to a baby face recognition model trained by deep learning, whether a baby face region exists in a captured image; a second detection module, configured to, if a baby face region is detected in the captured image, detect the real-time expression of the baby's face according to a baby expression recognition model trained by deep learning; and a shooting module, configured to, if the comparison shows that the real-time expression belongs to a sample expression category stored in a preset capture database, shoot a baby face picture corresponding to the real-time expression.
A third embodiment of the present application provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor. When the processor executes the computer program, the automatic photographing method described in the above embodiments is implemented.
A fourth embodiment of the present application provides a non-transitory computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, the automatic photographing method described in the above embodiments is implemented.
The technical solutions provided by the embodiments of the present application may include the following beneficial effects:
According to a baby face recognition model trained by deep learning, whether a baby face region exists in a captured image is detected. If a baby face region is detected in the captured image, the real-time expression of the baby's face is detected according to a baby expression recognition model trained by deep learning. If the comparison then shows that the real-time expression belongs to a sample expression category stored in a preset capture database, a baby face picture corresponding to the real-time expression is shot. Automatic photographing of interesting baby face images is thereby achieved, improving photographing efficiency while ensuring photographing satisfaction, and solving the prior-art technical problem that capturing is difficult and photographing is inefficient because the baby is difficult to coordinate with and the user needs to maintain a high level of attention for a long time to capture the moment manually.
Additional aspects and advantages of the present application will be set forth in part in the following description, and will in part become apparent from the following description or be learned through practice of the present application.
Description of the drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a flowchart of an automatic photographing method according to an embodiment of the present application;
Fig. 2 is a flowchart of an automatic photographing method according to a second embodiment of the present application;
Fig. 3 is a flowchart of an automatic photographing method according to a third embodiment of the present application;
Fig. 4 is a schematic diagram of an application scenario of the automatic photographing method according to an embodiment of the present application;
Fig. 5 is a flowchart of an automatic photographing method according to a fourth embodiment of the present application;
Fig. 6 is a flowchart of an automatic photographing method according to a fifth embodiment of the present application;
Fig. 7 is a schematic structural diagram of an automatic photographing device according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of an automatic photographing device according to a second embodiment of the present application; and
Fig. 9 is a schematic structural diagram of an automatic photographing device according to a third embodiment of the present application.
Detailed description of the embodiments
The embodiments of the present application are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the accompanying drawings are exemplary; they are intended to explain the present application and should not be construed as limiting the present application.
The automatic photographing method and device of the embodiments of the present application are described below with reference to the accompanying drawings.
Fig. 1 is a flowchart of an automatic photographing method according to an embodiment of the present application. As shown in Fig. 1, the automatic photographing method includes:
Step 101: detecting, according to a baby face recognition model trained by deep learning, whether a baby face region exists in a captured image.
Specifically, a baby face recognition model is obtained in advance by training on a large amount of experimental data. According to the baby face recognition model trained by deep learning, whether a baby face region exists in the captured image is detected, so that after a baby face region is recognized, the capture processing of the embodiments of the present application is carried out further.
It should be understood that in this embodiment, on the one hand, learning the facial features of babies with a baby face recognition model trained by deep learning improves the accuracy of baby face region recognition and avoids missed shots. On the other hand, when no baby face region can be recognized in the image, the system does not start the related operations, which avoids wasting resources and improves the battery life of the terminal device.
In different application scenarios, the training method of the baby face recognition model trained by deep learning differs, as illustrated below:
As one possible implementation, as shown in Fig. 2, before the above step 101, the method further includes:
Step 201: collecting sample pictures containing baby face regions.
The source of the collected sample pictures containing baby face regions varies with the application scenario. In some examples, when the application scenario is recording the growth of a specific baby in a family, the collected sample pictures containing baby faces are pictures of that specific baby; the pictures of the specific baby may come from a photo album, cloud storage, or the like, or the user may be instructed to shoot them in real time when the model is built. In this example, new sample pictures may also be obtained in real time as the baby grows, to update the corresponding baby face recognition model. In other examples, when the application scenario is a photographic service shooting interesting baby pictures, the collected sample pictures containing baby face regions are pictures of a large number of different babies, which may come from the Internet or the like.
Step 202: cutting each sample picture into multiple data blocks according to a preset size, and setting a content label for each data block.
Step 203: inputting all the data blocks and the content label corresponding to each data block into a convolutional neural network model for training, to generate the baby face recognition model.
Each sample picture is cut into multiple data blocks according to the preset size, and a content label is set for each data block to mark whether the content of the corresponding data block is background content, face content, empty content, or the like. All the data blocks and the content label corresponding to each data block are then input into the convolutional neural network model for training, generating the baby face recognition model. A picture is thus split into multiple data blocks and the baby face recognition model is built at data-block granularity, which improves the accuracy of baby face region recognition; the more data blocks a picture is split into, the finer the recognition granularity and the higher the recognition accuracy.
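For illustration only (it is not part of the original disclosure), the following minimal Python/PyTorch sketch shows one way the block-based training of steps 201 to 203 could look; the block size, the three-way label set (background, face, empty), the network layout, and the random stand-in data are all assumptions.

```python
# Minimal sketch of steps 201-203: each sample picture is cut into fixed-size blocks,
# each block gets a content label (0 = background, 1 = baby face, 2 = empty), and a
# small CNN is trained on the labelled blocks.
import torch
import torch.nn as nn

BLOCK = 32            # preset block size in pixels (assumed)
NUM_CLASSES = 3       # background / baby face / empty

def cut_into_blocks(image: torch.Tensor, block: int = BLOCK) -> torch.Tensor:
    """Split a CxHxW image into non-overlapping block x block patches."""
    c, h, w = image.shape
    image = image[:, : h - h % block, : w - w % block]            # drop ragged edges
    patches = image.unfold(1, block, block).unfold(2, block, block)
    return patches.permute(1, 2, 0, 3, 4).reshape(-1, c, block, block)

class BlockClassifier(nn.Module):
    """Tiny CNN that labels each data block as background, face, or empty."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * (BLOCK // 4) ** 2, NUM_CLASSES)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

if __name__ == "__main__":
    model = BlockClassifier()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    # Stand-in for one labelled sample picture: a random image plus per-block labels.
    image = torch.rand(3, 256, 256)
    blocks = cut_into_blocks(image)                               # (N, 3, 32, 32)
    labels = torch.randint(0, NUM_CLASSES, (blocks.size(0),))

    logits = model(blocks)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
```

Labelling at block granularity is what lets such a model localise the face region rather than only classify whole images.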
As another possible implementation, since a baby's face has obvious distinguishing feature points compared with the face of an adult or a child (for example, a baby's eyes are brighter, the eyebrows are sparser, the facial skin is smoother, and the cheeks look chubbier because the mouth has no teeth), training can be performed on these distinguishing feature points in a large number of baby sample pictures, and the baby face recognition model is generated according to the training results. The model then determines whether a baby face region exists by recognizing whether the captured image contains the feature points corresponding to a baby face.
Step 102: if a baby face region is detected in the captured image, detecting the real-time expression of the baby's face according to a baby expression recognition model trained by deep learning.
In practical applications, the more interesting the real-time expression of the baby's face, the better it satisfies the user's photographing needs. For example, for a parent user, a photo of the baby yawning or smiling obviously has more keepsake value than a photo of the baby staring blankly. Therefore, in order to capture photos of the baby making interesting expressions, after a baby face region is detected in the captured image, the real-time expression of the baby's face is accurately detected according to the baby expression recognition model trained by deep learning.
The training method of the baby expression recognition model trained by deep learning varies with the application scenario, as illustrated below:
First example:
In this example, corresponding to the method shown in Fig. 2 above, the baby expression recognition model is trained with labelled sample pictures, similar to the way content labels were added to the data block regions of the sample pictures.
Specifically, as shown in Fig. 3, before step 102, the method further includes:
Step 301: intercepting the baby face region in each sample picture to obtain a sample face picture, and setting an expression label for each sample face picture.
The expression label annotates the facial expression of the person in the sample face picture, including smiling, yawning, curiosity, and the like. The expression label may be set actively by the relevant technical personnel, or may be identified automatically according to expression features, for example identifying whether the current expression is crying by detecting tear stains.
Step 302: inputting all the sample face pictures and the expression label corresponding to each sample face picture into a convolutional neural network model for training, to generate the baby expression recognition model.
Specifically, all the sample face pictures and the expression labels corresponding to each sample face picture are input into the convolutional neural network model for training, generating the baby expression recognition model. A baby face image can then be input into the baby expression recognition model to identify the current real-time expression of the baby, so that the shooting opportunity can be accurately grasped according to the baby's real-time expression.
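As a hedged sketch of step 302 (not the patent's actual implementation), the snippet below trains an off-the-shelf convolutional network on cropped sample face pictures with expression labels; the label set, input size, and random stand-in data are assumptions.

```python
# Sketch of step 302: sample face pictures cropped from the baby face region, each with
# an expression label, are fed to a CNN to train the baby expression recognition model.
import torch
import torch.nn as nn
from torchvision.models import resnet18

EXPRESSIONS = ["smile", "yawn", "cry", "curious"]        # assumed expression labels

model = resnet18(num_classes=len(EXPRESSIONS))           # CNN used as the expression model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Stand-ins for a batch of sample face pictures and their expression labels.
faces = torch.rand(8, 3, 64, 64)
labels = torch.randint(0, len(EXPRESSIONS), (8,))

for _ in range(3):                                       # a few illustrative steps
    optimizer.zero_grad()
    loss = criterion(model(faces), labels)
    loss.backward()
    optimizer.step()
```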
Second example:
A large number of sample face pictures are obtained in advance for each baby expression, for example a large number of sample face pictures of the crying expression and a large number of sample face pictures of the smiling expression. Deep learning is then performed on the large number of sample face pictures of each expression to analyse the expression features of each expression. The current expression features of the current baby face are extracted and matched against the expression features learned above, and according to the matching degree, the baby expression corresponding to the expression features most consistent with the current expression features is determined to be the real-time expression of the current baby face.
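A minimal sketch of this feature-matching variant is given below, assuming the learned per-expression features are summarised as prototype embeddings and compared by cosine similarity; the embedding size and the prototypes themselves are placeholders.

```python
# Sketch of the feature-matching variant: an embedding of the current face crop is
# compared with a mean embedding learned per expression, and the best match is taken
# as the real-time expression.
from typing import Dict

import torch
import torch.nn.functional as F

def predict_expression(face_embedding: torch.Tensor,
                       expression_prototypes: Dict[str, torch.Tensor]) -> str:
    """Return the expression whose learned prototype best matches the face embedding."""
    best_label, best_score = None, -1.0
    for label, prototype in expression_prototypes.items():
        score = F.cosine_similarity(face_embedding, prototype, dim=0).item()
        if score > best_score:
            best_label, best_score = label, score
    return best_label

if __name__ == "__main__":
    dim = 128  # assumed embedding size
    # Stand-ins for mean embeddings learned from many sample face pictures per expression.
    prototypes = {name: torch.randn(dim) for name in ("smile", "cry", "yawn", "curious")}
    current = torch.randn(dim)            # stand-in for the current face's embedding
    print(predict_expression(current, prototypes))
```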
Step 103: if the comparison shows that the real-time expression belongs to a sample expression category stored in a preset capture database, shooting a baby face picture corresponding to the real-time expression.
It can be understood that the sample expressions stored in the capture database depend on the user's capture needs. For example, if the user's capture need is to capture the baby smiling, the sample expression stored in the capture database is the smiling sample expression. Thus, if the comparison shows that the real-time expression belongs to a sample expression stored in the preset capture database, a baby face picture corresponding to the real-time expression is shot, achieving automatic capture of the baby's interesting expressions; the implementation is simple and the photographing efficiency is high.
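The capture decision of steps 101 to 103 can be pictured as the following sketch; detect_face_region, detect_expression, and shoot_picture are illustrative stubs standing in for the recognition models and the camera call, not real APIs, and the capture database contents are assumed.

```python
# Sketch of the step 101-103 pipeline with placeholder stubs for the models and camera.
from typing import Optional

CAPTURE_DATABASE = {"smile", "yawn", "curious"}        # sample expression categories (assumed)

def detect_face_region(frame) -> Optional[dict]:
    """Stub for the baby face recognition model of step 101."""
    return {"x": 0, "y": 0, "w": 64, "h": 64}          # pretend a face was found

def detect_expression(face_region: dict) -> str:
    """Stub for the baby expression recognition model of step 102."""
    return "smile"

def shoot_picture(frame, tag: str) -> str:
    """Stub for the camera shot; returns the name of the saved picture."""
    return f"baby_{tag}.jpg"

def auto_capture(frame) -> Optional[str]:
    """Steps 101-103: detect the face, detect the expression, shoot only on a match."""
    face_region = detect_face_region(frame)
    if face_region is None:
        return None                                    # no baby face: skip further work
    expression = detect_expression(face_region)
    if expression in CAPTURE_DATABASE:                 # compare with the capture database
        return shoot_picture(frame, tag=expression)
    return None

if __name__ == "__main__":
    print(auto_capture(frame=None))                    # -> baby_smile.jpg
```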
It should be emphasized that the automatic photographing method of the embodiments of the present application can automatically identify the baby expression that the user wishes to shoot and shoot it automatically. This mode can be applied not only to everyday photographing but also to video shooting, and shooting interesting videos can better meet the user's requirements.
For example, as shown in the upper part of Fig. 4, the activity of the baby is monitored continuously. With the automatic photographing method of this embodiment, video segment a starts to be shot after specific expression 1 of the baby is recognized, until specific expression 1 is detected to have disappeared; video segment b starts to be shot after specific expression 2 of the baby is recognized, until specific expression 2 is detected to have disappeared; and video segment c starts to be shot after specific expression 3 of the baby is recognized, until specific expression 3 is detected to have disappeared. Video segments a, b, and c are then synthesized, and the synthesized video records the wonderful moments when the baby makes interesting expressions, which better satisfies the user's entertainment needs.
In this application scenario, in order to record the baby's interesting moments more comprehensively, the baby's voice information can also be detected and recorded synchronously, thereby meeting the user's multiple needs and achieving the capture of the baby's wonderful growth moments, for example capturing the moment when the baby calls "mum" for the first time.
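The segment logic of the Fig. 4 scenario could be sketched as follows, under the assumption that an upstream detector yields one (possibly empty) expression per frame; frame capture, audio, and video encoding are abstracted away.

```python
# Sketch of the Fig. 4 scenario: open a segment when an interesting expression appears,
# close it when the expression disappears, then concatenate the segments into one video.
from dataclasses import dataclass
from typing import Iterable, List, Optional

@dataclass
class Segment:
    expression: str
    start_frame: int
    end_frame: int

def collect_segments(expressions_per_frame: Iterable[Optional[str]],
                     interesting: set) -> List[Segment]:
    """Track when each interesting expression appears and disappears."""
    segments: List[Segment] = []
    current: Optional[Segment] = None
    for index, expression in enumerate(expressions_per_frame):
        if current is None:
            if expression in interesting:
                current = Segment(expression, index, index)      # expression appeared
        elif expression == current.expression:
            current.end_frame = index                            # expression continues
        else:
            segments.append(current)                             # expression disappeared
            current = Segment(expression, index, index) if expression in interesting else None
    if current is not None:
        segments.append(current)
    return segments

if __name__ == "__main__":
    stream = [None, "smile", "smile", None, "yawn", "yawn", "yawn", None, "curious", None]
    for seg in collect_segments(stream, interesting={"smile", "yawn", "curious"}):
        print(seg)   # segments a, b, c to be concatenated into the final video
```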
In conclusion the automatic photographing method of the embodiment of the present application, is known according to the baby's face trained based on deep learning Other model whether there is baby's facial area in detection acquisition image, if detecting in acquisition image there are baby's facial area, Then obtained in turn if comparing according to the real-time expression of baby's Expression Recognition model inspection baby's face based on deep learning training Know that real-time expression belongs to the sample expression classification stored in preset candid photograph database, then shoots baby corresponding with real-time expression Facial picture.Hereby it is achieved that the automatic camera of the interesting face-image of baby, improved while ensureing to take pictures satisfaction It takes pictures efficiency, solves in the prior art, take pictures because baby is difficult to coordinate and user is needed to keep the main attention of height for a long time And capture manually, cause candid photograph difficulty larger and less efficient technical problem of taking pictures.
Based on above example, after the picture that automatic Zhu is discharged to the more interesting expression of baby, in order to further meet The recreational demand of user can also carry out the picture of candid photograph processing etc. again.
In one embodiment of the application, special effect processing is carried out to the picture of candid photograph.
Fig. 5 is according to the flow chart of the automatic photographing method of the 4th embodiment of the application, as shown in figure 5, in above-mentioned step After rapid 103, this method includes:
Step 401, it obtains and the associated Image Processing parameter of real-time expression.
Step 402, image procossing is carried out to baby's face picture according to Image Processing parameter, generates effect picture.
It is appreciated that pre-setting and the different corresponding Image Processing parameters of real-time expression, wherein Image Processing parameter Including the U.S. face processing such as the addition of the special efficacys such as raindrop, spark, meteor, eyes amplification, hog snout addition, word addition, specific trait Cutting etc., for example, being addition raindrop background to the corresponding Image Processing parameter of real-time expression of sobbing.
To acquisition and the associated Image Processing parameter of real-time expression, according to Image Processing parameter to baby's face picture Image procossing is carried out, effect picture is generated and further increases the interest of picture to improve the interest of picture.
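A minimal sketch of steps 401 and 402 follows, assuming a simple table that maps each real-time expression to an image processing parameter; the concrete Pillow effects (a caption and a brightness tweak) merely stand in for the raindrop, spark, or meteor overlays mentioned above.

```python
# Sketch of steps 401-402: look up the parameter for the expression, apply it to the
# captured baby face picture, and save the resulting effect picture.
from PIL import Image, ImageDraw, ImageEnhance

def add_caption(picture: Image.Image, text: str) -> Image.Image:
    out = picture.copy()
    ImageDraw.Draw(out).text((10, 10), text, fill="white")
    return out

def brighten(picture: Image.Image, factor: float) -> Image.Image:
    return ImageEnhance.Brightness(picture).enhance(factor)

# Image processing parameters associated with each real-time expression (assumed values).
EFFECTS = {
    "cry":   lambda img: add_caption(img, "rainy day"),   # stand-in for a raindrop background
    "smile": lambda img: brighten(img, 1.2),
    "yawn":  lambda img: add_caption(img, "sleepy..."),
}

def generate_effect_picture(baby_face_picture: Image.Image, expression: str) -> Image.Image:
    """Step 401: look up the parameter for the expression; step 402: apply it."""
    effect = EFFECTS.get(expression, lambda img: img)      # unknown expression: no effect
    return effect(baby_face_picture)

if __name__ == "__main__":
    picture = Image.new("RGB", (256, 256), "gray")         # stand-in for a captured picture
    generate_effect_picture(picture, "cry").save("effect_picture.jpg")
```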
In one embodiment of the present application, expression pack processing is performed on the captured pictures.
Fig. 6 is a flowchart of an automatic photographing method according to a fifth embodiment of the present application. As shown in Fig. 6, after the above step 103, the method includes:
Step 501: obtaining multiple baby face pictures belonging to the same expression type.
Step 502: obtaining background music corresponding to the expression type.
A correspondence between different expression types and background music can be set in advance, for example setting sad music for the sad expression type and cheerful music for the happy expression type. After the expression type is recognized, the correspondence is queried to obtain the background music corresponding to the expression type.
Step 503: generating a baby expression pack according to the multiple baby face pictures and the background music.
In practical applications, many social applications have an expression pack (sticker) function, and an expression pack generated from the captured interesting pictures of the baby can further meet the user's entertainment needs.
Specifically, multiple baby face pictures belonging to the same expression type are obtained, and a baby expression pack is generated according to the multiple baby face pictures and the background music. Of course, in practical applications, different expressions of the baby can also be collected into one baby expression pack, to meet the user's personalized entertainment needs.
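The following sketch illustrates steps 501 to 503 with moviepy 1.x, assuming the captured pictures and the background music files already exist on disk; the file names and the expression-to-music table are placeholders.

```python
# Sketch of steps 501-503: stitch several captured baby face pictures of one expression
# type into a short clip and attach the background music associated with that type.
from moviepy.editor import AudioFileClip, ImageSequenceClip

# Correspondence between expression types and background music (assumed files).
BACKGROUND_MUSIC = {
    "smile": "cheerful.mp3",
    "cry": "sad.mp3",
}

def generate_expression_pack(face_pictures: list, expression_type: str,
                             out_path: str = "baby_expression_pack.mp4") -> str:
    """Step 501: pictures of one expression type; step 502: its music; step 503: combine."""
    clip = ImageSequenceClip(face_pictures, fps=2)                 # about 0.5 s per picture
    music = AudioFileClip(BACKGROUND_MUSIC[expression_type])
    clip = clip.set_audio(music.subclip(0, min(music.duration, clip.duration)))
    clip.write_videofile(out_path, fps=24)
    return out_path

if __name__ == "__main__":
    # Stand-in file names for captured pictures of the same expression type.
    generate_expression_pack(["baby_smile_1.jpg", "baby_smile_2.jpg", "baby_smile_3.jpg"],
                             expression_type="smile")
```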
In conclusion the automatic photographing method of the embodiment of the present application, can the entertainment requirements based on user's diversification to baby Youngster's face picture carries out different processing, improves the interest captured automatically as a result, further meets the amusement of user and needs It asks.
In order to realize that above-described embodiment, the application also proposed a kind of Automatic camera, Fig. 7 is according to the application first The structural schematic diagram of the Automatic camera of a embodiment, as shown in fig. 7, the Automatic camera includes:First detection module 100, the second detection module 200 and taking module 300.
Wherein, first detection module 100, for according to baby's human face recognition model based on deep learning training, detection It acquires and whether there is baby's facial area in image.
Second detection module 200, in detecting acquisition image there are baby's facial area, then according to being based on depth The real-time expression of baby's Expression Recognition model inspection baby's face of learning training.
Taking module 300, for knowing that real-time expression belongs to the sample table that stores in preset candid photograph database comparing Feelings classification then shoots baby's face picture corresponding with real-time expression.
In one embodiment of the application, as shown in figure 8, the Automatic camera further includes acquisition module 400, setting Module 500 and training generation module 600.
Wherein, acquisition module 400, for acquiring the samples pictures for including baby's facial area.
Setup module 500, for every samples pictures to be cut into multiple data blocks according to pre-set dimension, and for per number According to block set content label.
Training generation module 600, for all data blocks and the corresponding content tab of each data block to be input to Convolutional neural networks model is trained, and generates baby's human face recognition model.
It should be noted that the aforementioned explanation to automatic photographing method, is also applied for the automatic of the embodiment of the present application Camera arrangement, realization principle is similar, and details are not described herein.
In conclusion the Automatic camera of the embodiment of the present application, is known according to the baby's face trained based on deep learning Other model whether there is baby's facial area in detection acquisition image, if detecting in acquisition image there are baby's facial area, Then obtained in turn if comparing according to the real-time expression of baby's Expression Recognition model inspection baby's face based on deep learning training Know that real-time expression belongs to the sample expression classification stored in preset candid photograph database, then shoots baby corresponding with real-time expression Facial picture.Hereby it is achieved that the automatic camera of the interesting face-image of baby, improved while ensureing to take pictures satisfaction It takes pictures efficiency, solves in the prior art, take pictures because baby is difficult to coordinate and user is needed to keep the main attention of height for a long time And capture manually, cause candid photograph difficulty larger and less efficient technical problem of taking pictures.
Fig. 9 is a schematic structural diagram of an automatic photographing device according to a third embodiment of the present application. As shown in Fig. 9, on the basis of the device shown in Fig. 7, the device further includes an obtaining module 700 and a generation module 800.
The obtaining module 700 is configured to obtain an image processing parameter associated with the real-time expression.
The generation module 800 is configured to perform image processing on the baby face picture according to the image processing parameter to generate an effect picture.
It should be noted that the foregoing explanation of the automatic photographing method also applies to the automatic photographing device of the embodiments of the present application; the implementation principle is similar and is not repeated here.
In summary, the automatic photographing device of the embodiments of the present application can process the baby face pictures in different ways based on the user's diverse entertainment needs, which increases the interest of the automatic capture and further meets the user's entertainment needs.
In order to implement the above embodiments, the present application also proposes a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor. When the processor executes the computer program, the automatic photographing method described in the foregoing embodiments is implemented.
In order to implement the above embodiments, the present application also proposes a non-transitory computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, the automatic photographing method described in the foregoing embodiments can be implemented.
In the description of this specification, the description with reference to the terms "one embodiment", "some embodiments", "example", "specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present application. In this specification, the schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the described specific features, structures, materials, or characteristics may be combined in any suitable manner in any one or more embodiments or examples. In addition, provided there is no mutual contradiction, those skilled in the art may combine the features of the different embodiments or examples described in this specification.
In addition, the terms "first" and "second" are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, for example two or three, unless otherwise specifically defined.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing a custom logic function or step of the process, and the scope of the preferred embodiments of the present application includes additional implementations in which the functions may be executed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order according to the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.
The logic and/or steps represented in a flowchart or otherwise described herein, for example a sequenced list of executable instructions that can be considered to implement logic functions, may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, device, or apparatus (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from the instruction execution system, device, or apparatus). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transmit the program for use by, or in connection with, the instruction execution system, device, or apparatus. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, because the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that each part of the present application may be implemented by hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented by hardware, as in another embodiment, any one or a combination of the following technologies known in the art may be used: a discrete logic circuit having logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and the like.
Those of ordinary skill in the art can understand that all or part of the steps carried by the method of the above embodiments can be completed by instructing relevant hardware through a program; the program may be stored in a computer-readable storage medium, and when executed, the program includes one or a combination of the steps of the method embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the present application have been shown and described above, it should be understood that the above embodiments are exemplary and should not be construed as limiting the present application; those of ordinary skill in the art can make changes, modifications, replacements, and variations to the above embodiments within the scope of the present application.

Claims (10)

1. An automatic photographing method, characterized by comprising the following steps:
detecting, according to a baby face recognition model trained by deep learning, whether a baby face region exists in a captured image;
if a baby face region is detected in the captured image, detecting a real-time expression of the baby's face according to a baby expression recognition model trained by deep learning; and
if the comparison shows that the real-time expression belongs to a sample expression category stored in a preset capture database, shooting a baby face picture corresponding to the real-time expression.
2. The method according to claim 1, characterized in that before detecting, according to the baby face recognition model trained by deep learning, whether a baby face region exists in the captured image, the method further comprises:
collecting sample pictures containing baby face regions;
cutting each sample picture into multiple data blocks according to a preset size, and setting a content label for each data block; and
inputting all the data blocks and the content label corresponding to each data block into a convolutional neural network model for training, to generate the baby face recognition model.
3. The method according to claim 2, characterized in that before detecting the real-time expression of the baby's face according to the baby expression recognition model trained by deep learning, the method further comprises:
intercepting the baby face region in each sample picture to obtain a sample face picture, and setting an expression label for each sample face picture; and
inputting all the sample face pictures and the expression label corresponding to each sample face picture into a convolutional neural network model for training, to generate the baby expression recognition model.
4. The method according to claim 1, characterized in that after the shooting is started and the baby face picture corresponding to the real-time expression is shot, the method further comprises:
obtaining an image processing parameter associated with the real-time expression; and
performing image processing on the baby face picture according to the image processing parameter to generate an effect picture.
5. The method according to any one of claims 1 to 4, characterized by further comprising:
obtaining multiple baby face pictures belonging to the same expression type;
obtaining background music corresponding to the expression type; and
generating a baby expression pack according to the multiple baby face pictures and the background music.
6. An automatic photographing device, characterized by comprising:
a first detection module, configured to detect, according to a baby face recognition model trained by deep learning, whether a baby face region exists in a captured image;
a second detection module, configured to, if a baby face region is detected in the captured image, detect a real-time expression of the baby's face according to a baby expression recognition model trained by deep learning; and
a shooting module, configured to, if the comparison shows that the real-time expression belongs to a sample expression stored in a preset capture database, shoot a baby face picture corresponding to the real-time expression.
7. The device according to claim 6, characterized by further comprising:
a collection module, configured to collect sample pictures containing baby face regions;
a setting module, configured to cut each sample picture into multiple data blocks according to a preset size and set a content label for each data block; and
a training generation module, configured to input all the data blocks and the content label corresponding to each data block into a convolutional neural network model for training, to generate the baby face recognition model.
8. The device according to claim 6, characterized by further comprising:
an obtaining module, configured to obtain an image processing parameter associated with the real-time expression; and
a generation module, configured to perform image processing on the baby face picture according to the image processing parameter to generate an effect picture.
9. A computer device, characterized by comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein when the processor executes the computer program, the automatic photographing method according to any one of claims 1 to 5 is implemented.
10. A computer-readable storage medium on which a computer program is stored, characterized in that when the computer program is executed by a processor, the automatic photographing method according to any one of claims 1 to 5 is implemented.
CN201810420214.2A 2018-05-04 2018-05-04 Automatic photographing method and device Pending CN108737729A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810420214.2A CN108737729A (en) 2018-05-04 2018-05-04 Automatic photographing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810420214.2A CN108737729A (en) 2018-05-04 2018-05-04 Automatic photographing method and device

Publications (1)

Publication Number Publication Date
CN108737729A true CN108737729A (en) 2018-11-02

Family

ID=63937989

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810420214.2A Pending CN108737729A (en) 2018-05-04 2018-05-04 Automatic photographing method and device

Country Status (1)

Country Link
CN (1) CN108737729A (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150098001A1 (en) * 2013-10-08 2015-04-09 Panasonic Corporation Imaging device
CN105373777A (en) * 2015-10-30 2016-03-02 中国科学院自动化研究所 Face recognition method and device
CN105872352A (en) * 2015-12-08 2016-08-17 乐视移动智能信息技术(北京)有限公司 Method and device for shooting picture
CN105354565A (en) * 2015-12-23 2016-02-24 北京市商汤科技开发有限公司 Full convolution network based facial feature positioning and distinguishing method and system
CN106210526A (en) * 2016-07-29 2016-12-07 维沃移动通信有限公司 A kind of image pickup method and mobile terminal
CN107786803A (en) * 2016-08-29 2018-03-09 中兴通讯股份有限公司 A kind of image generating method, device and terminal device
CN106372627A (en) * 2016-11-07 2017-02-01 捷开通讯(深圳)有限公司 Automatic photographing method and device based on face image recognition and electronic device
CN106778525A (en) * 2016-11-25 2017-05-31 北京旷视科技有限公司 Identity identifying method and device
CN107147842A (en) * 2017-04-26 2017-09-08 广东艾檬电子科技有限公司 A kind of method and device of children's photograph
CN107249100A (en) * 2017-06-30 2017-10-13 北京金山安全软件有限公司 Photographing method and device, electronic equipment and storage medium
CN107347138A (en) * 2017-06-30 2017-11-14 广东欧珀移动通信有限公司 Image processing method, device, storage medium and terminal
CN107948512A (en) * 2017-11-29 2018-04-20 奇酷互联网络科技(深圳)有限公司 Image pickup method, device, readable storage medium storing program for executing and intelligent terminal
CN107958230A (en) * 2017-12-22 2018-04-24 中国科学院深圳先进技术研究院 Facial expression recognizing method and device

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109922266A (en) * 2019-03-29 2019-06-21 睿魔智能科技(深圳)有限公司 Grasp shoot method and system, video camera and storage medium applied to video capture
CN109922266B (en) * 2019-03-29 2021-04-06 睿魔智能科技(深圳)有限公司 Snapshot method and system applied to video shooting, camera and storage medium
CN111046814A (en) * 2019-12-18 2020-04-21 维沃移动通信有限公司 Image processing method and electronic device
CN113315904A (en) * 2020-02-26 2021-08-27 北京小米移动软件有限公司 Imaging method, imaging device, and storage medium
CN113315904B (en) * 2020-02-26 2023-09-26 北京小米移动软件有限公司 Shooting method, shooting device and storage medium
CN112511746A (en) * 2020-11-27 2021-03-16 恒大新能源汽车投资控股集团有限公司 In-vehicle photographing processing method and device and computer readable storage medium
CN112581417A (en) * 2020-12-14 2021-03-30 深圳市众采堂艺术空间设计有限公司 Facial expression obtaining, modifying and imaging system and method
CN112637487A (en) * 2020-12-17 2021-04-09 四川长虹电器股份有限公司 Television intelligent photographing method based on time stack expression recognition
CN113505748A (en) * 2021-07-28 2021-10-15 宁波星巡智能科技有限公司 Method, device, equipment and medium for capturing wonderful images of infants
EP4290473A1 (en) * 2022-06-06 2023-12-13 Compal Electronics, Inc. Dynamic image processing method, electronic device, and terminal device and mobile communication device connected thereto
CN116229674A (en) * 2023-02-07 2023-06-06 广州后为科技有限公司 Infant monitoring method, device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN108737729A (en) Automatic photographing method and device
Zhang et al. A high-resolution spontaneous 3d dynamic facial expression database
Bettadapura Face expression recognition and analysis: the state of the art
Hu et al. Jointly learning heterogeneous features for RGB-D activity recognition
Fathi et al. Understanding egocentric activities
TW201203134A (en) Facial expression capturing method and apparatus therewith
EP2977949A1 (en) Method and device for playing advertisements based on relationship information between viewers
CN109887095A (en) A kind of emotional distress virtual reality scenario automatic creation system and method
KR20150064977A (en) Video analysis and visualization system based on face information
Seuss et al. Emotion expression from different angles: A video database for facial expressions of actors shot by a camera array
Khodabakhsh et al. A taxonomy of audiovisual fake multimedia content creation technology
Zhang et al. Intelligent Facial Action and emotion recognition for humanoid robots
Gorbova et al. Going deeper in hidden sadness recognition using spontaneous micro expressions database
US11163822B2 (en) Emotional experience metadata on recorded images
CN109086351A (en) A kind of method and user tag system obtaining user tag
US9928877B2 (en) Method and system for automatic generation of an animated message from one or more images
Barros et al. A self-organizing model for affective memory
Mohseni et al. Facial expression recognition using DCT features and neural network based decision tree
KR102247481B1 (en) Device and method for generating job image having face to which age transformation is applied
CN106537417B (en) Summarizing photo album
Vo et al. SAM: the school attachment monitor
Weber et al. A survey on databases of facial macro-expression and micro-expression
Bhutada et al. Emotion based music recommendation system
Melgare et al. Investigating emotion style in human faces and avatars
McQuire Digital photography and the operational archive

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication

Application publication date: 20181102