CN108629339A - Image processing method and related product - Google Patents

Image processing method and related product

Info

Publication number
CN108629339A
CN108629339A (application CN201810633163.1A)
Authority
CN
China
Prior art keywords
image
target
information collection
feature information
target head
Prior art date
Legal status
Granted
Application number
CN201810633163.1A
Other languages
Chinese (zh)
Other versions
CN108629339B (en)
Inventor
胡孔勇
姚娟
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810633163.1A priority Critical patent/CN108629339B/en
Publication of CN108629339A publication Critical patent/CN108629339A/en
Application granted granted Critical
Publication of CN108629339B publication Critical patent/CN108629339B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, loops, corners, strokes or intersections; connectivity analysis
    • G06V10/757: Matching configurations of points or features
    • G06V40/162: Face detection, localisation or normalisation using pixel segmentation or colour matching
    • G06V40/165: Face detection, localisation or normalisation using facial parts and geometric relationships
    • G06V40/171: Local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships
    • G06V40/172: Face classification, e.g. identification
    • G06V40/174: Facial expression recognition
    • G06T2200/08: Indexing scheme involving all processing steps from image acquisition to 3D model generation
    • G06T2207/10004: Still image; photographic image
    • G06T2207/30196: Human being; person
    • G06T2207/30201: Face
    • G06T2219/2024: Style variation (editing of 3D models)
    • G06V40/179: Metadata-assisted face recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the present application disclose an image processing method and a related product. The method includes: extracting a target head image from an image to be processed; obtaining a plurality of pieces of head feature information corresponding to the target head image; creating a target head model according to the plurality of pieces of head feature information; and creating a first three-dimensional character image according to a pre-stored body model and the target head model. With the present application, a three-dimensional character image corresponding to the target head image can be created, improving entertainment value and user experience.

Description

Image processing method and related product
Technical field
This application relates to the technical field of electronic devices, and in particular to an image processing method and a related product.
Background technology
With the development of electronic device technology, more and more users use electronic devices (such as mobile phones and tablet computers) to shoot images. Face swapping has become a new hot spot in people's social entertainment, and a variety of applications with face-changing functions have been developed, bringing enjoyment to people's entertainment life.
Summary of the invention
The embodiments of the present application provide an image processing method and a related product, which can create a three-dimensional character image corresponding to a target head image, improving entertainment value and user experience.
In a first aspect, an embodiment of the present application provides an image processing method, including:
extracting a target head image from an image to be processed;
obtaining a plurality of pieces of head feature information corresponding to the target head image;
creating a target head model according to the plurality of pieces of head feature information; and
creating a first three-dimensional character image according to a pre-stored body model and the target head model.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
an extraction unit, configured to extract a target head image from an image to be processed;
an acquisition unit, configured to obtain a plurality of pieces of head feature information corresponding to the target head image; and
a creation unit, configured to create a target head model according to the plurality of pieces of head feature information, and to create a first three-dimensional character image according to a pre-stored body model and the target head model.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for performing some or all of the steps described in the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program, where the computer program causes a computer to perform some or all of the steps described in the first aspect of the embodiments of the present application.
In a fifth aspect, an embodiment of the present application provides a computer program product, including a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a computer to perform some or all of the steps described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
Implementing the embodiments of the present application provides the following beneficial effects:
With the above image processing method and related product, the electronic device extracts the target head image from the image to be processed, obtains the plurality of pieces of head feature information corresponding to the target head image, creates the target head model according to the plurality of pieces of head feature information, and then creates the first three-dimensional character image according to the pre-stored body model and the target head model. In this way, a three-dimensional character image corresponding to the target head image can be created, improving entertainment value and user experience.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the following briefly introduces the accompanying drawings required in the description of the embodiments or the prior art. Obviously, the accompanying drawings described below are merely some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
In the drawings:
Fig. 1A is a schematic flowchart of an image processing method according to an embodiment of the present application;
Fig. 1B is a schematic scene diagram of extracting a target head image from an image to be processed according to an embodiment of the present application;
Fig. 1C is a schematic scene diagram of extracting feature points from a target head image according to an embodiment of the present application;
Fig. 1D is a schematic scene diagram of determining a plurality of target lines in an image to be processed according to an embodiment of the present application;
Fig. 1E is a schematic scene diagram of editing a first three-dimensional character image according to an embodiment of the present application;
Fig. 2 is a schematic flowchart of another image processing method according to an embodiment of the present application;
Fig. 3 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed description of the embodiments
To enable a person skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are merely some rather than all of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
The terms "first" and "second" in the specification, claims, and accompanying drawings of the present application are used to distinguish different objects rather than to describe a particular order. In addition, the terms "comprising" and "having" and any variations thereof are intended to cover a non-exclusive inclusion. For example, a process, method, system, product, or device that includes a series of steps or units is not limited to the listed steps or units, but optionally further includes unlisted steps or units, or optionally further includes other steps or units inherent to the process, method, product, or device.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of this phrase in various places in the specification do not necessarily all refer to the same embodiment, nor are they separate or alternative embodiments mutually exclusive of other embodiments. A person skilled in the art understands, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
The electronic device involved in the embodiments of the present application may include various handheld devices with wireless communication functions, in-vehicle devices, wearable devices, computing devices, other processing devices connected to a wireless modem, and various forms of user equipment (UE), mobile stations (MS), terminal devices, and the like. For ease of description, the devices mentioned above are collectively referred to as electronic devices. The embodiments of the present application are described in detail below.
The embodiments of the present application provide an image processing method and a related product, which can create a three-dimensional character image corresponding to a target head image, improving entertainment value and user experience.
Referring to Fig. 1A, an embodiment of the present application provides a schematic flowchart of an image processing method. Specifically, as shown in Fig. 1A, the image processing method includes:
S101: Extract a target head image from an image to be processed.
In this embodiment of the present application, the image to be processed may be an image captured during real-time shooting, an image stored in the album of the electronic device, an image downloaded from a network, or the like, which is not limited herein. Its data type may be YUV data or RGBA data, which is likewise not limited herein. The image to be processed may be shot with a camera preconfigured in the electronic device. If the camera is a three-dimensional (3D) camera, structured light can be projected onto the face of the target user, and the acquired image to be processed is then a 3D image, which can improve the accuracy of creating the target head model.
The method for extracting the target head image is likewise not limited. It may be the image in a designated region: as shown in Fig. 1B, the left side is the acquisition scene of the image to be processed, where the circle is the designated region, that is, the image within the circle in the image to be processed is extracted to obtain the target head image shown on the right side. Alternatively, because the skin color, the hair color, and the background color differ to some extent, the target head image may also be determined according to the color differences among the skin color, the hair color, and the background.
Optionally, a deep learning framework for face recognition is used: feature values of the image to be processed are extracted, and integral processing is performed on the image to be processed according to the feature values to obtain an integral image; an adaptive boosting (AdaBoost) algorithm is used to obtain strong classifiers that distinguish faces from non-faces in the integral image; the strong face classifiers are cascaded with a waterfall-type cascade classifier to obtain the target face image; and the target head image is then determined according to the color difference between the hair color and the background.
The feature extraction algorithm may perform face recognition using a histogram of oriented gradients (HOG), local binary patterns (LBP), the Gabor wavelet transform, Haar-like features, or other feature extraction algorithms, which are likewise not limited herein.
With this method, different features at multiple scales are computed in the same amount of time, so a large number of regions to be detected can be eliminated rapidly and the average detection cost is reduced, thereby improving the efficiency and accuracy of face recognition.
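As a rough sketch of the integral-image step described above (plain Python, assuming a single-channel grayscale image given as a list of rows; the patent does not specify an implementation), each entry of the summed-area table holds the sum of all pixels above and to the left of it, so the sum over any rectangle, and hence any Haar-like feature, can be evaluated with four table lookups regardless of the rectangle's size:

```python
def integral_image(img):
    """Compute the summed-area table of a 2D grayscale image (list of rows)."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]  # one-pixel zero border simplifies lookups
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of the w*h rectangle with top-left corner (x, y), in four lookups."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
print(rect_sum(ii, 0, 0, 3, 3))  # 45: whole image
print(rect_sum(ii, 1, 1, 2, 2))  # 28: bottom-right 2x2 block (5+6+8+9)
```

This constant-time rectangle sum is what lets a cascade discard most non-face windows after evaluating only a few cheap features.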
S102: Obtain a plurality of pieces of head feature information corresponding to the target head image.
In this embodiment of the present application, the plurality of pieces of head feature information include feature information such as the facial features, the skin color, the hairstyle, and accessories (for example, glasses and earrings). The method for obtaining the head feature information is likewise not limited; the target head image may be recognized by a corresponding deep learning recognition model. The feature information of the skin color and the hair color may be determined by color matching. The feature information of the facial features may be determined by performing facial feature point localization on the image to be processed. As shown in Fig. 1C, the points in the figure are the feature point positions of the face image, where each feature point corresponds to a feature value. The positions of the facial organs (eyes, eyebrows, nose, mouth, and facial outer contour) are determined according to the positions of the feature points, and the feature information of the facial features, such as organ contour size, eye distance, facial proportions, and face shape, is then determined according to the position corresponding to each facial organ.
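For illustration, simple measurements of the kind listed above (eye distance, facial proportions) can be derived directly from feature-point coordinates. The landmark names and positions below are hypothetical, not from the patent; real landmark schemes such as the common 68-point layout differ:

```python
import math

def dist(a, b):
    """Euclidean distance between two 2D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Hypothetical landmarks as (x, y) pixel coordinates.
landmarks = {
    "left_eye":  (30.0, 40.0),
    "right_eye": (70.0, 40.0),
    "chin":      (50.0, 100.0),
    "forehead":  (50.0, 10.0),
    "left_jaw":  (15.0, 70.0),
    "right_jaw": (85.0, 70.0),
}

eye_distance = dist(landmarks["left_eye"], landmarks["right_eye"])
face_height  = dist(landmarks["forehead"], landmarks["chin"])
face_width   = dist(landmarks["left_jaw"], landmarks["right_jaw"])
aspect_ratio = face_width / face_height  # crude face-shape cue: larger values suggest a wider face

print(eye_distance, face_height, face_width, round(aspect_ratio, 3))  # 40.0 90.0 70.0 0.778
```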
S103: Create a target head model according to the plurality of pieces of head feature information.
In this embodiment of the present application, the target head model is a three-dimensional head model, and the method for creating it is not limited. Optionally, creating the target head model according to the plurality of pieces of head feature information includes: classifying the plurality of pieces of head feature information to obtain a facial feature information set and a non-facial feature information set; determining a characterization information set corresponding to the target head image according to the facial feature information set; determining a dress-up information set corresponding to the target head image according to the non-facial feature information set; and creating the target head model according to the characterization information set and the dress-up information set.
In an optional embodiment, the facial feature information set includes facial-organ features and facial contour features; the non-facial feature information set includes the hairstyle and the skin color, and may further include accessories such as earrings and glasses.
The characterization information set includes characterization information of multiple dimensions, such as expression features, gender features, facial-organ features, and facial contour features. The method for determining the characterization information is not limited. Let the target dimension be any dimension of the characterization information; taking the target dimension as an example, determining the characterization information set corresponding to the target head image according to the facial feature information set includes: extracting, from the plurality of pieces of head feature information, a plurality of pieces of target feature information corresponding to the target dimension; matching the plurality of pieces of target feature information against the feature parameters of the target dimension to obtain a plurality of matching values; and determining the characterization information corresponding to the target dimension according to the plurality of matching values.
The specific form of the matching values is not limited and may be a percentage or a decimal. The feature parameters of the target dimension cover a variety of descriptive features. For example, the feature parameters corresponding to the characterization information of the gender dimension are male and female; the feature parameters of the expression dimension are excited, happy, surprised, sad, afraid, shy, contemptuous, angry, and so on; and the feature parameters of the facial-organ dimension are double eyelids, single eyelids, a high nose bridge, a low nose bridge, a plump face, a thin face, a mole at a specific position (a beauty mole and the like), and so on.
How the characterization information is determined according to the plurality of matching values is likewise not limited. The feature parameter with the maximum of the plurality of matching values may be taken as the characterization information corresponding to the target dimension. Alternatively, the plurality of matching values may be weighted, where the preset weight of each dimension may be determined according to the matching values of multiple dimensions. For example, if the probability that the expression feature is happy is 60% and the probability that the action feature is a smile is 80%, the weight of the happy expression feature can be increased, weighting its characterization probability value up to 80%; or, if the probability that the expression is grim is 60% and the probability of having a beard is 80%, the weight of the grim expression feature can be increased, weighting its characterization probability value up to 80%. Considering a variety of different facial features and making the best-fitting decision improves the accuracy of judging the facial characterization information.
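A minimal sketch of the maximum-matching-value rule described above; the dimension names and scores are invented for illustration and are not from the patent:

```python
def characterize(matching_values):
    """Pick the feature parameter with the largest matching value for one dimension."""
    return max(matching_values, key=matching_values.get)

# Hypothetical matching values for two dimensions of one face.
expression_scores = {"happy": 0.80, "sad": 0.05, "surprised": 0.10}
gender_scores = {"male": 0.98, "female": 0.02}

print(characterize(expression_scores))  # happy
print(characterize(gender_scores))      # male
```

The weighted variant described in the text would multiply each score by a per-dimension preset weight before taking the maximum.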
For example, for the target head image shown in Fig. 1C, the plurality of pieces of head feature information are matched against the feature parameters of the expression dimension, the gender dimension, and the facial contour dimension, yielding an 80% happy expression feature, a 5% sad expression feature, a 98% male gender feature, a 2% female gender feature, and a 70% square facial contour feature. The characterization information of the expression feature is then happy, the characterization information of the gender feature is male, and the characterization information of the facial contour feature is a square face.
It can be understood that first obtaining the feature information corresponding to the target dimension, that is, the plurality of pieces of target feature information, narrows the matching range and thereby improves the accuracy of the determined characterization information. The plurality of pieces of target feature information are then matched against the feature parameters corresponding to the target dimension to obtain the plurality of matching values, and the characterization information corresponding to the target dimension is determined according to the plurality of matching values, which further improves the accuracy of the facial feature description.
Optionally, before the characterization information set corresponding to the target head image is determined according to the facial feature information set, the method further includes: obtaining the integrity of the target head image according to the facial feature information set; and, if the integrity is greater than a preset threshold, performing the step of determining the characterization information set corresponding to the target head image according to the facial feature information set.
The method for obtaining the integrity is not limited. For each of a plurality of facial-organ images, the ratio between the number of feature points included in the facial-organ image and the total number of feature points corresponding to that facial organ may be obtained, yielding a plurality of ratios, and the integrity of the target head image is determined according to the plurality of ratios.
In this embodiment, a preset weight corresponding to each facial organ may be prestored, that is, the integrity is obtained by weighting the ratio of each facial organ by its preset weight, which can improve the accuracy of face recognition.
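The weighted integrity computation described here can be sketched as follows; the organ names, expected point counts, and weights are assumptions for illustration, not values from the patent:

```python
def head_integrity(detected, totals, weights):
    """Weighted mean of per-organ detection ratios; weights are normalized by their sum."""
    total_w = sum(weights.values())
    return sum(weights[o] * detected[o] / totals[o] for o in totals) / total_w

detected = {"eyes": 12, "nose": 9, "mouth": 20, "contour": 8}   # feature points found
totals   = {"eyes": 12, "nose": 9, "mouth": 20, "contour": 17}  # expected per organ
weights  = {"eyes": 3.0, "nose": 2.0, "mouth": 2.0, "contour": 1.0}

integrity = head_integrity(detected, totals, weights)
print(round(integrity, 3))  # 0.934: eyes/nose/mouth complete, contour partially occluded
```

A threshold on this value (the "preset threshold" in the text) then decides whether to proceed with model creation or to re-acquire the image.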
It can be understood that the integrity of the target head image is first obtained according to the facial feature information set, and the target head model is created from the plurality of pieces of head feature information of the target head image only when the integrity is greater than the preset threshold, which can improve the degree of similarity between the target head model and the target head image.
In a possible example, the method further includes: if the integrity is less than or equal to the preset threshold, acquiring at least one frame of image; and obtaining the plurality of pieces of head feature information corresponding to the target head image in the at least one frame of image.
That is, when the integrity is less than or equal to the preset threshold, at least one frame of image is re-acquired, and the plurality of pieces of head feature information corresponding to the target head image in the at least one frame of image are then obtained. In other words, the plurality of pieces of head feature information are determined from multiple frames of images, which improves the accuracy of the head feature information; creating the target head model from the plurality of pieces of head feature information then helps improve the degree of similarity between the target head model and the target head image.
Further, the method further includes: determining a deviation angle of the target head image according to the facial feature information set; generating prompt information according to the deviation angle; and presenting the prompt information.
In this application, the method for determining the deviation angle is not limited. A plurality of target lines corresponding to the target head image may first be determined according to the facial feature information set, and the deviation angle is then determined according to the plurality of target lines.
The plurality of target lines are not limited; they are used to determine the deviation angle between the face in the target head image and the camera.
For example, in the target head image shown in Fig. 1D, the plurality of target lines include a line L1 between the nose bridge and the center of the face, a line L2 between the nose bridge and the left side of the face, a line L3 between the nose bridge and the right side of the face, lines L4 and L5 between the two sides of the eyes and the two sides of the mouth, and a line L6 between the two eyes.
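One simple way such lines can indicate head rotation (an assumed heuristic; the patent gives no formula) is the asymmetry between the nose-bridge-to-left-side line L2 and the nose-bridge-to-right-side line L3: for a frontal face their lengths are roughly equal, and the normalized difference grows as the head turns:

```python
def yaw_hint(left_len, right_len):
    """Signed asymmetry in [-1, 1]: 0.0 for a frontal face, positive when the
    right half of the face is foreshortened (head turned toward the right)."""
    return (left_len - right_len) / (left_len + right_len)

print(yaw_hint(50.0, 50.0))  # 0.0: frontal, no prompt needed
print(yaw_hint(60.0, 40.0))  # 0.2: noticeably turned, prompt the user to face the camera
```

Mapping such an asymmetry score to an actual angle would require calibration; here it only motivates the prompt-information step.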
It can be understood that, when the integrity is less than or equal to the preset threshold, the deviation angle of the target head image is first determined according to the facial feature information set, and prompt information corresponding to the deviation angle is then presented, so as to improve the accuracy of the user's second acquisition.
In an optional embodiment, how the target head model is created according to the characterization information set and the dress-up information is not limited. The plurality of target lines corresponding to the target head image may be determined according to the facial feature information set as described above; a three-dimensional image is then generated according to the plurality of target lines and the characterization information set; and the texture generated from the dress-up information is then drawn onto the three-dimensional image to obtain the target head model.
That is, the three-dimensional image corresponding to the target head image is determined according to the plurality of target lines, and the texture corresponding to the dress-up information is then added, which can improve the similarity between the target head model and the target face image.
In an optional embodiment, the electronic device classifies the plurality of pieces of head feature information to obtain the facial feature information set and the non-facial feature information set, then determines the characterization information set corresponding to the target head image according to the facial feature information set, determines the dress-up information set corresponding to the target head image according to the non-facial feature information set, and creates the target head model according to the characterization information set and the dress-up information set. In this way, a target head model corresponding to the target head image is created based on the feature information of the target head image, improving the accuracy of creating the target head model and the user experience.
In a possible example, after the creating the target cranial model according to the multiple head feature information, the method further includes: displaying the target cranial model; if no retake instruction for the target cranial model is detected within a preset duration, executing step S104; otherwise, shooting the pending image.
The retake instruction instructs the electronic equipment to enter a shooting mode and to take the image obtained in the shooting mode as the pending image; the preset duration is not limited.

That is, after the target cranial model is created, it is previewed in real time. If the user does not send a retake instruction, a body model is added to the target cranial model to complete the creation of the first three-dimensional character image; otherwise, the pending image is re-shot and step S101 is re-executed. This improves the user experience and saves power consumption of the electronic equipment.
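The preview-then-retake decision above can be sketched with a timed wait on an instruction queue. The event-queue interface is an illustrative assumption; the actual equipment would listen for touch events.

```python
import queue

def preview_head_model(events, preset_duration=3.0):
    """Show the head model, then wait up to preset_duration for a retake.

    Returns 'create_body' (proceed to step S104) if no retake instruction
    arrives in time, or 'reshoot' if the user asks to retake.
    """
    try:
        instruction = events.get(timeout=preset_duration)
    except queue.Empty:
        return "create_body"  # no retake requested -> attach body model
    if instruction == "retake":
        return "reshoot"      # re-capture the pending image (back to S101)
    return "create_body"

q = queue.Queue()
q.put("retake")
print(preview_head_model(q, preset_duration=0.1))  # reshoot
print(preview_head_model(q, preset_duration=0.1))  # create_body (timed out)
```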
S104: Create a first three-dimensional character image according to a pre-stored body model and the target cranial model.
In the embodiment of the present application, the body model includes the trunk and limbs of a human body. Physical feature parameters of the user, such as height, weight, measurements, arm length and leg length, may be entered in advance, and the body model is then generated from these physical feature parameters and stored; that is, after the target cranial model is created, the body model is added to it. If no physical feature parameters of the user have been entered, the system-default body model is used, and a complete three-dimensional character image is then obtained from the body model and the target cranial model.
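The fallback logic above — use pre-entered physical feature parameters when available, otherwise the system default — can be sketched as follows. The parameter names follow the text; the default values are assumptions.

```python
# Assumed system-default body parameters (cm / kg).
DEFAULT_BODY = {"height": 170, "weight": 60, "arm_length": 70, "leg_length": 85}

def build_body_model(user_params=None):
    """Build a body model from pre-entered parameters, or fall back to default."""
    body = dict(DEFAULT_BODY)
    if user_params:
        body.update(user_params)  # pre-entered values override the defaults
    return body

print(build_body_model()["height"])                        # 170 (default)
print(build_body_model({"height": 182, "weight": 75})["height"])  # 182
```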
The first three-dimensional character image may be a three-dimensional image that restores the person corresponding to the target cranial image, or it may be a cartoon character image to make the model more interesting; this is not limited herein.
The way the body model is stored is not limited. Since head size and stature ratio have a certain relationship, a body model corresponding to the target cranial model may be determined, which improves the harmony of the character; the body model may also be set according to user preferences, for example a cartoon character with a proportionally larger head for added cuteness, or a slender, well-balanced figure, which improves the flexibility of selection and the user experience.
Optionally, the method further includes: searching an album for multiple reference images matching the facial feature information collection; choosing at least one reference whole-body image from the multiple reference images; determining target proportion information between head and body according to the at least one reference whole-body image; and determining a target body model corresponding to the target proportion information according to pre-stored mapping relations between head-to-body proportion information and body models. The creating the first three-dimensional character image according to the pre-stored body model and the target cranial model then includes: creating the first three-dimensional character image according to the target body model and the target cranial model.
Among the multiple reference images, the facial feature information of each reference image matches the facial feature information collection. The search may use a person-classified image set in the album: the character image set that matches the facial feature information collection is taken as the multiple reference images. Alternatively, the facial feature information of each image in the album may be extracted, and any image whose facial feature information matches the facial feature information collection is taken as a reference image.
A reference whole-body image is a whole-body image of the user corresponding to the target cranial image, that is, an image including both a head image and a body image. The user's stature ratio, i.e. the proportion information between the head model and the body model, can therefore be determined from the reference whole-body image, which brings the result closer to the user's original image and enhances flexibility.
If there are multiple reference whole-body images, the stature ratio corresponding to each reference whole-body image may be obtained, and the average value or the most frequent ratio value is then taken as the target proportion information, which improves the accuracy of the target proportion information.
It can be appreciated that multiple reference images matching the facial feature information collection are first obtained from the album, at least one reference whole-body image is then chosen from the multiple reference images, the target proportion information between head and body is determined according to the at least one reference whole-body image, the target body model corresponding to the target proportion information is then determined from the pre-stored mapping relations between head-to-body proportion information and body models, and the first three-dimensional character image is finally created according to the target body model and the target cranial model. In this way, the result is closer to the user's image, and the harmony of the character is improved.
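The proportion-to-body-model lookup above can be sketched as: compute a head-to-body ratio per reference whole-body image, aggregate by average (or most frequent value, as the text allows), then look the result up in a pre-stored ratio-range mapping. The ratio thresholds and model names are assumptions for illustration.

```python
def target_proportion(ratios, use_mode=False):
    """Aggregate per-image head-to-body ratios into the target proportion."""
    if use_mode:
        return max(set(ratios), key=ratios.count)  # most frequent ratio value
    return sum(ratios) / len(ratios)               # average ratio value

# Assumed pre-stored mapping: (lower, upper) ratio range -> body model id.
RATIO_TO_BODY = [
    ((0.00, 0.14), "tall_slim"),
    ((0.14, 0.18), "average"),
    ((0.18, 1.00), "cartoon_big_head"),
]

def pick_body_model(ratio):
    """Map a head-to-body ratio to the corresponding target body model."""
    for (lo, hi), model in RATIO_TO_BODY:
        if lo <= ratio < hi:
            return model
    return "average"  # system-default body model as a fallback

ratios = [0.15, 0.16, 0.17]  # one ratio per reference whole-body image
r = target_proportion(ratios)
print(round(r, 2), pick_body_model(r))  # 0.16 average
```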
In the image processing method shown in Figure 1A, the electronic equipment extracts the target cranial image from the pending image, obtains the multiple head feature information corresponding to the target cranial image, creates the target cranial model according to the multiple head feature information, and then creates the first three-dimensional character image according to the pre-stored body model and the target cranial model. In this way, a three-dimensional character image corresponding to the target cranial image can be created, which improves interest and the user experience.
Optionally, after the creating the first three-dimensional character image according to the pre-stored body model and the target cranial model, the method further includes: displaying the first three-dimensional character image; if a replacement dress-up instruction for the first three-dimensional character image is received, jumping to a recommended dress-up page; if a selection instruction corresponding to a selection component is received, changing the outfit of the first three-dimensional character image according to a target dress-up to obtain a second three-dimensional character image; and if a dress-up completion instruction for the second three-dimensional character image is received, storing the second three-dimensional character image.
In an alternative embodiment, the replacement dress-up instruction instructs the electronic equipment to enter a dress-up edit mode. The recommended dress-up page includes selection components corresponding to multiple dress-up types, such as skin tone, hair style, figure, glasses, accessories and background, where the multiple dress-up types include the target dress-up. The selection instruction instructs the electronic equipment to set the dress-up of the second three-dimensional character image to the target dress-up corresponding to the selection component. The dress-up completion instruction instructs the electronic equipment to exit the dress-up edit mode and marks the dress-up of the second three-dimensional character image as completed.
For example, Fig. 1E is a schematic diagram of a scene of editing the first three-dimensional character image provided by an embodiment of the present application. As shown in Fig. 1E, the recommended dress-up page includes five kinds of selection components (hair style, hair color, skin tone, glasses and clothes) as well as two functional components, retake and save: when the retake functional component is selected, the pending image is re-acquired, and when the save functional component is selected, the second three-dimensional character image is saved, which improves the flexibility of selection and the user experience.
It can be appreciated that after the first three-dimensional character image is created, it is previewed in real time. If a replacement dress-up instruction for the first three-dimensional character image is received, the recommended dress-up page is shown, where the user can select a satisfactory target dress-up. If a selection instruction corresponding to the selection component is received, the outfit of the first three-dimensional character image is changed according to the target dress-up to obtain the second three-dimensional character image; that is, a preview is shown during selection, and the user can make modifications according to the image effect of the second three-dimensional character image. If a dress-up completion instruction for the second three-dimensional character image is received, the second three-dimensional character image is stored, so that the next time the second three-dimensional character image is needed, editing operations can be performed directly on the stored image, which improves the flexibility of selection and the user experience.
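The dress-up editing loop of Fig. 1E can be sketched as a small instruction handler: selection instructions update the previewed image immediately, a save (completion) instruction stores the second image, and a retake instruction abandons it. The instruction names and image representation are assumptions.

```python
def edit_character(instructions, first_image):
    """Process dress-up page instructions against the first character image."""
    image = dict(first_image)  # the second image starts as a copy of the first
    for kind, value in instructions:
        if kind == "select":           # apply a target dress-up to one slot
            slot, choice = value
            image[slot] = choice       # the preview updates during selection
        elif kind == "save":           # dress-up completion instruction
            return "stored", image
        elif kind == "retake":         # re-acquire the pending image instead
            return "reshoot", None
    return "editing", image            # still in dress-up edit mode

ops = [("select", ("hair", "curly")), ("select", ("glasses", "round")),
       ("save", None)]
state, second = edit_character(ops, {"hair": "short", "glasses": None})
print(state, second["hair"], second["glasses"])  # stored curly round
```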
Consistent with the embodiment of Figure 1A, please refer to Fig. 2, which is a flow diagram of another image processing method provided by an embodiment of the present application. As shown in Fig. 2, the image processing method includes:
S201: Extract the target cranial image from the pending image.
S202: Obtain the multiple head feature information corresponding to the target cranial image.
S203: Classify the multiple head feature information to obtain a facial feature information collection and a non-facial feature information collection.
Optionally, the method further includes: obtaining the integrity degree of the target cranial image according to the facial feature information collection; and if the integrity degree is greater than the preset threshold, executing step S204.
It can be appreciated that the integrity degree of the target cranial image is first obtained according to the facial feature information collection, and when the integrity degree is greater than the preset threshold, the target cranial model is created according to the multiple head feature information of the target cranial image, which can improve the similarity between the target cranial model and the target cranial image.
S204: Determine the characterization information collection corresponding to the target cranial image according to the facial feature information collection, and determine the dress-up information collection corresponding to the target cranial image according to the non-facial feature information collection.
S205: Create the target cranial model according to the characterization information collection and the dress-up information collection.
S206: Create a first three-dimensional character image according to a pre-stored body model and the target cranial model.
Optionally, the method further includes: searching an album for multiple reference images matching the facial feature information collection; choosing at least one reference whole-body image from the multiple reference images; determining target proportion information between head and body according to the at least one reference whole-body image; and determining a target body model corresponding to the target proportion information according to pre-stored mapping relations between head-to-body proportion information and body models. The creating the first three-dimensional character image according to the pre-stored body model and the target cranial model then includes: creating the first three-dimensional character image according to the target body model and the target cranial model.
It can be appreciated that multiple reference images matching the facial feature information collection are first obtained from the album, at least one reference whole-body image is then chosen from the multiple reference images, the target proportion information between head and body is determined according to the at least one reference whole-body image, the target body model corresponding to the target proportion information is then determined from the pre-stored mapping relations between head-to-body proportion information and body models, and the first three-dimensional character image is finally created according to the target body model and the target cranial model. In this way, the result is closer to the user's image, and the harmony of the character is improved.
In the image processing method shown in Fig. 2, the electronic equipment extracts the target cranial image from the pending image, obtains the multiple head feature information corresponding to the target cranial image, classifies the multiple head feature information to obtain a facial feature information collection and a non-facial feature information collection, determines the characterization information collection corresponding to the target cranial image according to the facial feature information collection, determines the dress-up information collection corresponding to the target cranial image according to the non-facial feature information collection, creates the target cranial model according to the characterization information collection and the dress-up information collection, and then creates the first three-dimensional character image according to the pre-stored body model and the target cranial model. In this way, a three-dimensional character image corresponding to the target cranial image is created based on the feature information of the target cranial image, which improves the accuracy of creating the first three-dimensional character image and the user experience.
Consistent with the embodiment of Figure 1A, please refer to Fig. 3, which is a structural schematic diagram of an image processing apparatus provided by an embodiment of the present application. As shown in Fig. 3, the image processing apparatus 300 includes:
An extraction unit 301, configured to extract the target cranial image from a pending image;

An acquiring unit 302, configured to obtain the multiple head feature information corresponding to the target cranial image;

A creating unit 303, configured to create a target cranial model according to the multiple head feature information, and to create a first three-dimensional character image according to a pre-stored body model and the target cranial model.
It can be appreciated that the extraction unit 301 extracts the target cranial image from the pending image, the acquiring unit 302 obtains the multiple head feature information corresponding to the target cranial image, and the creating unit 303 creates the target cranial model according to the multiple head feature information and then creates the first three-dimensional character image according to the pre-stored body model and the target cranial model. In this way, a three-dimensional character image corresponding to the target cranial image can be created, which improves interest and the user experience.
In a possible example, in terms of the creating the target cranial model according to the multiple head feature information, the creating unit 303 is specifically configured to: classify the multiple head feature information to obtain a facial feature information collection and a non-facial feature information collection; determine the characterization information collection corresponding to the target cranial image according to the facial feature information collection; determine the dress-up information collection corresponding to the target cranial image according to the non-facial feature information collection; and create the target cranial model according to the characterization information collection and the dress-up information collection.
In a possible example, before the determining the characterization information collection corresponding to the target cranial image according to the facial feature information collection, the acquiring unit 302 is further configured to obtain the integrity degree of the target cranial image according to the facial feature information collection, and, if the integrity degree is greater than the preset threshold, to call the creating unit to execute the step of determining the characterization information collection corresponding to the target cranial image according to the facial feature information collection.
In a possible example, before the extracting the target cranial image from the pending image, the apparatus 300 further includes:
A searching unit 304, configured to search an album for multiple reference images matching the facial feature information collection;

A selection unit 305, configured to choose at least one reference whole-body image from the multiple reference images;

A determination unit 306, configured to determine the target proportion information between head and body according to the at least one reference whole-body image, and to determine the target body model corresponding to the target proportion information according to the pre-stored mapping relations between head-to-body proportion information and body models.
In terms of the creating the first three-dimensional character image according to the pre-stored body model and the target cranial model, the creating unit 303 is specifically configured to create the first three-dimensional character image according to the target body model and the target cranial model.
In a possible example, after the creating the first three-dimensional character image according to the pre-stored body model and the target cranial model, the apparatus 300 further includes:
A display unit 307, configured to display the first three-dimensional character image;

A jumping unit 308, configured to jump to the recommended dress-up page if a replacement dress-up instruction for the first three-dimensional character image is received, where the recommended dress-up page includes the selection component corresponding to the target dress-up;

A changing unit 309, configured to change the outfit of the first three-dimensional character image according to the target dress-up to obtain the second three-dimensional character image, if a selection instruction corresponding to the selection component is received;

A storage unit 310, configured to store the second three-dimensional character image if a dress-up completion instruction for the second three-dimensional character image is received.
Consistent with the embodiment of Figure 1A, please refer to Fig. 4, which is a structural schematic diagram of an electronic equipment provided by an embodiment of the present application. As shown in Fig. 4, the electronic equipment 400 includes a processor 410, a memory 420, a communication interface 430 and one or more programs 440, where the one or more programs 440 are stored in the memory 420 and configured to be executed by the processor 410, and the programs 440 include instructions for executing the following steps:
Extracting the target cranial image from a pending image;

Obtaining the multiple head feature information corresponding to the target cranial image;

Creating a target cranial model according to the multiple head feature information;

Creating a first three-dimensional character image according to a pre-stored body model and the target cranial model.
It can be appreciated that the electronic equipment 400 extracts the target cranial image from the pending image, obtains the multiple head feature information corresponding to the target cranial image, creates the target cranial model according to the multiple head feature information, and then creates the first three-dimensional character image according to the pre-stored body model and the target cranial model. In this way, a three-dimensional character image corresponding to the target cranial image can be created, which improves interest and the user experience.
In a possible example, in terms of the creating the target cranial model according to the multiple head feature information, the instructions in the programs 440 are specifically configured to execute the following operations:

Classifying the multiple head feature information to obtain a facial feature information collection and a non-facial feature information collection;

Determining the characterization information collection corresponding to the target cranial image according to the facial feature information collection;

Determining the dress-up information collection corresponding to the target cranial image according to the non-facial feature information collection;

Creating the target cranial model according to the characterization information collection and the dress-up information collection.
In a possible example, before the determining the characterization information collection corresponding to the target cranial image according to the facial feature information collection, the instructions in the programs 440 are further configured to execute the following operations:

Obtaining the integrity degree of the target cranial image according to the facial feature information collection;

If the integrity degree is greater than the preset threshold, executing the step of determining the characterization information collection corresponding to the target cranial image according to the facial feature information collection.
In a possible example, the instructions in the programs 440 are further configured to execute the following operations:

Searching an album for multiple reference images matching the facial feature information collection;

Choosing at least one reference whole-body image from the multiple reference images;

Determining the target proportion information between head and body according to the at least one reference whole-body image;

Determining the target body model corresponding to the target proportion information according to the pre-stored mapping relations between head-to-body proportion information and body models.

In terms of the creating the first three-dimensional character image according to the pre-stored body model and the target cranial model, the instructions in the programs 440 are specifically configured to execute the following operation:

Creating the first three-dimensional character image according to the target body model and the target cranial model.
In a possible example, after the creating the first three-dimensional character image according to the pre-stored body model and the target cranial model, the instructions in the programs 440 are further configured to execute the following operations:

Displaying the first three-dimensional character image;

If a replacement dress-up instruction for the first three-dimensional character image is received, jumping to a recommended dress-up page, where the recommended dress-up page includes the selection component corresponding to the target dress-up;

If a selection instruction corresponding to the selection component is received, changing the outfit of the first three-dimensional character image according to the target dress-up to obtain a second three-dimensional character image;

If a dress-up completion instruction for the second three-dimensional character image is received, storing the second three-dimensional character image.
An embodiment of the present application also provides a computer storage medium, where the computer storage medium stores a computer program, and the computer program causes a computer to execute some or all of the steps of any method described in the method embodiments; the computer includes an electronic equipment.
An embodiment of the present application also provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute some or all of the steps of any method described in the method embodiments. The computer program product may be a software installation package; the computer includes an electronic equipment.
It should be noted that, for brevity of description, each of the foregoing method embodiments is expressed as a series of action combinations; however, those skilled in the art should understand that the application is not limited by the described action sequence, because according to the application, certain steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in this specification are preferred embodiments, and the actions and modules involved are not necessarily required by the application.
In the above embodiments, the description of each embodiment has its own emphasis; for any part not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided herein, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely exemplary: the division into units is only a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Moreover, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses or units, and may be electrical or in other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software program module.
If the integrated unit is implemented in the form of a software program module and sold or used as an independent product, it may be stored in a computer-readable memory. Based on this understanding, the technical solution of the application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods of the embodiments of the application. The aforementioned memory includes various media that can store program code, such as a USB flash disk, a read-only memory (ROM), a random access memory (RAM), a mobile hard disk, a magnetic disk or an optical disc.
One of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be completed by a program instructing related hardware, and the program may be stored in a computer-readable memory, which may include a flash disk, a ROM, a RAM, a magnetic disk, an optical disc, or the like.
The embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the application, and the description of the above embodiments is only intended to help understand the method of the present application and its core ideas. Meanwhile, those of ordinary skill in the art may make changes to the specific implementations and the scope of application according to the ideas of the application. In summary, the contents of this specification should not be construed as limiting the present application.

Claims (12)

1. An image processing method, characterized by comprising:

extracting a target cranial image from a pending image;

obtaining multiple head feature information corresponding to the target cranial image;

creating a target cranial model according to the multiple head feature information;

creating a first three-dimensional character image according to a pre-stored body model and the target cranial model.
2. The method according to claim 1, characterized in that the creating the target cranial model according to the multiple head feature information comprises:

classifying the multiple head feature information to obtain a facial feature information collection and a non-facial feature information collection;

determining a characterization information collection corresponding to the target cranial image according to the facial feature information collection;

determining a dress-up information collection corresponding to the target cranial image according to the non-facial feature information collection;

creating the target cranial model according to the characterization information collection and the dress-up information collection.
3. The method according to claim 2, characterized in that before the determining the characterization information collection corresponding to the target cranial image according to the facial feature information collection, the method further comprises:

obtaining an integrity degree of the target cranial image according to the facial feature information collection;

if the integrity degree is greater than a preset threshold, executing the step of determining the characterization information collection corresponding to the target cranial image according to the facial feature information collection.
4. The method according to claim 2 or 3, characterized in that the method further comprises:

searching an album for multiple reference images matching the facial feature information collection;

choosing at least one reference whole-body image from the multiple reference images;

determining target proportion information between head and body according to the at least one reference whole-body image;

determining a target body model corresponding to the target proportion information according to pre-stored mapping relations between head-to-body proportion information and body models;

the creating the first three-dimensional character image according to the pre-stored body model and the target cranial model comprising:

creating the first three-dimensional character image according to the target body model and the target cranial model.
5. The method according to any one of claims 1-4, wherein after the creating a first three-dimensional character image according to a pre-stored body model and the target head model, the method further comprises:
displaying the first three-dimensional character image;
if a dress-up replacement instruction for the first three-dimensional character image is received, jumping to a recommended dress-up page, the recommended dress-up page comprising a selection component corresponding to a target dress-up;
if a selection instruction corresponding to the selection component is received, changing the outfit of the first three-dimensional character image according to the target dress-up to obtain a second three-dimensional character image;
if a dress-up completion instruction for the second three-dimensional character image is received, storing the second three-dimensional character image.
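The display / replace / select / complete interaction of claim 5 can be modeled as a small state machine, sketched below. The state names, event methods, and dictionary-based character image are assumptions made for the example.

```python
# Hypothetical state machine for the dress-up flow: display the first image,
# jump to a recommended dress-up page on a replacement instruction, change
# the outfit on a selection instruction, and store on completion.

class DressUpFlow:
    def __init__(self, first_image):
        self.image = first_image          # first three-dimensional character image
        self.state = "displaying"
        self.stored = None

    def replace_dress_up(self):
        """Dress-up replacement instruction: jump to the recommended page."""
        self.state = "recommend_page"

    def select(self, target_dress_up):
        """Selection instruction: change the outfit, yielding the second image."""
        if self.state == "recommend_page":
            self.image = {**self.image, "dress_up": target_dress_up}
            self.state = "displaying"

    def complete(self):
        """Dress-up completion instruction: store the second image."""
        self.stored = self.image
        return self.stored
```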
6. An image processing apparatus, comprising:
an extraction unit, configured to extract a target head image from an image to be processed;
an acquiring unit, configured to obtain a plurality of head feature information corresponding to the target head image;
a creating unit, configured to create a target head model according to the plurality of head feature information, and to create a first three-dimensional character image according to a pre-stored body model and the target head model.
7. The apparatus according to claim 6, wherein, in creating the target head model according to the plurality of head feature information, the creating unit is specifically configured to: classify the plurality of head feature information to obtain a facial feature information set and a non-facial feature information set; determine a characterization information set corresponding to the target head image according to the facial feature information set; determine a dress-up information set corresponding to the target head image according to the non-facial feature information set; and create the target head model according to the characterization information set and the dress-up information set.
8. The apparatus according to claim 7, wherein before the characterization information set corresponding to the target head image is determined according to the facial feature information set, the acquiring unit is further configured to obtain an integrity degree of the target head image according to the facial feature information set; and if the integrity degree is greater than a preset threshold, invoke the creating unit to perform the step of determining the characterization information set corresponding to the target head image according to the facial feature information set.
9. The apparatus according to claim 7 or 8, wherein, before the target head image is extracted from the image to be processed, the apparatus further comprises:
a searching unit, configured to search an album for a plurality of reference images that successfully match the facial feature information set;
a selection unit, configured to select at least one reference whole-body image from the plurality of reference images;
a determination unit, configured to determine target proportion information between head and body according to the at least one reference whole-body image, and to determine a target body model corresponding to the target proportion information according to a pre-stored mapping relationship between head-to-body proportion information and body models;
wherein, in creating the first three-dimensional character image according to the pre-stored body model and the target head model, the creating unit is specifically configured to create the first three-dimensional character image according to the target body model and the target head model.
10. The apparatus according to any one of claims 6-9, wherein after the first three-dimensional character image is created according to the pre-stored body model and the target head model, the apparatus further comprises:
a display unit, configured to display the first three-dimensional character image;
a jumping unit, configured to jump to a recommended dress-up page if a dress-up replacement instruction for the first three-dimensional character image is received, the recommended dress-up page comprising a selection component corresponding to a target dress-up;
an outfit-changing unit, configured to change the outfit of the first three-dimensional character image according to the target dress-up to obtain a second three-dimensional character image if a selection instruction corresponding to the selection component is received;
a storage unit, configured to store the second three-dimensional character image if a dress-up completion instruction for the second three-dimensional character image is received.
11. An electronic device, comprising a processor, a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, and the programs comprise instructions for performing the steps in the method according to any one of claims 1-5.
12. A computer-readable storage medium, configured to store a computer program, wherein the computer program causes a computer to execute the method according to any one of claims 1-5.
CN201810633163.1A 2018-06-15 2018-06-15 Image processing method and related product Active CN108629339B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810633163.1A CN108629339B (en) 2018-06-15 2018-06-15 Image processing method and related product


Publications (2)

Publication Number Publication Date
CN108629339A true CN108629339A (en) 2018-10-09
CN108629339B CN108629339B (en) 2022-10-18

Family

ID=63691979

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810633163.1A Active CN108629339B (en) 2018-06-15 2018-06-15 Image processing method and related product

Country Status (1)

Country Link
CN (1) CN108629339B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103200181A (en) * 2013-03-11 2013-07-10 刘强 Network virtual method based on user real identification
CN104318262A (en) * 2014-09-12 2015-01-28 上海明穆电子科技有限公司 Method and system for replacing skin through human face photos
CN106815557A (en) * 2016-12-20 2017-06-09 北京奇虎科技有限公司 A kind of evaluation method of face features, device and mobile terminal
CN106934851A (en) * 2017-02-16 2017-07-07 落地创意(武汉)科技有限公司 A kind of mobile phone scans portrait 3D printing method and system
CN107174826A (en) * 2017-05-25 2017-09-19 合肥泽诺信息科技有限公司 A kind of game role based on augmented reality is played the part of with the dressing system that changes the outfit
CN107124560A (en) * 2017-06-19 2017-09-01 上海爱优威软件开发有限公司 A kind of self-heterodyne system, medium and method
CN107862712A (en) * 2017-10-20 2018-03-30 陈宸 Sized data determines method, apparatus, storage medium and processor

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109584145A (en) * 2018-10-15 2019-04-05 深圳市商汤科技有限公司 Cartoonize method and apparatus, electronic equipment and computer storage medium
CN109597480A (en) * 2018-11-06 2019-04-09 北京奇虎科技有限公司 Man-machine interaction method, device, electronic equipment and computer readable storage medium
CN111259695B (en) * 2018-11-30 2023-08-29 百度在线网络技术(北京)有限公司 Method and device for acquiring information
CN111259695A (en) * 2018-11-30 2020-06-09 百度在线网络技术(北京)有限公司 Method and device for acquiring information
WO2021004322A1 (en) * 2019-07-09 2021-01-14 北京字节跳动网络技术有限公司 Head special effect processing method and apparatus, and storage medium
CN110321865A (en) * 2019-07-09 2019-10-11 北京字节跳动网络技术有限公司 Head effect processing method and device, storage medium
WO2021082787A1 (en) * 2019-10-30 2021-05-06 腾讯科技(深圳)有限公司 Virtual operation object generation method and device, storage medium and electronic apparatus
CN110755847B (en) * 2019-10-30 2021-03-16 腾讯科技(深圳)有限公司 Virtual operation object generation method and device, storage medium and electronic device
US11380037B2 (en) 2019-10-30 2022-07-05 Tencent Technology (Shenzhen) Company Limited Method and apparatus for generating virtual operating object, storage medium, and electronic device
CN110755847A (en) * 2019-10-30 2020-02-07 腾讯科技(深圳)有限公司 Virtual operation object generation method and device, storage medium and electronic device
CN111383198A (en) * 2020-03-17 2020-07-07 Oppo广东移动通信有限公司 Image processing method and related product
CN111383198B (en) * 2020-03-17 2023-04-25 Oppo广东移动通信有限公司 Image processing method and related product
CN112884908A (en) * 2021-02-09 2021-06-01 脸萌有限公司 Augmented reality-based display method, device, storage medium, and program product
US20220254116A1 (en) 2021-02-09 2022-08-11 Beijing Zitiao Network Technology Co., Ltd. Display method based on augmented reality, device, storage medium and program product
WO2022170958A1 (en) * 2021-02-09 2022-08-18 北京字跳网络技术有限公司 Augmented reality-based display method and device, storage medium, and program product
US11763533B2 (en) 2021-02-09 2023-09-19 Beijing Zitiao Network Technology Co., Ltd. Display method based on augmented reality, device, storage medium and program product
CN113408669A (en) * 2021-07-30 2021-09-17 浙江大华技术股份有限公司 Image determination method and device, storage medium and electronic device
CN113408669B (en) * 2021-07-30 2023-06-16 浙江大华技术股份有限公司 Image determining method and device, storage medium and electronic device

Also Published As

Publication number Publication date
CN108629339B (en) 2022-10-18

Similar Documents

Publication Publication Date Title
CN108629339A (en) Image processing method and related product
US11798246B2 (en) Electronic device for generating image including 3D avatar reflecting face motion through 3D avatar corresponding to face and method of operating same
KR102241153B1 (en) Method, apparatus, and system generating 3d avartar from 2d image
US12002160B2 (en) Avatar generation method, apparatus and device, and medium
CN109447895B (en) Picture generation method and device, storage medium and electronic device
TW202044202A (en) Method and apparatus for rendering face model , computer readable storage medium and electronic device
CN110956691B (en) Three-dimensional face reconstruction method, device, equipment and storage medium
CN111325846B (en) Expression base determination method, avatar driving method, device and medium
CN113362263B (en) Method, apparatus, medium and program product for transforming an image of a virtual idol
CN111833236B (en) Method and device for generating three-dimensional face model for simulating user
CN106161939A (en) A kind of method, photo taking and terminal
CN111783511A (en) Beauty treatment method, device, terminal and storage medium
CN110755847B (en) Virtual operation object generation method and device, storage medium and electronic device
CN113628327A (en) Head three-dimensional reconstruction method and equipment
CN111862116A (en) Animation portrait generation method and device, storage medium and computer equipment
US20220284678A1 (en) Method and apparatus for processing face information and electronic device and storage medium
CN111832372A (en) Method and device for generating three-dimensional face model simulating user
WO2023138345A1 (en) Virtual image generation method and system
CN113658324A (en) Image processing method and related equipment, migration network training method and related equipment
CN110276657A (en) Determination method, apparatus, storage medium and the electronic device of target object
CN110489634A (en) A kind of build information recommended method, device, system and terminal device
CN114612614A (en) Human body model reconstruction method and device, computer equipment and storage medium
CN113952738A (en) Virtual character head portrait generation method and device, electronic equipment and readable medium
CN117274504B (en) Intelligent business card manufacturing method, intelligent sales system and storage medium
CN115999156B (en) Role control method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant