CN109302628A - Live-streaming-based face processing method, apparatus, device and storage medium - Google Patents
Live-streaming-based face processing method, apparatus, device and storage medium
- Publication number
- CN109302628A (application number CN201811241860.9A)
- Authority
- CN
- China
- Prior art keywords
- face
- target
- data
- feature
- human face
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/422—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
- H04N21/4223—Cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/4312—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44008—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
Abstract
The invention discloses a live-streaming-based face processing method, apparatus, device and storage medium. The method comprises: collecting image data when a live room is started; performing face detection on the image data to obtain target face data and a target face feature in the target face data; comparing the target face feature with a preset standard face feature, and performing image processing on the target face data according to the comparison result; and generating a live data stream of the live room from the target face data after image processing. The method solves the problems in existing live video technology that automatic beautification looks unnatural, while manual beautification requires the user to spend considerable time, with troublesome debugging steps and complex parameters.
Description
Technical field
The embodiments of the present invention relate to image processing technology, and in particular to a live-streaming-based face processing method, apparatus, device and storage medium.
Background art
With the widespread popularity of live streaming as a form of entertainment, a host who wants a satisfactory broadcast usually needs live-streaming software with video retouching capability. As mobile live-streaming software becomes more common, users' expectations of its beautification function also rise: the beautified result should stay close to the user's real appearance while still concealing flaws. Users place especially strict requirements on common effects such as skin smoothing, skin buffing and face slimming.
Existing beautification methods for facial images apply one identical template to every face: after recognizing a face, products such as Meitu Xiuxiu process all facial images with the same fixed set of whitening and skin-buffing templates, and cannot provide a beautification effect tailored to the different facial features of each image, so the result is uniform. Fuller beautification requires manual adjustment by the user, which is hard to learn, time-consuming, troublesome in its steps and complex in its parameters, and the resulting programs run inefficiently and look unnatural.
Summary of the invention
The present invention provides a live-streaming-based face processing method, apparatus, device and storage medium, solving the problems in existing live video technology that automatic beautification looks unnatural, while manual beautification requires the user to spend considerable time, with troublesome debugging steps and complex parameters.
In a first aspect, an embodiment of the invention provides a live-streaming-based face processing method, comprising:
collecting image data when a live room is started;
performing face detection on the image data to obtain target face data and a target face feature in the target face data;
comparing the target face feature with a preset standard face feature, and performing image processing on the target face data according to the comparison result;
generating a live data stream of the live room from the target face data after image processing.
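By way of illustration only, the four steps above can be sketched in a few lines of Python. Every helper name (detect_face, compare_and_process, process_live_frame) and data shape here is an assumption for the sketch, not part of the claimed method:

```python
# Minimal sketch of the claimed pipeline: detect -> compare -> process -> stream.
# The feature representation (lists of numbers per feature) is assumed.

def detect_face(frame):
    # Placeholder face detector: returns (face_data, face_features).
    return frame, {"contour": [1.0, 2.0], "eyes": [0.5, 0.5]}

def compare_and_process(face_data, features, standard):
    # Compare each target feature with the preset standard feature;
    # the per-feature differences drive the image processing.
    diff = {k: [t - s for t, s in zip(features[k], standard[k])]
            for k in standard}
    return face_data, diff

def process_live_frame(frame, standard_features):
    face_data, features = detect_face(frame)
    face_data, diff = compare_and_process(face_data, features,
                                          standard_features)
    # A real implementation would now encode face_data into the live stream.
    return {"frame": face_data, "feature_diff": diff}

result = process_live_frame("frame-0",
                            {"contour": [1.0, 1.5], "eyes": [0.5, 0.25]})
```

The sketch only demonstrates the data flow between the four claimed steps; the actual detection, comparison and stream generation are detailed in the embodiments below.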
In a second aspect, an embodiment of the invention further provides a live-streaming-based face processing apparatus, comprising:
an image collection module, for collecting image data when a live room is started;
a feature extraction module, for performing face detection on the image data to obtain target face data and a target face feature in the target face data;
a feature comparison module, for comparing the target face feature with a preset standard face feature and performing image processing on the target face data according to the comparison result;
a data stream generation module, for generating a live data stream of the live room from the target face data after image processing.
In a third aspect, an embodiment of the invention further provides an electronic device comprising a central processing unit and a graphics processor; the central processing unit comprises the image collection module, the feature extraction module and the data stream generation module, and the graphics processor comprises the feature comparison module;
the image collection module collects image data when a live room is started;
the feature extraction module performs face detection on the image data to obtain target face data and a target face feature in the target face data;
the feature comparison module compares the target face feature with a preset standard face feature and performs image processing on the target face data according to the comparison result;
the data stream generation module generates a live data stream of the live room from the target face data after image processing.
In a fourth aspect, an embodiment of the invention further provides an electronic device, comprising:
one or more processors; and
a memory for storing one or more programs;
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the live-streaming-based face processing method of any embodiment.
In a fifth aspect, an embodiment of the invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the live-streaming-based face processing method of any embodiment.
The present invention obtains target face data and determines a target face feature; compares the target face feature with a standard face feature and performs image processing on the target face data according to the comparison result; and finally generates the live data stream. This solves the problems in existing live video technology that automatic beautification looks unnatural while manual beautification requires much user time, troublesome debugging steps and complex parameters, and achieves automatic optimization of beauty operations on the face during a live video broadcast according to information such as facial contour, eye size and eye spacing. On the original basis, the time the user spends on parameter handling is reduced, the program runs efficiently with low energy consumption and fast response, and the user experience is ultimately improved.
Brief description of the drawings
Fig. 1 is a flowchart of a live-streaming-based face processing method provided by Embodiment 1 of the present invention;
Fig. 2A is a flowchart of a live-streaming-based face processing method provided by Embodiment 2 of the present invention;
Fig. 2B is a schematic diagram, provided by Embodiment 2, of obtaining target image data from image data;
Fig. 3 is a structural diagram of a face processing apparatus provided by Embodiment 3 of the present invention;
Fig. 4 is a structural diagram of an electronic device provided by Embodiment 4 of the present invention;
Fig. 5 is a structural diagram of an electronic device provided by Embodiment 5 of the present invention.
Detailed description of the embodiments
The present invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here only explain the invention and do not limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the invention rather than the entire structure.
Embodiment one
Fig. 1 is a flowchart of a live-streaming-based face processing method provided by Embodiment 1 of the present invention. The technical solution of this embodiment is optionally applicable to the scenario in which a host generates video information through a camera device during a live broadcast. It should be understood that the solution is equally applicable to other scenarios, as long as video information needs to be beautified. The method is executed by a live-streaming-based face processing apparatus, which can be implemented in software and/or hardware and is generally arranged in an electronic device. The electronic device usually has both a CPU (Central Processing Unit) and a GPU (Graphics Processing Unit), although an electronic device with only a CPU can also perform these operations.
This solution is mainly applicable to the scenario in which a host on a live-streaming platform broadcasts through a camera device. The live-streaming platform comprises multiple live rooms, and each live room carries information such as its uniform resource locator (URL), room number, current state (in use or idle) and live content. The platform can cluster rooms according to their live content. The users of the platform fall into two main groups, viewers and hosts, which play different roles on the platform and therefore have different permissions and data processing methods. A live broadcast requires the cooperation of live-streaming software and hardware devices, and can be carried out through means such as a computer with a camera device or a mobile terminal.
With reference to Fig. 1, this method comprises:
S101: collect image data when the live room is started.
Here, starting the live room means that the host starts the live-streaming software. Image data refers to the set of gray values of the pixels, expressed numerically. Collecting image data means capturing the host's broadcast picture through the camera device. It should be understood that, since this embodiment targets the live-broadcast scenario, audio data should be collected at the same time as the image data.
Specifically, when the host starts the live room, the face processing apparatus captures the broadcast picture through the camera device, and the image data obtained at this point corresponds to each picture frame.
S102: perform face detection on the image data to obtain target face data and a target face feature in the target face data.
Here, performing face detection means using a face detection method to detect whether a facial image exists in the image data, together with information such as its exact position. Target face data refers to the face data obtained from the image data. A target face feature refers to a specific part of the target face data, for example a target facial contour feature or a target eye contour feature.
Specifically, the face processing apparatus performs face detection on the image data through the CPU; once a face image is detected it is abstracted as the target face data, which is then processed to obtain the target face feature.
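As an illustrative sketch of this feature-extraction step, suppose the detector has already returned landmark points for the jawline and the eye corners (the landmark layout and the function name are assumptions for the sketch, not the patent's own interface):

```python
import numpy as np

# Hypothetical landmark layout: first 5 points are the jawline,
# next 4 points are the left and right eye corners.

def extract_features(landmarks):
    pts = np.asarray(landmarks, dtype=float)
    jaw = pts[:5]            # assumed jawline points
    eyes = pts[5:9]          # assumed eye-corner points
    # Contour feature: successive gradients along the jawline,
    # comparable with the standard contour's gradients.
    deltas = np.diff(jaw, axis=0)
    contour_grad = deltas[:, 1] / np.where(deltas[:, 0] == 0,
                                           1e-9, deltas[:, 0])
    # Eye features: per-eye width and the spacing between eye centres.
    eye_sizes = (np.linalg.norm(eyes[1] - eyes[0]),
                 np.linalg.norm(eyes[3] - eyes[2]))
    left_centre = (eyes[0] + eyes[1]) / 2
    right_centre = (eyes[2] + eyes[3]) / 2
    eye_spacing = float(np.linalg.norm(right_centre - left_centre))
    return contour_grad, eye_sizes, eye_spacing

landmarks = [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4),   # jawline (assumed)
             (0, 0), (2, 0), (6, 0), (8, 0)]           # eye corners (assumed)
contour_grad, eye_sizes, eye_spacing = extract_features(landmarks)
```

These three quantities (contour gradients, eye sizes, eye spacing) are exactly the target face features that steps S203 and S206 below compare with the standard face.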
S103: compare the target face feature with the preset standard face feature, and perform image processing on the target face data according to the comparison result.
Here, the standard face is the beautification target; it can be the face generally considered most attractive according to big data, or a good-looking face set by the user. The preset standard face feature is the feature obtained after processing the standard face.
Specifically, the face processing apparatus compares, through the GPU, the obtained target face feature with the standard face feature obtained from the standard face, and adjusts the target face data according to the comparison result so that the target face data fits the standard face data. Alternatively, the CPU can extract the target face feature and hand it to the GPU for comparison, or the comparison can be performed directly by the CPU.
S104: generate the live data stream of the live room from the target face data after image processing.
Here, the live data stream includes a stream for local echo and a stream to be transmitted to viewer clients. Meanwhile, the audio and video are packaged into a video file and uploaded to the live server in streaming fashion, and the live server can provide it to viewers.
Specifically, the face processing apparatus generates, through the CPU, the live data stream of the live room from the target face data after image processing (that is, adjusted according to the comparison result so that it fits the standard face data); this data can be used for video echo and for data distribution (stream transmission through means such as a content delivery network).
The embodiment of the invention obtains target face data and determines the target face feature; compares the target face feature with the standard face feature and performs image processing on the target face data according to the comparison result; and finally generates the live data stream. This solves the problems in existing live video technology that automatic beautification looks unnatural while manual beautification requires much user time, troublesome debugging steps and complex parameters, and automatically optimizes beauty operations on the face during a live video broadcast according to information such as facial contour, eye size and eye spacing. On the original basis, the time the user spends on parameter handling is reduced, the program runs efficiently with low energy consumption and fast response, and the user experience is ultimately improved.
Embodiment two
Fig. 2A is a flowchart of a live-streaming-based face processing method provided by Embodiment 2 of the present invention. This embodiment refines Embodiment 1 and mainly describes how the target face feature is fitted to the standard face feature when it is, respectively, a target facial contour feature and a target eye contour feature.
With reference to Fig. 2A, this embodiment specifically comprises the following steps:
S201: collect image data when the live room is started.
Specifically, when the host starts the live room, the face processing apparatus captures the broadcast picture through the camera device, and the image data obtained at this point corresponds to each picture frame.
S202: perform face detection on the image data to obtain target face data and a target face feature in the target face data.
Specifically, the face processing apparatus performs face detection on the image data through the CPU; once a face image is detected it is abstracted as the target face data, which is then processed to obtain the target face feature.
S203: compare the target facial contour feature with the standard facial contour feature, and perform image processing on the facial contour in the target face data according to the comparison result.
Here, the target facial contour feature is the facial-contour part of the target face feature, and the standard facial contour feature is the facial-contour part of the standard face feature.
Specifically, the face processing apparatus compares the facial-contour part of the target face feature with the facial-contour part of the standard face, takes the standard facial contour feature as the fitting target, and adjusts the target facial contour feature within an appropriate range so that the target face data fits the standard face data.
Optionally, step S203 can be refined into the following steps:
determining a first target bending-stretch coefficient from the gradient difference between a first gradient value of the target facial contour feature and a second gradient value of the standard facial contour feature;
performing image processing on the basis of the facial contour in the target face data according to the first target bending-stretch coefficient.
Specifically, the first gradient value of the target facial contour feature is calculated; since the standard facial contour feature is pre-stored on the server, its second gradient value can be obtained directly from the server; the gradient difference between the first gradient value and the second gradient value is calculated; the gradient difference is passed through the first bending-stretch function to obtain the first target bending-stretch coefficient; and image processing is performed on the basis of the facial contour in the target face data according to the first target bending-stretch coefficient.
Here, performing image processing on the basis of the facial contour in the target face data according to the first target bending-stretch coefficient specifically includes:
determining an adjustment reference value;
selecting points to be adjusted from the facial contour in the target face data and determining the adjustment coefficient corresponding to each point, where the number of points to be adjusted is two or more;
taking each point to be adjusted as a circle centre and the product of the adjustment reference value and the adjustment coefficient as a radius, determining the adjustment range;
performing image processing, according to the first target bending-stretch coefficient, on the facial contour in the target face data within the adjustment range, to obtain an intermediate facial contour;
mixing the intermediate facial contours corresponding to the points to be adjusted to obtain the facial contour after image processing.
Here, the adjustment reference value is a parameter for determining the radius of the adjustment range, and can preferably be set to the distance from nose to chin in the target face data. The points to be adjusted are points on the facial contour in the target face data; selecting more points yields a finer facial contour, but since this method mainly targets the live-streaming field, a total of four points at the cheekbone and cheek positions of the face are preferred as points to be adjusted in order to guarantee real-time performance. The adjustment coefficient is used to modify the radius of the adjustment range; it can be chosen in the range 0.8 to 1.2, and different points to be adjusted have different adjustment coefficients. The mixing can be carried out in several ways; preferably, the four intermediate facial contours are overlaid, the point closer to the nose is taken wherever they overlap, and the lines are then smoothed.
Here, the formula for performing image processing on the facial contour in the target face data within the adjustment range according to the first target bending-stretch coefficient, to obtain the intermediate facial contour, is:

Image_face′ = Image_face * ((α × R) ⊙ f₁(σ − σ′))

where Image_face′ denotes the intermediate facial contour corresponding to a given point to be adjusted; α is the adjustment coefficient of that point; R is the adjustment reference value; (α × R) is the radius of the adjustment range centred on the point to be adjusted; σ is the first gradient value and σ′ is the second gradient value; f₁(σ − σ′) denotes substituting the gradient difference into the first bending-stretch function; ⊙ denotes applying f₁(σ − σ′) within the adjustment range (α × R) centred on the point to be adjusted; Image_face denotes the target facial contour feature; and * denotes that the operation (α × R) ⊙ f₁(σ − σ′) is carried out on the basis of Image_face.
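The per-point adjustment above can be illustrated with a short sketch. The bending-stretch function f_bend, the linear falloff inside the radius and the helper names are all assumptions chosen for the sketch; the patent does not specify the form of the first bending-stretch function:

```python
import numpy as np

# Sketch: every contour point within radius alpha * R of an adjustment
# point is pulled toward that point, scaled by a coefficient derived
# from the gradient difference between target and standard contours.

def f_bend(grad_diff, strength=0.5):
    # Assumed first bending-stretch function: maps the gradient
    # difference to a bounded coefficient in (0, strength).
    return strength * np.tanh(abs(grad_diff))

def adjust_contour(contour, anchor, alpha, R, grad_target, grad_standard):
    contour = np.asarray(contour, dtype=float)
    anchor = np.asarray(anchor, dtype=float)
    radius = alpha * R                       # adjustment range (alpha x R)
    coeff = f_bend(grad_target - grad_standard)
    dist = np.linalg.norm(contour - anchor, axis=1)
    # Only points inside the adjustment range move; a linear falloff
    # blends the edited region into the untouched contour.
    weight = np.clip(1.0 - dist / radius, 0.0, 1.0) * coeff
    return contour + weight[:, None] * (anchor - contour)

pts = [(0.0, 0.0), (2.0, 0.0), (10.0, 0.0)]
out = adjust_contour(pts, anchor=(0.0, 0.0), alpha=1.0, R=5.0,
                     grad_target=2.0, grad_standard=1.0)
```

Running this for each of the four preferred points (cheekbones and cheeks) and mixing the results corresponds to obtaining and blending the intermediate facial contours.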
S204: in the image data, cover the target face data before image processing with the target face data after image processing, to obtain target image data.
Here, covering means filling the target face feature in the pre-processing target face data with a solid colour, and then loading the post-processing target face data onto the solid-filled part. Target image data refers to the beautified image data used to generate the data stream.
Specifically, after the target face feature (the facial contour feature and/or eye contour feature) is determined, that feature in the image data is cut out or filled with a solid colour, the target face data after image processing covers the target face data before image processing, and the resulting image data serves as the target image data. In this way the target face feature positions are beautified precisely, without distorting the rest of the background.
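The covering operation of step S204 amounts to a masked composite. A minimal sketch, assuming single-channel images as numpy arrays and a boolean mask marking the face region (the function name and mask shape are illustrative):

```python
import numpy as np

# Sketch of step S204: the processed face patch overwrites the original
# face region selected by the mask, so only the face area is beautified
# and the background stays untouched.

def composite(frame, processed_face, mask):
    frame = np.asarray(frame, dtype=float)
    processed_face = np.asarray(processed_face, dtype=float)
    mask = np.asarray(mask, dtype=bool)
    out = frame.copy()
    out[mask] = processed_face[mask]   # cover only the face region
    return out

frame = np.zeros((4, 4))               # original image data
face = np.full((4, 4), 9.0)            # face data after image processing
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                  # assumed face region
result = composite(frame, face, mask)
```

Because only the masked pixels change, the precise-beautification property described above (no background distortion) follows directly from the construction.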
S205: generate the live data stream of the live room based on the target image data.
Specifically, the face processing apparatus generates, through the CPU, the live data stream of the live room from the target image data; this data can be used for video echo and data distribution (stream transmission through means such as a content delivery network).
On the basis of the above, step S203 covers the case where the target face feature is the target facial contour feature. Step S203 can be replaced by the case where the target face feature is the target eye contour feature, denoted step S206. Either of steps S203 and S206 can be performed alone, or both can be performed; preferably the facial contour is processed first and the eye contour afterwards.
Step S206 compares the target eye contour feature with the standard eye contour feature and performs image processing on the eye contour in the target face data according to the comparison result.
Here, the target eye contour feature is the eye-contour part of the target face feature, and the standard eye contour feature is the eye-contour part of the standard face feature.
Specifically, the face processing apparatus compares the eye-contour part of the target face feature with the eye-contour part of the standard face, takes the standard eye contour feature as the fitting target, and adjusts the target eye contour feature within an appropriate range so that the target face data fits the standard face data.
Optionally, step S206 can be refined into the following steps:
Step 1: calculate the distances between the target eye contour features to obtain the target eye size and target eye spacing in the target face data;
Step 2: obtain the standard eye size and standard eye spacing in the standard face data;
Step 3: calculate the size difference between the target eye size and the standard eye size;
Step 4: calculate the distance difference between the target eye spacing and the standard eye spacing;
Step 5: pass the size difference through the zoom-stretch function to obtain the target zoom coefficient;
Step 6: pass the distance difference through the second bending-stretch function to obtain the second target bending-stretch coefficient;
Step 7: perform image processing on the basis of the eye contour in the target face data according to the target zoom coefficient and the second target bending-stretch coefficient simultaneously.
Optionally, step S206 may be expressed by the following formula:
Image′_eye = [Zoom(Δ_size), Bend₂(Δ_dist)] * Image_eye
where Image′_eye denotes the eye contour after image processing; Δ_size denotes the size difference and Δ_dist the distance difference; Zoom(Δ_size) denotes applying the zoom stretch function to the size difference to obtain the target zoom coefficient; Bend₂(Δ_dist) denotes applying the second bend tension function to the distance difference to obtain the second target bend tension coefficient; the two calculations are performed simultaneously; Image_eye denotes the target eye contour; and * denotes that the processing is carried out on the basis of Image_eye.
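A sketch of how such a formula might be applied to a set of eye contour points follows. The particular composition chosen here (scaling about the eye center by the zoom coefficient, plus an x-direction shift weighted by the bend tension coefficient) is an assumption; the description only states that both operations are carried out simultaneously on the basis of Image_eye.

```python
import numpy as np

def apply_eye_formula(eye_contour, zoom_coef, bend_coef):
    center = eye_contour.mean(axis=0)
    # Zoom stretch: scale the contour about its center by the zoom coefficient.
    out = center + zoom_coef * (eye_contour - center)
    # Second bend tension: shift x-coordinates proportionally to the coefficient.
    out[:, 0] += bend_coef * (out[:, 0] - center[0])
    return out

eye = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.0], [0.0, 1.0]])
out = apply_eye_formula(eye, zoom_coef=1.1, bend_coef=0.0)
print(out)
```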
Fig. 2 B is the schematic diagram provided by Embodiment 2 of the present invention that destination image data is obtained by image data.With reference to figure
2B, the target face contour feature 23 in image data 20 obtain the face after image procossing by the processing of step S203
Profile 24;Target eye contour feature 21 in image data 20 by step S206 processing, after obtaining image procossing
Eye profile 22;Eye profile 22 and the combination of face mask 24 after image procossing obtain destination image data 25.
In the embodiment of the present invention, target face data is acquired to determine the target face feature; the target face feature is compared with the standard face feature, image processing is performed on the target face data according to the comparison result, and the live data stream is finally uploaded. This embodiment also discloses how fitting to the standard face feature is performed when the target face feature is the target face contour feature and the target eye contour feature, respectively. This solves the problems in existing live video streaming technology that automatic beautification is unnatural while manual beautification requires the user to spend a great deal of time, with troublesome debugging steps and complex parameters. During live video streaming, the face is beautified automatically according to information such as the facial contour, eye size and eye spacing of the person, which reduces the time the user spends on parameter adjustment and achieves high program efficiency, low energy consumption and fast response, ultimately improving the user experience.
Embodiment Three
Fig. 3 is a structural diagram of a face processing device provided by Embodiment Three of the present invention. The device includes: an image acquisition module 31, a feature extraction module 32, a feature comparison module 33 and a data stream generation module 34. Wherein:
the image acquisition module 31 is configured to acquire image data when a live room is started;
the feature extraction module 32 is configured to perform face detection on the image data to obtain target face data and the target face feature in the target face data;
the feature comparison module 33 is configured to compare the target face feature with a preset standard face feature and perform image processing on the target face data according to the comparison result;
the data stream generation module 34 is configured to generate the live data stream of the live room according to the target face data after image processing.
In the embodiment of the present invention, target face data is acquired to determine the target face feature; the target face feature is compared with the standard face feature, image processing is performed on the target face data according to the comparison result, and the live data stream is finally uploaded. This solves the problems in existing live video streaming technology that automatic beautification is unnatural while manual beautification requires the user to spend a great deal of time, with troublesome debugging steps and complex parameters; the face is beautified automatically during live video streaming according to information such as facial contour, eye size and eye spacing, which reduces the time the user spends on parameter adjustment and achieves high program efficiency, low energy consumption and fast response, ultimately improving the user experience.
On the basis of the above embodiments, the target face feature includes a target face contour feature, and the standard face feature includes a standard face contour feature in the standard face data; in this case, the feature comparison module is configured to:
compare the target face contour feature with the standard face contour feature, and perform image processing on the face contour in the target face data according to the comparison result.
On the basis of the above embodiments, comparing the target face contour feature with the standard face contour feature and performing image processing on the face contour in the target face data according to the comparison result includes:
determining a first target bend tension coefficient from the gradient difference between a first gradient value of the target face contour feature and a second gradient value of the standard face contour feature;
performing image processing on the basis of the face contour in the target face data according to the first target bend tension coefficient.
On the basis of the above implementation, performing image processing on the basis of the face contour in the target face data according to the first target bend tension coefficient specifically includes:
determining an adjustment reference value;
selecting points to be adjusted from the face contour in the target face data, and determining the adjustment coefficient corresponding to each point to be adjusted, where the number of points to be adjusted is two or more;
taking each point to be adjusted as the center of a circle whose radius is the product of the adjustment reference value and the adjustment coefficient, determining an adjustment range;
performing image processing on the face contour in the target face data within the adjustment range according to the first target bend tension coefficient to obtain an intermediate face contour;
blending the intermediate face contours corresponding to the points to be adjusted to obtain the face contour after image processing.
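These refinement steps can be illustrated with a short sketch. The falloff weighting inside each circular adjustment range and the use of a simple average as the blending step are assumptions for illustration; the embodiment only fixes the radius as adjustment reference value × adjustment coefficient and requires the intermediate face contours to be blended.

```python
import numpy as np

def adjust_contour(contour, point_indices, adj_coefs, base_value, bend_coef):
    intermediates = []
    for idx, coef in zip(point_indices, adj_coefs):
        center = contour[idx]
        radius = base_value * coef                 # adjustment range radius
        out = contour.copy()
        d = np.linalg.norm(contour - center, axis=1)
        inside = d < radius
        # Assumed falloff: full effect at the chosen point, none at the boundary.
        w = np.zeros(len(contour))
        w[inside] = 1.0 - d[inside] / radius
        # Displace points within the range according to the bend tension coefficient.
        out[:, 1] += bend_coef * w * radius
        intermediates.append(out)
    # Blend the intermediate face contours (here: simple average).
    return np.mean(intermediates, axis=0)

contour = np.stack([np.linspace(0.0, 100.0, 11), np.full(11, 50.0)], axis=1)
final = adjust_contour(contour, point_indices=[3, 7], adj_coefs=[1.0, 1.0],
                       base_value=25.0, bend_coef=0.2)
print(final.shape)
```

Because each point to be adjusted produces its own intermediate contour, regions covered by several adjustment circles are smoothed by the blend rather than displaced twice in full.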
On the basis of the above implementation, the target face feature includes a target eye contour feature, and the standard face feature includes a standard eye contour feature of a standard face; in this case, the feature comparison module is configured to:
compare the target eye contour feature with the standard eye contour feature, and perform image processing on the eye contour in the target face data according to the comparison result.
On the basis of the above implementation, comparing the target eye contour feature with the standard eye contour feature and performing image processing on the eye contour in the target face data according to the comparison result, so as to fit the eye contour in the standard face data, includes:
calculating the distances between points of the target eye contour feature to obtain the target eye size and target eye spacing in the target face data;
obtaining the standard eye size and standard eye spacing in the standard face data;
calculating the size difference between the target eye size and the standard eye size;
calculating the distance difference between the target eye spacing and the standard eye spacing;
applying the zoom stretch function to the size difference to obtain the target zoom coefficient;
applying the second bend tension function to the distance difference to obtain the second target bend tension coefficient;
performing image processing on the basis of the eye contour in the target face data according to the target zoom coefficient and the second target bend tension coefficient simultaneously.
On the basis of the above implementation, the data stream generation module is specifically configured to:
in the image data, cover the target face data before image processing with the target face data after image processing to obtain target image data;
generate the live data stream of the live room based on the target image data.
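The covering operation can be sketched as a masked copy of the processed face region back into the frame. The bounding-box representation and mask semantics below are assumptions; the module description only requires the processed target face data to cover the original face data within the image data.

```python
import numpy as np

def cover_face(frame, processed_face, mask, box):
    # box = (top, left) of the face region inside the frame (assumed layout).
    top, left = box
    h, w = processed_face.shape[:2]
    region = frame[top:top + h, left:left + w]
    m = mask.astype(bool)
    # Overwrite only face pixels, leaving the rest of the frame untouched.
    region[m] = processed_face[m]
    return frame  # target image data, ready for stream generation

frame = np.zeros((8, 8, 3), dtype=np.uint8)
face = np.full((4, 4, 3), 200, dtype=np.uint8)
mask = np.ones((4, 4), dtype=np.uint8)
out = cover_face(frame, face, mask, box=(2, 2))
print(out[3, 3].tolist(), out[0, 0].tolist())
```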
The face processing device based on live streaming provided in this embodiment can be used to execute the face processing method based on live streaming provided by any of the above embodiments, and has corresponding functions and beneficial effects.
Embodiment Four
Fig. 4 is a structural schematic diagram of an electronic device provided by Embodiment Four of the present invention. As shown in Fig. 4, the electronic device includes a processor 40, a memory 41, a communication module 42, an input device 43 and an output device 44. The number of processors 40 in the electronic device may be one or more, and the processor may be configured to include a central processing unit and a graphics processor; the central processing unit includes the image acquisition module 31, the feature extraction module 32 and the data stream generation module 34, and the graphics processor includes the feature comparison module 33. In Fig. 4, one processor 40 is taken as an example. The processor 40, memory 41, communication module 42, input device 43 and output device 44 in the electronic device may be connected by a bus or in other ways; in Fig. 4, connection by a bus is taken as an example.
The memory 41, as a computer-readable storage medium, can be used to store software programs, computer-executable programs and modules, such as the modules corresponding to the face processing method based on live streaming in this embodiment (for example, the image acquisition module 31, feature extraction module 32, feature comparison module 33 and data stream generation module 34 in the face processing device based on live streaming). By running the software programs, instructions and modules stored in the memory 41, the processor 40 executes the various functional applications and data processing of the electronic device, thereby realizing the above face processing method based on live streaming.
The memory 41 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and the application programs required by at least one function, and the data storage area may store data created according to the use of the electronic device, and the like. In addition, the memory 41 may include a high-speed random access memory, and may also include a non-volatile memory such as at least one magnetic disk storage device, flash memory device or other non-volatile solid-state storage device. In some examples, the memory 41 may further include memories remotely located relative to the processor 40, and these remote memories may be connected to the electronic device through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks and combinations thereof.
The communication module 42 is configured to establish a connection with a display screen and realize data interaction with the display screen. The input device 43 can be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device.
The electronic device provided in this embodiment can execute the face processing method based on live streaming provided by any embodiment of the present invention, and has corresponding functions and beneficial effects.
Embodiment Five
Fig. 5 shows an electronic device provided by Embodiment Five of the present invention. As shown in Fig. 5, the electronic device includes a central processing unit 51 and a graphics processor 52; the central processing unit 51 includes the image acquisition module 31, the feature extraction module 32 and the data stream generation module 34, and the graphics processor 52 includes the feature comparison module 33;
the image acquisition module is configured to acquire image data when a live room is started;
the feature extraction module is configured to perform face detection on the image data to obtain target face data and the target face feature in the target face data;
the feature comparison module is configured to compare the target face feature with a preset standard face feature and perform image processing on the target face data according to the comparison result;
the data stream generation module is configured to generate the live data stream of the live room according to the target face data after image processing.
The electronic device provided in this embodiment can execute the face processing method based on live streaming provided by any embodiment of the present invention, and has corresponding functions and beneficial effects.
Embodiment Six
Embodiment Six of the present invention also provides a storage medium containing computer-executable instructions, where the computer-executable instructions, when executed by a computer processor, are used to execute a face processing method based on live streaming, the method comprising:
when a live room is started, acquiring image data;
performing face detection on the image data to obtain target face data and the target face feature in the target face data;
comparing the target face feature with a preset standard face feature, and performing image processing on the target face data according to the comparison result;
generating the live data stream of the live room according to the target face data after image processing.
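The four method steps can be read as a per-frame pipeline. The sketch below wires them together with placeholder callables; all names and callable interfaces are assumptions, intended only to show the data flow from image acquisition to the live data stream.

```python
def live_face_pipeline(capture, detect_face, extract_feature,
                       compare_and_process, encode_stream):
    frame = capture()                              # acquire image data
    face_region = detect_face(frame)               # face detection -> target face data
    feature = extract_feature(frame, face_region)  # target face feature
    # Compare against the standard face feature and apply image processing.
    processed = compare_and_process(frame, face_region, feature)
    return encode_stream(processed)                # generate the live data stream

out = live_face_pipeline(
    capture=lambda: "frame",
    detect_face=lambda f: (0, 0, 4, 4),
    extract_feature=lambda f, box: {"contour": []},
    compare_and_process=lambda f, box, feat: f + "+beautified",
    encode_stream=lambda f: f + "+encoded",
)
print(out)
```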
Of course, in the storage medium containing computer-executable instructions provided by the embodiment of the present invention, the computer-executable instructions are not limited to the method operations described above, and can also perform relevant operations in the face processing method based on live streaming provided by any embodiment of the present invention.
From the above description of the embodiments, those skilled in the art can clearly understand that the present invention can be realized by software together with the necessary general-purpose hardware, and of course can also be realized by hardware alone, although in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, can be embodied in the form of a software product. The computer software product can be stored in a computer-readable storage medium, such as a floppy disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a flash memory (FLASH), a hard disk or an optical disc of a computer, and includes several instructions to cause a computing electronic device (which may be a personal computer, a server, a network electronic device, etc.) to execute the methods described in the embodiments of the present invention.
It is worth noting that, in the above embodiment of the face processing device based on live streaming, the included units and modules are only divided according to functional logic but are not limited to the above division, as long as the corresponding functions can be realized; in addition, the specific names of the functional units are only for the convenience of distinguishing them from each other and are not intended to limit the protection scope of the present invention.
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will appreciate that the present invention is not limited to the specific embodiments described herein, and that various obvious changes, readjustments and substitutions can be made by those skilled in the art without departing from the protection scope of the present invention. Therefore, although the present invention has been described in further detail through the above embodiments, the present invention is not limited to the above embodiments; without departing from the inventive concept, it may also include more other equivalent embodiments, and the scope of the present invention is determined by the scope of the appended claims.
Claims (11)
1. A face processing method based on live streaming, characterized by comprising:
when a live room is started, acquiring image data;
performing face detection on the image data to obtain target face data and a target face feature in the target face data;
comparing the target face feature with a preset standard face feature, and performing image processing on the target face data according to the comparison result;
generating a live data stream of the live room according to the target face data after image processing.
2. The method according to claim 1, characterized in that the target face feature comprises a target face contour feature, and the standard face feature comprises a standard face contour feature in standard face data;
comparing the target face feature with the preset standard face feature and performing image processing on the target face data according to the comparison result comprises:
comparing the target face contour feature with the standard face contour feature, and performing image processing on the face contour in the target face data according to the comparison result.
3. The method according to claim 2, characterized in that comparing the target face contour feature with the standard face contour feature and performing image processing on the face contour in the target face data according to the comparison result comprises:
determining a first target bend tension coefficient from the gradient difference between a first gradient value of the target face contour feature and a second gradient value of the standard face contour feature;
performing image processing on the basis of the face contour in the target face data according to the first target bend tension coefficient.
4. The method according to claim 3, characterized in that performing image processing on the basis of the face contour in the target face data according to the first target bend tension coefficient specifically comprises:
determining an adjustment reference value;
selecting points to be adjusted from the face contour in the target face data, and determining the adjustment coefficient corresponding to each point to be adjusted, wherein the number of points to be adjusted is two or more;
taking each point to be adjusted as the center of a circle whose radius is the product of the adjustment reference value and the adjustment coefficient, determining an adjustment range;
performing image processing on the face contour in the target face data within the adjustment range according to the first target bend tension coefficient to obtain an intermediate face contour;
blending the intermediate face contours corresponding to the points to be adjusted to obtain the face contour after image processing.
5. The method according to any one of claims 1 to 4, characterized in that the target face feature comprises a target eye contour feature, and the standard face feature comprises a standard eye contour feature of a standard face;
comparing the target face feature with the preset standard face feature and performing image processing on the target face data according to the comparison result comprises:
comparing the target eye contour feature with the standard eye contour feature, and performing image processing on the eye contour in the target face data according to the comparison result.
6. The method according to claim 5, characterized in that comparing the target eye contour feature with the standard eye contour feature and performing image processing on the eye contour in the target face data according to the comparison result, so as to fit the eye contour in standard face data, comprises:
calculating the distances between points of the target eye contour feature to obtain the target eye size and target eye spacing in the target face data;
obtaining the standard eye size and standard eye spacing in the standard face data;
calculating the size difference between the target eye size and the standard eye size;
calculating the distance difference between the target eye spacing and the standard eye spacing;
applying the zoom stretch function to the size difference to obtain the target zoom coefficient;
applying the second bend tension function to the distance difference to obtain the second target bend tension coefficient;
performing image processing on the basis of the eye contour in the target face data according to the target zoom coefficient and the second target bend tension coefficient simultaneously.
7. The method according to claim 1, characterized in that generating the live data stream of the live room according to the target face data after image processing specifically comprises:
in the image data, covering the target face data before image processing with the target face data after image processing to obtain target image data;
generating the live data stream of the live room based on the target image data.
8. A face processing device based on live streaming, characterized by comprising:
an image acquisition module, configured to acquire image data when a live room is started;
a feature extraction module, configured to perform face detection on the image data to obtain target face data and a target face feature in the target face data;
a feature comparison module, configured to compare the target face feature with a preset standard face feature and perform image processing on the target face data according to the comparison result;
a data stream generation module, configured to generate a live data stream of the live room according to the target face data after image processing.
9. An electronic device, characterized by comprising:
one or more processors;
a memory for storing one or more programs;
wherein, when the one or more programs are executed by the one or more processors, the one or more processors realize the face processing method based on live streaming according to claim 1.
10. An electronic device, characterized in that the electronic device comprises a central processing unit and a graphics processor; the central processing unit comprises an image acquisition module, a feature extraction module and a data stream generation module, and the graphics processor comprises a feature comparison module;
the image acquisition module is configured to acquire image data when a live room is started;
the feature extraction module is configured to perform face detection on the image data to obtain target face data and the target face feature in the target face data;
the feature comparison module is configured to compare the target face feature with a preset standard face feature and perform image processing on the target face data according to the comparison result;
the data stream generation module is configured to generate the live data stream of the live room according to the target face data after image processing.
11. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, realizes the face processing method based on live streaming according to claim 1.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110213455.1A CN113329252B (en) | 2018-10-24 | 2018-10-24 | Live broadcast-based face processing method, device, equipment and storage medium |
CN201811241860.9A CN109302628B (en) | 2018-10-24 | 2018-10-24 | Live broadcast-based face processing method, device, equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811241860.9A CN109302628B (en) | 2018-10-24 | 2018-10-24 | Live broadcast-based face processing method, device, equipment and storage medium |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110213455.1A Division CN113329252B (en) | 2018-10-24 | 2018-10-24 | Live broadcast-based face processing method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109302628A true CN109302628A (en) | 2019-02-01 |
CN109302628B CN109302628B (en) | 2021-03-23 |
Family
ID=65158666
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110213455.1A Active CN113329252B (en) | 2018-10-24 | 2018-10-24 | Live broadcast-based face processing method, device, equipment and storage medium |
CN201811241860.9A Active CN109302628B (en) | 2018-10-24 | 2018-10-24 | Live broadcast-based face processing method, device, equipment and storage medium |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110213455.1A Active CN113329252B (en) | 2018-10-24 | 2018-10-24 | Live broadcast-based face processing method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN113329252B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110490828A (en) * | 2019-09-10 | 2019-11-22 | 广州华多网络科技有限公司 | Image processing method and system in net cast |
CN110706169A (en) * | 2019-09-26 | 2020-01-17 | 深圳市半冬科技有限公司 | Star portrait optimization method and device and storage device |
CN111402352A (en) * | 2020-03-11 | 2020-07-10 | 广州虎牙科技有限公司 | Face reconstruction method and device, computer equipment and storage medium |
CN111797754A (en) * | 2020-06-30 | 2020-10-20 | 上海掌门科技有限公司 | Image detection method, device, electronic equipment and medium |
CN112188234A (en) * | 2019-07-03 | 2021-01-05 | 广州虎牙科技有限公司 | Image processing and live broadcasting method and related device |
CN114760492A (en) * | 2022-04-22 | 2022-07-15 | 咪咕视讯科技有限公司 | Live broadcast special effect generation method, device and system and computer readable storage medium |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116109479B (en) * | 2023-04-17 | 2023-07-18 | 广州趣丸网络科技有限公司 | Face adjusting method, device, computer equipment and storage medium for virtual image |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130177219A1 (en) * | 2010-10-28 | 2013-07-11 | Telefonaktiebolaget L M Ericsson (Publ) | Face Data Acquirer, End User Video Conference Device, Server, Method, Computer Program And Computer Program Product For Extracting Face Data |
CN107680033A (en) * | 2017-09-08 | 2018-02-09 | 北京小米移动软件有限公司 | Image processing method and device |
CN107730445A (en) * | 2017-10-31 | 2018-02-23 | 广东欧珀移动通信有限公司 | Image processing method, device, storage medium and electronic equipment |
CN107818543A (en) * | 2017-11-09 | 2018-03-20 | 北京小米移动软件有限公司 | Image processing method and device |
CN108205795A (en) * | 2016-12-16 | 2018-06-26 | 北京酷我科技有限公司 | Face image processing process and system during a kind of live streaming |
CN108492247A (en) * | 2018-03-23 | 2018-09-04 | 成都品果科技有限公司 | A kind of eye make-up chart pasting method based on distortion of the mesh |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103605975B (en) * | 2013-11-28 | 2018-10-19 | 小米科技有限责任公司 | A kind of method, apparatus and terminal device of image procossing |
CN108021308A (en) * | 2016-10-28 | 2018-05-11 | 中兴通讯股份有限公司 | Image processing method, device and terminal |
CN107835367A (en) * | 2017-11-14 | 2018-03-23 | 维沃移动通信有限公司 | A kind of image processing method, device and mobile terminal |
2018
- 2018-10-24 CN CN202110213455.1A patent/CN113329252B/en active Active
- 2018-10-24 CN CN201811241860.9A patent/CN109302628B/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130177219A1 (en) * | 2010-10-28 | 2013-07-11 | Telefonaktiebolaget L M Ericsson (Publ) | Face Data Acquirer, End User Video Conference Device, Server, Method, Computer Program And Computer Program Product For Extracting Face Data |
CN108205795A (en) * | 2016-12-16 | 2018-06-26 | 北京酷我科技有限公司 | Face image processing process and system during a kind of live streaming |
CN107680033A (en) * | 2017-09-08 | 2018-02-09 | 北京小米移动软件有限公司 | Image processing method and device |
CN107730445A (en) * | 2017-10-31 | 2018-02-23 | 广东欧珀移动通信有限公司 | Image processing method, device, storage medium and electronic equipment |
CN107818543A (en) * | 2017-11-09 | 2018-03-20 | 北京小米移动软件有限公司 | Image processing method and device |
CN108492247A (en) * | 2018-03-23 | 2018-09-04 | 成都品果科技有限公司 | A kind of eye make-up chart pasting method based on distortion of the mesh |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112188234A (en) * | 2019-07-03 | 2021-01-05 | 广州虎牙科技有限公司 | Image processing and live broadcasting method and related device |
CN110490828A (en) * | 2019-09-10 | 2019-11-22 | 广州华多网络科技有限公司 | Image processing method and system in net cast |
CN110490828B (en) * | 2019-09-10 | 2022-07-08 | 广州方硅信息技术有限公司 | Image processing method and system in video live broadcast |
CN110706169A (en) * | 2019-09-26 | 2020-01-17 | 深圳市半冬科技有限公司 | Star portrait optimization method and device and storage device |
CN111402352A (en) * | 2020-03-11 | 2020-07-10 | 广州虎牙科技有限公司 | Face reconstruction method and device, computer equipment and storage medium |
CN111402352B (en) * | 2020-03-11 | 2024-03-05 | 广州虎牙科技有限公司 | Face reconstruction method, device, computer equipment and storage medium |
CN111797754A (en) * | 2020-06-30 | 2020-10-20 | 上海掌门科技有限公司 | Image detection method, device, electronic equipment and medium |
CN114760492A (en) * | 2022-04-22 | 2022-07-15 | 咪咕视讯科技有限公司 | Live broadcast special effect generation method, device and system and computer readable storage medium |
CN114760492B (en) * | 2022-04-22 | 2023-10-20 | 咪咕视讯科技有限公司 | Live special effect generation method, device and system and computer readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN109302628B (en) | 2021-03-23 |
CN113329252A (en) | 2021-08-31 |
CN113329252B (en) | 2023-01-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109302628A (en) | A kind of face processing method based on live streaming, device, equipment and storage medium | |
CN110536151B (en) | Virtual gift special effect synthesis method and device and live broadcast system | |
CN110475150B (en) | Rendering method and device for special effect of virtual gift and live broadcast system | |
CN110493630B (en) | Processing method and device for special effect of virtual gift and live broadcast system | |
CN105187810B (en) | A kind of auto white balance method and electronic medium device based on face color character | |
CN105608715B (en) | online group photo method and system | |
CN109040576A (en) | The method and system of camera control and image procossing with the window based on multiframe counted for image data | |
US20130242127A1 (en) | Image creating device and image creating method | |
CN106303354B (en) | Face special effect recommendation method and electronic equipment | |
CN110418146B (en) | Face changing method, storage medium, electronic device and system applied to live scene | |
CN108111911B (en) | Video data real-time processing method and device based on self-adaptive tracking frame segmentation | |
CN111147873A (en) | Virtual image live broadcasting method and system based on 5G communication | |
US9030571B2 (en) | Abstract camera pipeline for uniform cross-device control of image capture and processing | |
CN107895161B (en) | Real-time attitude identification method and device based on video data and computing equipment | |
CN109274883A (en) | Posture correction method, device, terminal and storage medium | |
CN114979689B (en) | Multi-machine-position live broadcast guide method, equipment and medium | |
CN102542300B (en) | Method for automatically recognizing human body positions in somatic game and display terminal | |
CN111583415A (en) | Information processing method and device and electronic equipment | |
CN106357979A (en) | Photographing method, device and terminal | |
CN116188296A (en) | Image optimization method and device, equipment, medium and product thereof | |
CN112153240B (en) | Method and device for adjusting image quality and readable storage medium | |
CN109325926B (en) | Automatic filter implementation method, storage medium, device and system | |
US20170163852A1 (en) | Method and electronic device for dynamically adjusting gamma parameter | |
CN110971924B (en) | Method, device, storage medium and system for beautifying in live broadcast process | |
CN116681613A (en) | Illumination-imitating enhancement method, device, medium and equipment for face key point detection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |