CN109325908A - Image processing method and device, electronic equipment and storage medium - Google Patents

Image processing method and device, electronic equipment and storage medium

Info

Publication number
CN109325908A
CN109325908A (application CN201811278366.XA)
Authority
CN
China
Prior art keywords
image
location information
key point
photographed image
Prior art date
Legal status
Granted
Application number
CN201811278366.XA
Other languages
Chinese (zh)
Other versions
CN109325908B (English)
Inventor
荆锐
Current Assignee
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd
Priority to CN201811278366.XA
Publication of CN109325908A
Application granted
Publication of CN109325908B
Status: Active


Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 3/00 Geometric image transformations in the plane of the image
                    • G06T 3/14 Transformations for image registration, e.g. adjusting or mapping for alignment of images
                        • G06T 3/147 Transformations for image registration using affine transformations
                    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
                • G06T 5/00 Image enhancement or restoration
            • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
                    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
                        • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
                            • G06V 40/161 Detection; Localisation; Normalisation
    • H ELECTRICITY
        • H04 ELECTRIC COMMUNICATION TECHNIQUE
            • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
                    • H04N 23/80 Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to an image processing method and apparatus, an electronic device, and a storage medium. The method includes: when a photographing instruction is received, obtaining first location information of each key point in a preview image and obtaining a photographed image; obtaining a frontal image of a target object in the photographed image by using the first location information of each key point, and determining second location information of each key point of the frontal image; and determining whether the second location information satisfies a preset condition; if so, adding a virtual effect at a position in the photographed image corresponding to the second location information that satisfies the preset condition; otherwise, adding the virtual effect according to third location information of each key point identified from the photographed image. Embodiments of the present disclosure can accurately render virtual effects on the photographed image.

Description

Image processing method and device, electronic equipment and storage medium
Technical field
The present disclosure relates to the field of image processing, and in particular to an image processing method and apparatus, an electronic device, and a storage medium.
Background technique
At present, virtual reality and augmented reality technologies have won the favor of many users because of their strong interactivity. For example, an electronic device with a camera function can add virtual effects while taking a photograph, such as wearing a selected hat or glasses, facial deformation, face slimming, face beautification, body slimming, and wearing accessories. However, to achieve such virtual effects, most current electronic devices simply save the current preview frame when a photograph is taken; the resolution of the preview frame is low, so the resulting virtual effect is inaccurate.
Summary of the invention
Embodiments of the present disclosure provide an image processing method and apparatus, an electronic device, and a storage medium that can accurately identify the positions of key points in an image so as to accurately realize virtual effects.
According to one aspect of the present disclosure, an image processing method is provided, including:
when a photographing instruction is received, obtaining first location information of each key point in a preview image and obtaining a photographed image;
obtaining a frontal image of a target object in the photographed image by using the first location information of each key point, and determining second location information of each key point of the frontal image; and
determining whether the second location information satisfies a preset condition; if so, adding a virtual effect at a position in the photographed image corresponding to the second location information that satisfies the preset condition; otherwise, adding the virtual effect according to third location information of each key point identified from the photographed image.
In a possible implementation, obtaining the frontal image of the target object in the photographed image by using the first location information of each key point and determining the second location information of each key point of the frontal image includes:
converting the first location information of each key point into fourth location information adapted to an image parameter of the photographed image, according to the image parameter of the photographed image; and
obtaining the frontal image of the target object in the photographed image according to the fourth location information of each key point adapted to the image parameter of the photographed image, and determining the second location information of each key point of the frontal image.
In a possible implementation, converting the first location information of each key point into the fourth location information adapted to the image parameter of the photographed image according to the image parameter of the photographed image includes at least one of the following:
scaling the first location information of each key point in the preview image according to the resolution of the photographed image to obtain each piece of fourth location information;
offsetting the first location information of each key point in the preview image according to an offset between the photographed image and the corresponding key point in the preview image to obtain each piece of fourth location information; and
scaling the first location information of each key point in the preview image according to the resolution of the photographed image, and offsetting the scaled first location information of each key point according to an offset between the scaled first location information of the key point and the location information of the corresponding key point on the photographed image, to obtain the fourth location information.
In a possible implementation, obtaining the frontal image of the target object in the photographed image according to the fourth location information of each key point adapted to the image parameter of the photographed image includes:
determining an image region of the target object in the photographed image; and
obtaining the frontal image of the target object of the photographed image through an affine transformation by using the fourth location information of each key point and a standard image, where the standard image includes at least one of a standard face image, a standard limb image, and a standard pose image.
In a possible implementation, obtaining the frontal image of the target object of the photographed image through the affine transformation by using the fourth location information of each key point and the standard image includes:
comparing the fourth location information of each key point with standard location information of each key point in the standard image to obtain a correction matrix of the photographed image, where the correction matrix includes an adjustment parameter for the position of each key point; and
correcting the target object of the photographed image based on the correction matrix to obtain the frontal image of the target object of the photographed image.
In a possible implementation, adding the virtual effect at the position in the photographed image corresponding to the second location information that satisfies the preset condition includes:
if the second location information satisfies the preset condition, determining, according to the correction matrix, fifth location information in the photographed image corresponding to the second location information; and
adding a virtual feature according to the fifth location information of each key point.
In a possible implementation, determining whether the second location information satisfies the preset condition includes:
obtaining a confidence of the second location information; and
determining that the second location information satisfies the preset condition when the confidence of the second location information is greater than or equal to a confidence threshold.
In a possible implementation, determining whether the second location information satisfies the preset condition further includes:
when the confidence of the second location information is less than the confidence threshold, repeating the step of obtaining the frontal image of the target object in the photographed image according to the fourth location information of each key point adapted to the image parameter of the photographed image and identifying the second location information of each key point of the frontal image; and
if the confidence of the second location information determined after the number of repetitions reaches a repetition threshold is still less than the confidence threshold, determining that the second location information does not satisfy the preset condition; and if, during the repetition, the confidence of the obtained second location information is determined to be greater than or equal to the confidence threshold, determining that the second location information whose confidence is greater than or equal to the confidence threshold satisfies the preset condition.
In a possible implementation, the method further includes:
determining a virtual feature to be added based on received selection information, so as to add the virtual feature according to the second location information or to add the virtual feature according to the third location information.
In a possible implementation, the method further includes:
obtaining the first location information of each key point in the preview image and the third location information of each key point in the photographed image by using a neural network model.
According to a second aspect of the present disclosure, an image processing apparatus is provided, including:
an obtaining module, configured to obtain, when a photographing instruction is received, first location information of each key point in a preview image and a photographed image;
a determining module, configured to obtain a frontal image of a target object in the photographed image by using the first location information of each key point, and determine second location information of each key point of the frontal image; and
a virtual module, configured to determine whether the second location information satisfies a preset condition; if so, add a virtual effect at a position in the photographed image corresponding to the second location information that satisfies the preset condition; otherwise, add the virtual effect according to third location information of each key point identified from the photographed image.
In a possible implementation, the determining module includes:
a converting unit, configured to convert the first location information of each key point into fourth location information adapted to an image parameter of the photographed image, according to the image parameter of the photographed image; and
a key point determining unit, configured to obtain the frontal image of the target object in the photographed image according to the fourth location information of each key point adapted to the image parameter of the photographed image, and determine the second location information of each key point of the frontal image.
In a possible implementation, the converting unit is further configured to convert the first location information of each key point into the fourth location information adapted to the image parameter of the photographed image, according to the image parameter of the photographed image, by at least one of the following:
scaling the first location information of each key point in the preview image according to the resolution of the photographed image to obtain each piece of fourth location information;
offsetting the first location information of each key point in the preview image according to an offset between the photographed image and the corresponding key point in the preview image to obtain each piece of fourth location information; and
scaling the first location information of each key point in the preview image according to the resolution of the photographed image, and offsetting the scaled first location information of each key point according to an offset between the scaled first location information of the key point and the location information of the corresponding key point on the photographed image, to obtain the fourth location information.
In a possible implementation, the key point determining unit is further configured to determine an image region of the target object in the photographed image, and obtain the frontal image of the target object of the photographed image through an affine transformation by using the fourth location information of each key point and a standard image, where the standard image includes at least one of a standard face image, a standard limb image, and a standard pose image.
In a possible implementation, the key point determining unit is further configured to compare the fourth location information of each key point with standard location information of each key point in the standard image to obtain a correction matrix of the photographed image, where the correction matrix includes an adjustment parameter for the position of each key point; and
correct the target object of the photographed image based on the correction matrix to obtain the frontal image of the target object of the photographed image.
In a possible implementation, the virtual module is further configured to, when the second location information satisfies the preset condition, determine, according to the correction matrix, fifth location information in the photographed image corresponding to the second location information, and add a virtual feature according to the fifth location information of each key point.
In a possible implementation, the virtual module is further configured to obtain a confidence of the second location information, and determine that the second location information satisfies the preset condition when the confidence of the second location information is greater than or equal to a confidence threshold.
In a possible implementation, the virtual module is further configured to, when the confidence of the second location information is less than the confidence threshold, repeat the step of obtaining the frontal image of the target object in the photographed image according to the fourth location information of each key point adapted to the image parameter of the photographed image and identifying the second location information of each key point of the frontal image; and
when the number of repetitions reaches a repetition threshold and the confidence of the determined second location information is still less than the confidence threshold, determine that the second location information does not satisfy the preset condition; and when, during the repetition, the confidence of the obtained second location information is determined to be greater than or equal to the confidence threshold, determine that the second location information whose confidence is greater than or equal to the confidence threshold satisfies the preset condition.
In a possible implementation, the apparatus further includes:
a receiving module, configured to receive selection information; and
the virtual module is further configured to determine a virtual feature to be added based on the received selection information, and add the virtual feature according to the second location information, or add the virtual feature according to the third location information.
In a possible implementation, the obtaining module is further configured to obtain the first location information of each key point in the preview image and the third location information of each key point in the photographed image by using a neural network model.
According to a third aspect of the present disclosure, an electronic device is provided, including:
a processor; and
a memory for storing processor-executable instructions,
wherein the processor is configured to perform the method of any one of the first aspect.
According to a fourth aspect of the present disclosure, a computer-readable storage medium is provided, on which computer program instructions are stored; when the computer program instructions are executed by a processor, the method of any one of the first aspect is implemented.
Embodiments of the present disclosure enable tracking on the preview image and use the tracking result of the preview image to add virtual features to the photographed image, so that virtual effects can be rendered accurately. Specifically, the location information of each key point in the preview image may first be converted according to the image parameter of the photographed image, a frontal image corresponding to the target object in the photographed image is obtained, and it is determined whether the second location information of each key point in the frontal image satisfies a preset condition. If the preset condition is satisfied, the virtual effect is rendered in the photographed image according to the second location information determined by tracking the preview image; if the second location information does not satisfy the preset condition, the virtual effect can be rendered directly according to the positions of the key points identified in the photographed image. Embodiments of the present disclosure can therefore ensure that the added virtual features correspond accurately to the subject. Even if, at the moment of capture, movement of the subject prevents features from being identified accurately so that the corresponding virtual effect cannot be added precisely, the key points can still be accurately tracked and identified through the preview image, reducing the probability that a virtual effect cannot be added or cannot be added effectively.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and are not intended to limit the present disclosure.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Detailed description of the invention
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure.
Fig. 1 shows a flow chart of an image processing method according to an embodiment of the present disclosure;
Fig. 2 shows a flow chart of step S200 in the image processing method according to an embodiment of the present disclosure;
Fig. 3 shows a schematic diagram of the process of step S201 in the image processing method according to an embodiment of the present disclosure;
Fig. 4 shows a flow chart of step S202 in the image processing method according to an embodiment of the present disclosure;
Fig. 5 shows a flow chart of step S2022 in the image processing method according to an embodiment of the present disclosure;
Fig. 6 shows a before-and-after comparison of correcting the facial region of the target object according to an embodiment of the present disclosure;
Fig. 7 shows a flow chart of determining whether the second location information satisfies the preset condition according to an embodiment of the present disclosure;
Fig. 8 shows a flow chart of adding a virtual effect according to the second location information in step S300 of the image processing method according to an embodiment of the present disclosure;
Fig. 9 shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure;
Fig. 10 shows a block diagram of an electronic device 800 according to an embodiment of the present disclosure;
Fig. 11 shows a block diagram of an electronic device 1900 according to an embodiment of the present disclosure.
Specific embodiment
Various exemplary embodiments, features, and aspects of the present disclosure are described in detail below with reference to the accompanying drawings. The same reference numerals in the drawings denote elements with the same or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless otherwise specified.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration". Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may indicate the following three cases: A exists alone, both A and B exist, and B exists alone. In addition, the term "at least one" herein indicates any one of multiple items or any combination of at least two of multiple items; for example, including at least one of A, B, and C may indicate including any one or more elements selected from the set consisting of A, B, and C.
In addition, numerous specific details are given in the following detailed description in order to better explain the present disclosure. Those skilled in the art will understand that the present disclosure can be implemented without certain specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art are not described in detail, so as to highlight the gist of the present disclosure.
It can be understood that the method embodiments mentioned in the present disclosure can be combined with one another to form combined embodiments without departing from their principles and logic; for the sake of brevity, details are not repeated herein.
Fig. 1 shows a flow chart of an image processing method according to an embodiment of the present disclosure. The image processing method may include:
S100: when a photographing instruction is received, obtaining first location information of each key point in a preview image and obtaining a photographed image;
S200: obtaining a frontal image of a target object in the photographed image by using the first location information of each key point, and determining second location information of each key point of the frontal image; and
S300: determining whether the second location information satisfies a preset condition; if so, adding a virtual effect at a position in the photographed image corresponding to the second location information that satisfies the preset condition; otherwise, adding the virtual effect according to third location information of each key point determined from the photographed image.
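Before turning to the individual steps, the following Python sketch outlines how steps S100-S300 could fit together. It is only an illustrative skeleton under assumed interfaces: detect_keypoints, track_on_photo, and render_effect are hypothetical placeholders for the key point detection, frontal-image tracking, and effect-rendering components described in the remainder of this description.

```python
def process_photo(preview_frame, photo_frame, detect_keypoints, track_on_photo,
                  render_effect, confidence_threshold=0.5):
    """Skeleton of steps S100-S300; the heavy lifting is injected as callables.

    detect_keypoints(image) -> key point coordinates (first or third location information)
    track_on_photo(first_positions, preview_frame, photo_frame) ->
        (key point positions mapped onto the photographed image, confidence of the
        second location information determined on the frontal image)
    render_effect(image, positions) -> image with the virtual effect added
    """
    # S100: first location information from the last preview frame before capture
    first_positions = detect_keypoints(preview_frame)

    # S200: obtain the frontal image of the target object from the preview key points
    # and determine the second location information together with its confidence
    tracked_positions, confidence = track_on_photo(first_positions, preview_frame,
                                                   photo_frame)

    # S300: use the tracked positions when the preset condition is met, otherwise
    # fall back to key points detected directly on the photographed image
    if confidence >= confidence_threshold:
        return render_effect(photo_frame, tracked_positions)
    return render_effect(photo_frame, detect_keypoints(photo_frame))
```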
The image processing method provided by the embodiments of the present disclosure can be applied to an electronic device capable of photographing or video capture, and in particular to an electronic device that adds virtual features to realize virtual effects when photographing or capturing video. The electronic device may include a mobile phone, a camera, a beautification device, an AR camera device, and the like, but the embodiments of the present disclosure are not limited thereto. In addition, the embodiments of the present disclosure can be applied to capturing images of people: virtual features can be added through recognition of facial features, virtual effects can also be applied to hand or limb key points, and the embodiments can be used in scenarios such as body slimming and face beautification. As long as a person is being photographed, virtual effects can be realized using the embodiments of the present disclosure.
In the embodiments of the present disclosure, the image being captured can be previewed in real time during the photographing operation, and the user can adjust photographing parameters, such as angle, brightness, and exposure time, according to the state of the preview image. The user can also select the virtual effect to be presented and preview the virtual effect in real time through the preview image. The user can therefore adjust the photographing parameters and the selected virtual effect in the preview image. On this basis, when a photographing instruction is received, the embodiments of the present disclosure obtain the preview image of the frame immediately before the photographing operation is performed, as well as the photographed image that is the result of photographing. This preview image and the photographed image are the most similar to each other, so the embodiments of the present disclosure can determine the positions of the key points in the photographed image in combination with the preview image, with higher precision. In addition, the image processing method in the embodiments of the present disclosure can be applied when the photographed subject is a person.
The process of obtaining the preview image is described in detail here. Before photographing, the user can select the virtual effect to be realized, for example a virtual feature to be added, the virtual scene, face beautification, body shaping, body slimming, and other effects, and can also adjust photographing parameters such as brightness and angle, all of which are presented in preview form before the photographing operation. When the user is satisfied with the preview effect, the photographing operation is performed. The embodiments of the present disclosure may cache the preview image in real time during preview (before the photographing instruction is received), and when the photographing instruction is received, obtain the last preview image before the photographed image is obtained, and use this preview image to optimize the key points of the obtained photographed image. Since the resolution and image size of the preview image and of the photographed image clearly differ, after the preview image and the photographed image are obtained, the location information of the key points in the preview image can be adaptively adjusted based on the image parameters of the photographed image, so that the adjusted positions of the key points in the preview image are adapted to the image parameters of the photographed image. The image parameters may include the resolution and the size of the image, but the embodiments of the present disclosure are not limited thereto.
When the photographing instruction is received, the first location information of each key point in the preview image of the frame before the photographed image is obtained can be identified, and the photographing operation can be performed according to the photographing instruction to obtain the photographed image. The key points may be preconfigured key objects; for example, a face image may include key points such as the eyes, eyebrows, nose, ears, and lips, and a hand image may include key points such as the fingers and the palm. Different key point positions can be identified for different photographed images, for example the shoulders, neck, hips, and wrists, which the present disclosure does not specifically limit. The preview image may be input into a neural network model, and the first location information of each key point may be identified by the neural network model; the neural network model may include a convolutional neural network model. In addition, the position of a key point may be expressed in the form of coordinates, such as two-dimensional or three-dimensional coordinates, which can be set as required; the embodiments of the present disclosure are not limited in this respect.
In addition, the received photographing instruction may include information on the virtual effect to be added. When the user triggers the photographing button, the electronic device can also obtain the information on the currently selected virtual effect and generate the photographing instruction based on the information on the virtual effect. Thus, when step S300 is performed, the virtual effect can be added according to the determined location information of the key points.
After the preview image and the photographed image are obtained, step S200 can be performed. Fig. 2 shows a flow chart of step S200 in the image processing method according to an embodiment of the present disclosure. Obtaining the frontal image of the target object in the photographed image by using the first location information of each key point and determining the second location information of each key point of the frontal image (step S200) may include:
S201: converting the first location information of each key point into fourth location information adapted to the image parameter of the photographed image, according to the image parameter of the photographed image; and
S202: obtaining the frontal image of the target object in the photographed image according to the fourth location information of each key point adapted to the image parameter of the photographed image, and determining the second location information of each key point of the frontal image.
In the embodiments of the present disclosure, the parameters of the image captured in the preview state (the preview image) and of the photographed image may be different; for example, the sizes and resolutions of the images may differ. In this case the parameters of the preview image and of the photographed image need to be adapted to each other in order to associate the positions of the corresponding key points. Therefore, when the image parameters of the photographed image and of the preview image differ, the first location information of each key point in the preview image obtained in step S100 must first be converted, according to the image parameters of the photographed image, into fourth location information adapted to the photographed image. Depending on the difference between the parameters of the photographed image and of the preview image, step S201 of the embodiments of the present disclosure may include at least one of the following:
a) scaling the first location information of each key point in the preview image according to the resolution of the photographed image to obtain each piece of fourth location information.
When the resolutions of the photographed image and of the preview image are in a multiple relationship, the first location information can be converted in manner a). For example, when the resolution of the preview image is A*B and the resolution of the photographed image is KA*KB, the first location information of each key point of the preview image can be enlarged or reduced: when K is greater than 1, the first location information is enlarged, and when K is less than 1, the first location information is reduced. If the coordinates of the first location information are (x, y), the coordinates of the fourth location information after scaling are (Kx, Ky).
b) offsetting the first location information of each key point in the preview image according to the offset between the photographed image and the corresponding key point in the preview image to obtain each piece of fourth location information.
When the photographed image and the preview image have the same length or the same width, each piece of first location information can be converted in manner b). When the lengths of the photographed image and of the preview image are the same, each piece of first location information can be offset according to the offset in the width direction; when the widths are the same, the first location information can be offset according to the offset in the length direction. For example, when the resolution of the preview image is A*B and the resolution of the photographed image is A*(B+C), the lengths of the two images are the same and their widths differ, and C is the offset between the two, that is, the offset of the corresponding key points; the first location information can then be offset by C, and the coordinates of the resulting fourth location information are (x, y+C). When the resolution of the preview image is A*B and the resolution of the photographed image is (A+C)*B, the widths of the two images are the same and their lengths differ, C is the offset between the two, that is, the offset of the corresponding key points, and the first location information can be offset by C, giving fourth location information with coordinates (x+C, y). C can be positive or negative, and the corresponding conversion is performed according to the situation.
c) scaling the first location information of each key point in the preview image according to the resolution of the photographed image, and offsetting the scaled first location information of each key point according to the offset between the scaled first location information of the key point and the location information of the corresponding key point on the photographed image, to obtain the fourth location information.
When both the length and the width of the photographed image differ from those of the preview image, the fourth location information can be determined in manner c). For example, the resolution of the photographed image is generally much higher than that of the preview image, so each piece of first location information can be enlarged so that the width and/or length corresponding to the enlarged first location information matches that of the photographed image. Fig. 3 shows a schematic diagram of the process of step S201 in the image processing method according to an embodiment of the present disclosure.
The embodiments of the present disclosure can determine the scaling factor by a first formula, which can be expressed as scale = min(w2/w1, h2/h1), where scale is the scaling factor, (w2, h2) is the resolution (size) of the photographed image, (w1, h1) is the resolution (size) of the preview image, and min takes the smaller value. That is, the scaling factor is the smaller of the first ratio between the length of the photographed image and the length of the preview image and the second ratio between the width of the photographed image and the width of the preview image. In addition, scaling the preview image in the embodiments of the present disclosure includes scaling the coordinates of the key points in the preview image, that is, scaling the coordinates of the first location information of each key point in the preview image by the scaling factor. The number of key points in the preview image can be chosen so as to preserve the features of the preview image without distorting it; for example, the number of key points may be several tens, which the embodiments of the present disclosure do not limit.
As shown in Fig. 3, the preview image obtained by the embodiments of the present disclosure may be A, where the resolution of A may be 720*1280 and the resolution of the photographed image may be 3000*4000. The first location information of each key point in the preview image is available (when the photographing instruction is received, or from the preview image), so in step S201 the first location information of each key point in preview image A can be scaled according to the resolution of the photographed image. Specifically, the first coordinates (first location information) of each key point on the preview image can be adjusted according to the resolution of the photographed image while keeping the relative positions of the key points unchanged, thereby scaling the first location information. For example, the scaling factor can be determined as min(3000/720, 4000/1280) = 3.125, so the first coordinates of each key point in the preview image can be enlarged by a factor of 3.125. If the first coordinates of a key point on the preview image are (x, y), the coordinates of the key point after enlargement are (3.125*x, 3.125*y), which is equivalent to mapping the key points of preview image A onto an image B with a resolution of 2250*4000, giving the first location information of each key point after scaling (B). The above example is only an illustration of scaling the preview image and is not a specific limitation on the embodiments of the present disclosure.
After the preview image is scaled, the scaled first location information of the preview image can also be offset according to the size or specification of the photographed image. For example, after the above scaling, the aspect ratio of the scaled preview image may differ from that of the photographed image; in that case the first location information of each key point after scaling can be offset so as to match the photographed image. For example, corresponding key points can be determined in the preview image and in the photographed image respectively, and the position offset between the two key points is the offset used in the offset processing. For example, the offset processing can be performed according to the offset between the first key point in the upper-left corner of the preview image and the first key point in the upper-left corner of the photographed image, or according to the offset between the key point at the center of the preview image and the key point at the center of the photographed image. That is, the offset used in the offset processing in the embodiments of the present disclosure can be the position offset between two corresponding key points of the preview image and of the photographed image; based on the position offset between these two reference points, the first location information of each key point in the scaled preview image can be shifted by that position offset to obtain the fourth location information. In the embodiments of the present disclosure, the offset can be determined by a second formula. When the aspect ratio of the photographed image is greater than that of the scaled preview image, an offset in the w direction is needed: offset = (w2 - w1*scale)/2, where offset is the offset, w2 and w1 are the w dimensions of the photographed image and of the preview image respectively, and scale is the scaling factor. When the aspect ratio of the photographed image is less than that of the scaled preview image, an offset in the h direction is needed: offset = (h2 - h1*scale)/2, where h2 and h1 are the h dimensions of the photographed image and of the preview image respectively. By performing the offset operation on each piece of scaled first location information in the above manner, fourth location information matching the image parameters of the photographed image is obtained. Each piece of fourth location information can then be mapped onto the photographed image, and the virtual effect for the photographed image can be displayed.
As shown in Fig. 3, after the scaled first location information is obtained, it is determined that an offset in the w direction is needed, and the offset can be determined as (3000 - 720*3.125)/2 = 375. The position coordinates of each key point in B are (3.125*x, 3.125*y), and the fourth location information (position coordinates) of the corresponding key point in C obtained after the offset operation is (3.125*x + 375, 3.125*y). At this point the fourth location information of each key point in C is adapted to the parameters of the photographed image.
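As a concrete illustration of the conversion described above, the following Python sketch (an assumed implementation of the first and second formulas, not code from the patent) maps preview-image key point coordinates to the photographed image using the scaling factor and the centering offset; whichever of the two offsets is not needed is simply zero.

```python
import numpy as np

def preview_to_photo(points, preview_size, photo_size):
    """Map key point coordinates from the preview image to the photographed image.

    points: (N, 2) array of (x, y) first-location coordinates in the preview image.
    preview_size: (w1, h1) of the preview image; photo_size: (w2, h2) of the photo.
    Returns the fourth location information adapted to the photographed image.
    """
    w1, h1 = preview_size
    w2, h2 = photo_size
    scale = min(w2 / w1, h2 / h1)          # first formula: uniform scaling factor

    # second formula: center the scaled preview inside the photographed image
    dx = (w2 - w1 * scale) / 2.0           # offset in the w direction (0 if unneeded)
    dy = (h2 - h1 * scale) / 2.0           # offset in the h direction (0 if unneeded)

    pts = np.asarray(points, dtype=np.float64)
    return pts * scale + np.array([dx, dy])

# Worked example from the description: 720x1280 preview, 3000x4000 photo.
# scale = min(3000/720, 4000/1280) = 3.125, offset = (3000 - 720*3.125)/2 = 375.
print(preview_to_photo([(100.0, 200.0)], (720, 1280), (3000, 4000)))
# -> [[687.5 625. ]]
```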
Through the above configuration, the matching of the first location information of each key point in the preview image to the image parameters of the photographed image can be completed, and the obtained fourth location information corresponds to the resolution of the photographed image. Because the embodiments of the present disclosure only scale and offset the coordinates of the key points in the preview image, the processing is fast and takes little time. In addition, after the fourth location information is obtained, it can be further determined whether the fourth location information can be used to realize the virtual effect for the photographed image.
After the first location information is converted in step S201 into the fourth location information adapted to the photographed image, the frontal image for the photographed image can be obtained according to the fourth location information, and the second location information of each key point of the frontal image can be determined.
Fig. 4 shows a flow chart of step S202 in the image processing method according to an embodiment of the present disclosure. Obtaining the frontal image of the target object in the photographed image according to the fourth location information of each key point adapted to the image parameter of the photographed image and determining the second location information of each key point of the frontal image (step S202) may include:
S2021: determining an image region of the target object in the photographed image; and
S2022: obtaining the frontal image of the target object in the photographed image through an affine transformation by using the fourth location information of each key point and a standard image, where the standard image includes at least one of a standard face image, a standard gesture image, and a standard pose image.
First, the target object in the embodiments of the present disclosure may be the part of the captured image at which the virtual effect is to be displayed; it may include only the part for which the virtual effect is realized, or it may include the entire user. For example, when face slimming, face beautification, or the addition of a virtual facial feature is performed, the target object may be the face, and the image region of the target object determined in step S2021 is the facial region in the photographed image. Alternatively, when a virtual effect on a body part such as a gesture is performed, the hand region may be taken as the target object, and the image region of the target object determined in step S2021 may be the hand region in the photographed image. That is, in the embodiments of the present disclosure, the image region including the target object can be selected.
In step S2022, the target object in the above image region can be processed in combination with the fourth location information and the standard image to obtain the frontal image of the target object. For example, when the target object is a face, the captured image may show the side of the user's face, or the user may be tilting the head up or down, so the photographed image contains only part of the face or the face is at a skewed angle, and the captured target object is not a standard frontal view. In this case, the frontal image of the target object in the photographed image can be obtained through an affine transformation in combination with the fourth location information and the standard face image. Alternatively, when the target object is a hand, the captured hand may include only part of a hand image; the hand image in the photographed image can then be adjusted into a frontal hand image through an affine transformation according to the obtained fourth location information and a standard limb image (for example, a standard hand image).
The embodiments of the present disclosure can obtain the frontal image of the target object by means of an affine transformation. Fig. 5 shows a flow chart of step S2022 in the image processing method according to an embodiment of the present disclosure. Obtaining the frontal image of the target object of the photographed image through the affine transformation by using the fourth location information of each key point and the standard image (step S2022) may include:
S20221: comparing the fourth location information of each key point with the standard location information of each key point in the standard image to obtain a correction matrix of the photographed image, where the correction matrix includes an adjustment parameter for the position of each key point; and
S20222: correcting the target object of the photographed image based on the correction matrix to obtain the frontal image of the target object of the photographed image.
The correction matrix can be obtained from the fourth location information and the standard image by means of an affine transformation, and can be used to generate the frontal image. For example, when the target object is a face, the fourth location information of each key point can be compared with the standard location information of the key points of a standard face; for example, the differences between the fourth location information of each key point and the coordinate positions of the corresponding key points on the standard face image can form the correction matrix. The correction matrix can then be used to correct the image region of the target object in the photographed image into a frontal face image, that is, to normalize the face image. The principle of the affine transformation is not described in detail here; those skilled in the art can obtain the correction matrix used for the frontal image through an affine transformation using existing means.
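As a minimal numerical sketch of one way such a correction matrix could be estimated, the following Python code fits a 2x3 affine transform mapping the fourth-location key points onto the standard image's key point positions by least squares. The specific solver is an assumption; the disclosure only requires that the correction be obtained by way of an affine transformation. In practice the full image region would then be warped with this matrix (for example with OpenCV's warpAffine) to obtain the frontal image of the target object.

```python
import numpy as np

def estimate_correction_matrix(fourth_positions, standard_positions):
    """Least-squares 2x3 affine matrix M such that standard ~= M @ [x, y, 1]^T.

    fourth_positions, standard_positions: (N, 2) arrays of matching key points
    (N >= 3, not collinear). The result plays the role of the correction matrix.
    """
    src = np.asarray(fourth_positions, dtype=np.float64)
    dst = np.asarray(standard_positions, dtype=np.float64)
    A = np.hstack([src, np.ones((src.shape[0], 1))])   # (N, 3) homogeneous points
    X, *_ = np.linalg.lstsq(A, dst, rcond=None)        # (3, 2) least-squares solution
    return X.T                                         # (2, 3) affine correction matrix

def apply_affine(points, M):
    """Apply a 2x3 affine matrix M to (N, 2) point coordinates."""
    pts = np.asarray(points, dtype=np.float64)
    return pts @ M[:, :2].T + M[:, 2]

# Toy check: key points rotated and shifted map back onto the "standard" layout.
standard = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
theta = np.deg2rad(10)
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
tilted = standard @ R.T + np.array([30.0, 40.0])
M = estimate_correction_matrix(tilted, standard)
print(np.round(apply_affine(tilted, M), 3))            # approximately equal to standard
```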
Fig. 6 shows a before-and-after comparison of correcting the facial region of the target object according to an embodiment of the present disclosure. As can be seen from Fig. 6, the facial region in the left image is tilted to the left; applying step S202 of the embodiments of the present disclosure yields the right image, which is the frontal face image obtained by correcting the face image on the left.
After the normalized frontal image of the target object of the photographed image is obtained, the second location information of each key point in the frontal image can be identified; for example, each piece of second location information can be identified by a deep learning algorithm, or the key points can be identified in other ways, such as by feature recognition algorithms. After each piece of second location information is obtained, it can be determined whether each piece of second location information satisfies a preset condition. If the preset condition is satisfied, the virtual effect can be added according to the second location information; if the preset condition is not satisfied, the virtual effect can be added directly according to the photographed image.
Fig. 7 shows a flow chart of determining whether the second location information satisfies the preset condition according to an embodiment of the present disclosure. Step S300 may include:
S301: obtaining the confidence of the second location information; and
S302: determining that the second location information satisfies the preset condition when the confidence of the second location information is greater than or equal to the confidence threshold.
In the embodiments of the present disclosure, a neural network model can be used to determine the second location information of each key point in the frontal image of the target object in the photographed image. While the neural network model determines the second location information of each key point in the frontal image, it can also determine a corresponding score (confidence) for each piece of second location information, and the confidence of each piece of second location information can then be compared with the confidence threshold. The confidence refers to the accuracy with which the neural network determines each piece of second location information; those skilled in the art can determine the confidence using existing means, and the embodiments of the present disclosure do not specifically limit this.
In the embodiments of the present disclosure, if the confidence of the obtained second location information is greater than or equal to the confidence threshold, meaning that the confidence of the current second location information is high (the precision is high) and the key point positions have been tracked, the second location information can be determined to satisfy the preset condition. Otherwise, if the confidence of the second location information is less than the confidence threshold, step S303 is performed: the currently obtained second location information is considered inaccurate, the facial region needs to be extracted again and entered into the next iteration, and the convergence step is repeated (step S202 is repeated) until second location information with a confidence above the threshold is identified. If the process is repeated multiple times and the number of repetitions reaches a repetition threshold (for example 10 times or another value, to which the embodiments of the present disclosure are not limited) while the confidence of the identified second location information is still less than the confidence threshold, it is determined that the second location information does not satisfy the preset condition and the loop is exited.
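The repeated-convergence logic described above could be organized as in the following sketch, where extract_and_score is a hypothetical stand-in for re-running step S202 on a (possibly enlarged) image region and returning the second location information together with its confidence.

```python
def track_with_retries(extract_and_score, confidence_threshold=0.5, max_attempts=10):
    """Repeat step S202 until the second location information is confident enough.

    extract_and_score(attempt) -> (second_positions, confidence); the callable is a
    placeholder for re-extracting the frontal image, e.g. with a widened region.
    Returns (positions, True) if the preset condition is met within max_attempts,
    otherwise (last_positions, False), in which case the caller falls back to the
    third location information detected directly on the photographed image.
    """
    positions, confidence = None, 0.0
    for attempt in range(max_attempts):
        positions, confidence = extract_and_score(attempt)
        if confidence >= confidence_threshold:
            return positions, True
    return positions, False

# Demo with a stub whose confidence improves as the cropped region is enlarged.
fake = lambda attempt: ([(10 + attempt, 20 + attempt)], 0.2 + 0.1 * attempt)
print(track_with_retries(fake))   # meets the threshold on a later attempt
```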
Each time step S202 is repeated, the determined image region of the target object can be different, for example by enlarging the range of the selected image region.
Through the above configuration of the embodiments of the present disclosure, whether the second location information satisfies the preset condition can be judged from the second location information determined in the frontal image: if the confidence of the second location information is greater than or equal to the confidence threshold, the second location information satisfies the preset condition; if the confidence of the second location information is still less than the confidence threshold when the number of repetitions reaches the repetition threshold, the second location information does not satisfy the preset condition.
When the second location information satisfies the preset condition, the virtual effect can be added to the photographed image according to the second location information that satisfies the preset condition; if the second location information does not satisfy the preset condition, the virtual effect is added to the photographed image according to the third location information identified from the photographed image.
Fig. 8 shows a flow chart of adding a virtual effect according to the second location information in step S300 of the image processing method according to an embodiment of the present disclosure. In step S300, adding a virtual feature according to the second location information of each key point includes:
S304: if the second location information satisfies the preset condition, determining, according to the correction matrix, fifth location information in the photographed image corresponding to the second location information; and
S305: adding the virtual feature according to the fifth location information.
In the embodiment shown in Fig. 6, the determined second location information describes the key points of the frontal image of the target object in the photographed image; the key points of the frontal image therefore need to be converted back into the key points of the target object in the photographed image. The fifth location information corresponding to the second location information can be obtained by applying the inverse of the correction matrix determined in step S20221, and the virtual effect is then added at the positions indicated by the fifth location information.
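As a non-limiting illustration of this inverse operation, the following sketch assumes the correction matrix is represented as a 2x3 affine matrix (the disclosure does not fix a concrete representation) and uses OpenCV to map the second location information back to the fifth location information in the photographed image.

```python
import numpy as np
import cv2

def map_back_to_photo(second_locations, correction_matrix):
    """Apply the inverse of a 2x3 affine correction matrix to frontal-image
    key points (second location information) to recover the corresponding
    positions in the photographed image (fifth location information)."""
    inverse = cv2.invertAffineTransform(np.asarray(correction_matrix, dtype=np.float32))
    pts = np.asarray(second_locations, dtype=np.float32).reshape(-1, 1, 2)
    return cv2.transform(pts, inverse).reshape(-1, 2)

# Example: an identity correction matrix leaves the key points unchanged.
identity = np.array([[1, 0, 0], [0, 1, 0]], dtype=np.float32)
print(map_back_to_photo([[120.0, 80.0], [160.0, 82.0]], identity))
```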
Any AR effect may be implemented by the virtual feature to be added, such as wearing a hat, wearing glasses, red lips, eye enlargement, facial deformation, and the like. A user can select the AR effect to be added, which is not limited in the embodiments of the present disclosure. The embodiments of the present disclosure may also determine the virtual feature to be added based on received selection information, and add the virtual feature at the position indicated by the second location information, or add the virtual feature at the position indicated by the third location information.
When the second location information does not meet the preset condition, a neural network model may be used to recognize the third location information of each key point in the photographed image, and the virtual feature is added to the photographed image according to the third location information, where the neural network model includes a deep learning neural network model.
In the embodiments of the present disclosure, the key point recognition for each image can be performed by a deep learning neural network model, which offers high accuracy; however, the embodiments of the present disclosure are not limited thereto.
In addition, in the embodiments of the present disclosure, facial feature information of the photographed person can be stored, and a unique identifier can be assigned to the person. For example, the facial feature information may include the second location information that meets the preset condition, or the third location information of a person object in the photographed image. If the photographed image includes multiple person objects, i.e., multiple faces, the faces can be recognized according to the second location information that meets the preset condition or from the photographed image, a preset identifier (ID) can be assigned to each recognized face to distinguish the persons, and the corresponding facial feature information can be stored accordingly.
During subsequent photographing, it can first be determined whether a person object included in the photographed image matches a previously identified person; if so, the virtual feature is added directly according to the corresponding location information, so that the same person receives a consistent virtual-feature effect. A minimal sketch of such an identifier registry is shown below.
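The sketch assigns a preset ID to each recognized face and reuses it in later shots; the nearest-feature matching rule and the distance threshold are assumptions for illustration only and are not prescribed by this disclosure.

```python
import numpy as np

class PersonRegistry:
    """Assigns a preset identifier (ID) to each recognized face and reuses it
    for the same person in later shots (sketch)."""

    def __init__(self, match_threshold=20.0):
        self.features = {}            # ID -> stored facial feature vector
        self.next_id = 0
        self.match_threshold = match_threshold

    def identify(self, feature):
        """Return the ID of the nearest stored feature, or register a new one."""
        feature = np.asarray(feature, dtype=np.float32).ravel()
        for pid, stored in self.features.items():
            if np.linalg.norm(stored - feature) < self.match_threshold:
                return pid            # previously identified person
        pid = self.next_id            # new person: assign a preset identifier
        self.next_id += 1
        self.features[pid] = feature
        return pid
```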
The embodiments of the present disclosure can realize tracking on the preview image and, by combining the tracking result of the preview image, add the virtual feature to the photographed image, so that the virtual effect can be realized accurately. Specifically, the location information of each key point in the preview image can first be converted according to the image parameter of the photographed image to obtain the second location information of each key point of the frontal image corresponding to the photographed image; when the second location information can be used for key point positioning and meets the preset condition, the virtual effect can be realized in the photographed image according to the second location information determined by tracking the preview image; if the second location information does not meet the preset condition, the third location information of each key point can be directly recognized from the photographed image, and the virtual effect is realized according to the third location information. The embodiments of the present disclosure can ensure that the added virtual feature corresponds accurately to the target. Even in a case where, in the prior art, movement of the photographed object at the moment of photographing prevents features in the photographed image from being accurately recognized and the corresponding virtual effect from being accurately added, the key points can still be tracked and located through the preview image, so that the position of each key point is accurately tracked and recognized, and the probability that the virtual effect cannot be added, or cannot be added effectively, is reduced.
A person skilled in the art can understand that, in the above methods of the specific embodiments, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.
In addition, the present disclosure further provides an image processing apparatus, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any of the image processing methods provided in the present disclosure; for the corresponding technical solutions and descriptions, reference may be made to the corresponding records in the method part, and details are not repeated here.
Fig. 9 shows a block diagram of an image processing apparatus according to the embodiments of the present disclosure. As shown in Fig. 9, the image processing apparatus includes:
an obtaining module 10, configured to obtain, in a case where a photographing instruction is received, first location information of each key point in a preview image and a photographed image;
a determining module 20, configured to obtain a frontal image of the target object in the photographed image by using the first location information of each key point, and determine second location information of each key point of the frontal image; and
a virtual module 30, configured to determine whether the second location information meets a preset condition; if so, add a virtual effect at a position in the photographed image corresponding to the second location information that meets the preset condition; otherwise, add a virtual effect according to third location information of each key point recognized from the photographed image.
In a possible implementation, the determining module includes:
a converting unit, configured to convert, according to an image parameter of the photographed image, the first location information of each key point into fourth location information adapted to the image parameter of the photographed image; and
a key point determining unit, configured to obtain, according to the fourth location information of each key point adapted to the image parameter of the photographed image, the frontal image of the target object in the photographed image, and determine the second location information of each key point of the frontal image.
In a possible implementation, the converting unit is further configured to convert, according to the image parameter of the photographed image, the first location information of each key point into the fourth location information adapted to the image parameter of the photographed image in at least one of the following manners (an illustrative coordinate-conversion sketch follows the three manners below):
scaling the first location information of each key point in the preview image according to the resolution of the photographed image, to obtain each piece of fourth location information;
offsetting the first location information of each key point in the preview image according to the offset between corresponding key points in the photographed image and the preview image, to obtain each piece of fourth location information; or
scaling the first location information of each key point in the preview image according to the resolution of the photographed image, and offsetting the scaled first location information of each key point according to the offset between the scaled first location information of the key point and the location information of the corresponding key point in the photographed image, to obtain the fourth location information.
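The third manner above can be illustrated by the following sketch, which scales the preview-image key points by the resolution ratio and then applies an offset. The argument names and the composition of the two steps into one helper are assumptions for illustration only.

```python
import numpy as np

def preview_to_photo(first_locations, preview_size, photo_size, offset=(0.0, 0.0)):
    """Convert first location information (preview coordinates) into fourth
    location information adapted to the photographed image: scale by the
    resolution ratio, then apply the key-point offset between the photographed
    image and the preview image."""
    pw, ph = preview_size                                            # preview resolution (w, h)
    fw, fh = photo_size                                              # photographed-image resolution (w, h)
    pts = np.asarray(first_locations, dtype=np.float32)
    scaled = pts * np.array([fw / pw, fh / ph], dtype=np.float32)    # scaling processing
    return scaled + np.asarray(offset, dtype=np.float32)             # offset processing

# Example: a 720x1280 preview mapped to a 1080x1920 photo with a small shift.
print(preview_to_photo([[360.0, 640.0]], (720, 1280), (1080, 1920), offset=(4.0, -2.0)))
```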
In a possible implementation, the key point determining unit is further configured to determine an image region of the target object in the photographed image, and to obtain, by means of affine transformation using the fourth location information of each key point and a standard image, the frontal image of the target object of the photographed image, where the standard image includes at least one of a standard facial image, a standard limb image, and a standard posture image.
In a possible implementation, the key point determining unit is further configured to compare the fourth location information of each key point with the standard location information of each key point in the standard image to obtain a correction matrix for adjusting the photographed image, where the correction matrix includes adjustment parameters for the positions of the key points; and
to correct, based on the correction matrix, the target object of the photographed image, so as to obtain the frontal image of the target object of the photographed image.
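A non-limiting sketch of this affine correction is given below; it assumes the correction matrix takes the form of a 2x3 affine matrix estimated with OpenCV's `estimateAffine2D`, which is only one possible way to obtain the adjustment parameters.

```python
import numpy as np
import cv2

def rectify_with_standard(photo, fourth_locations, standard_locations, out_size=(256, 256)):
    """Estimate a 2x3 correction matrix that maps the fourth location
    information onto the standard location information of a standard image,
    then warp the photographed image with it to obtain the frontal image."""
    src = np.asarray(fourth_locations, dtype=np.float32)
    dst = np.asarray(standard_locations, dtype=np.float32)
    correction_matrix, _ = cv2.estimateAffine2D(src, dst)     # adjustment parameters per key point
    frontal = cv2.warpAffine(photo, correction_matrix, out_size)
    return frontal, correction_matrix
```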
In a possible implementation, the virtual module is further configured to determine, in a case where the second location information meets the preset condition, fifth location information in the photographed image corresponding to the second location information according to the correction matrix, and to add the virtual feature according to the fifth location information of each key point.
In a possible implementation, the virtual module is further configured to obtain the confidence of the second location information, and to determine, in a case where the confidence of the second location information is greater than or equal to the confidence threshold, that the second location information meets the preset condition.
In a possible implementation, the virtual module is further configured to repeat, in a case where the confidence of the second location information is less than the confidence threshold, the operations of obtaining the frontal image of the target object in the photographed image according to the fourth location information of each key point adapted to the image parameter of the photographed image, and recognizing the second location information of each key point of the frontal image;
and, when the number of repetitions reaches the count threshold and the confidence of the determined second location information is still less than the confidence threshold, to determine that the second location information does not meet the preset condition; and, when second location information whose confidence is greater than or equal to the confidence threshold is obtained during the repetition, to determine that this second location information meets the preset condition.
In a possible implementation, the apparatus further includes:
a receiving module, configured to receive selection information;
where the virtual module is further configured to determine the virtual feature to be added based on the received selection information, and to add the virtual feature according to the second location information or according to the third location information.
In a possible implementation, the obtaining module is further configured to obtain, by using a neural network model, the first location information of each key point in the preview image, and the third location information of each key point in the photographed image.
In some embodiments, the functions of, or the modules included in, the apparatus provided in the embodiments of the present disclosure may be used to perform the methods described in the method embodiments above; for specific implementations, reference may be made to the descriptions of the method embodiments above, and details are not repeated here for brevity.
The embodiments of the present disclosure further provide a computer-readable storage medium having computer program instructions stored thereon, where the computer program instructions, when executed by a processor, implement the above method. The computer-readable storage medium may be a non-volatile computer-readable storage medium.
The embodiments of the present disclosure further provide an electronic device, including: a processor; and a memory configured to store processor-executable instructions, where the processor is configured to perform the above method.
The electronic device may be provided as a terminal, a server, or a device in another form.
Fig. 10 shows a block diagram of an electronic device 800 according to the embodiments of the present disclosure. For example, the electronic device 800 may be a terminal such as a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, or a personal digital assistant.
Referring to Fig. 10, the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls the overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 802 may include one or more processors 820 to execute instructions so as to perform all or some of the steps of the methods described above. In addition, the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and other components; for example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation on the electronic device 800. Examples of such data include instructions for any application or method operated on the electronic device 800, contact data, phone book data, messages, pictures, videos, and the like. The memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic disk, or an optical disc.
The power component 806 supplies power to the various components of the electronic device 800. The power component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen providing an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operation mode, such as a photographing mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or may have focusing and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC); when the electronic device 800 is in an operation mode, such as a call mode, a recording mode, or a voice recognition mode, the microphone is configured to receive external audio signals. The received audio signals may be further stored in the memory 804 or sent via the communication component 816. In some embodiments, the audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, or the like. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
The sensor component 814 includes one or more sensors for providing status assessments of various aspects of the electronic device 800. For example, the sensor component 814 may detect an on/off state of the electronic device 800 and the relative positioning of components, for example, the display and keypad of the electronic device 800; the sensor component 814 may also detect a change in position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in temperature of the electronic device 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
In an exemplary embodiment, a non-volatile computer-readable storage medium is further provided, for example, the memory 804 including computer program instructions, where the computer program instructions may be executed by the processor 820 of the electronic device 800 to complete the above method.
Fig. 11 shows a block diagram of an electronic device 1900 according to the embodiments of the present disclosure. For example, the electronic device 1900 may be provided as a server. Referring to Fig. 11, the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources represented by a memory 1932 for storing instructions executable by the processing component 1922, such as applications. The applications stored in the memory 1932 may include one or more modules, each corresponding to a set of instructions. In addition, the processing component 1922 is configured to execute instructions so as to perform the above method.
The electronic device 1900 may further include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM, or the like.
In an exemplary embodiment, a non-volatile computer-readable storage medium is further provided, for example, the memory 1932 including computer program instructions, where the computer program instructions may be executed by the processing component 1922 of the electronic device 1900 to complete the above method.
The present disclosure may be a system, a method, and/or a computer program product. The computer program product may include a computer-readable storage medium carrying computer-readable program instructions for causing a processor to implement aspects of the present disclosure.
The computer-readable storage medium may be a tangible device that can retain and store instructions used by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
The computer-readable program instructions described herein can be downloaded to respective computing/processing devices from a computer-readable storage medium, or downloaded to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium within the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In a scenario involving a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, an electronic circuit, for example, a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), may be personalized by utilizing state information of the computer-readable program instructions, and the electronic circuit may execute the computer-readable program instructions so as to implement aspects of the present disclosure.
Aspects of the present disclosure are described herein with reference to flowcharts and/or block diagrams of methods, apparatuses (systems), and computer program products according to the embodiments of the present disclosure. It should be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or the other programmable data processing apparatus, create means for implementing the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium, and these instructions cause a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium having instructions stored thereon includes an article of manufacture including instructions which implement aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
The computer-readable program instructions may also be loaded onto a computer, another programmable data processing apparatus, or another device, so that a series of operational steps are performed on the computer, the other programmable apparatus, or the other device to produce a computer-implemented process, such that the instructions which execute on the computer, the other programmable apparatus, or the other device implement the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the accompanying drawings illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to multiple embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of instructions, which includes one or more executable instructions for implementing the specified logical function. In some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings; for example, two successive blocks may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending on the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
The embodiments of the present disclosure have been described above; the foregoing descriptions are exemplary rather than exhaustive, and are not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen to best explain the principles of the embodiments, the practical application, or the technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. An image processing method, characterized by comprising:
obtaining, in a case where a photographing instruction is received, first location information of each key point in a preview image and a photographed image;
obtaining a frontal image of a target object in the photographed image by using the first location information of each key point, and determining second location information of each key point of the frontal image; and
determining whether the second location information meets a preset condition; if so, adding a virtual effect at a position in the photographed image corresponding to the second location information that meets the preset condition; otherwise, adding a virtual effect according to third location information of each key point recognized from the photographed image.
2. The method according to claim 1, characterized in that the obtaining the frontal image of the target object in the photographed image by using the first location information of each key point, and determining the second location information of each key point of the frontal image, comprises:
converting, according to an image parameter of the photographed image, the first location information of each key point into fourth location information adapted to the image parameter of the photographed image; and
obtaining, according to the fourth location information of each key point adapted to the image parameter of the photographed image, the frontal image of the target object in the photographed image, and determining the second location information of each key point of the frontal image.
3. The method according to claim 2, characterized in that the converting, according to the image parameter of the photographed image, the first location information of each key point into the fourth location information adapted to the image parameter of the photographed image comprises at least one of the following manners:
scaling the first location information of each key point in the preview image according to a resolution of the photographed image, to obtain each piece of fourth location information;
offsetting the first location information of each key point in the preview image according to an offset between corresponding key points in the photographed image and the preview image, to obtain each piece of fourth location information; and
scaling the first location information of each key point in the preview image according to the resolution of the photographed image, and offsetting the scaled first location information of each key point according to an offset between the scaled first location information of the key point and location information of the corresponding key point in the photographed image, to obtain the fourth location information.
4. The method according to claim 2 or 3, characterized in that the obtaining, according to the fourth location information of each key point adapted to the image parameter of the photographed image, the frontal image of the target object in the photographed image comprises:
determining an image region of the target object in the photographed image; and
obtaining, by means of affine transformation using the fourth location information of each key point and a standard image, the frontal image of the target object of the photographed image, wherein the standard image comprises at least one of a standard facial image, a standard limb image, and a standard posture image.
5. The method according to claim 4, characterized in that the obtaining, by means of affine transformation using the fourth location information of each key point and the standard image, the frontal image of the target object of the photographed image comprises:
comparing the fourth location information of each key point with standard location information of each key point in the standard image, to obtain a correction matrix for adjusting the photographed image, wherein the correction matrix comprises adjustment parameters for the positions of the key points; and
correcting, based on the correction matrix, the target object of the photographed image, to obtain the frontal image of the target object of the photographed image.
6. The method according to claim 5, characterized in that the adding the virtual effect at the position in the photographed image corresponding to the second location information that meets the preset condition comprises:
in a case where the second location information meets the preset condition, determining, according to the correction matrix, fifth location information in the photographed image corresponding to the second location information; and
adding a virtual feature according to the fifth location information of each key point.
7. The method according to any one of claims 2-6, characterized in that the determining whether the second location information meets the preset condition comprises:
obtaining a confidence of the second location information; and
in a case where the confidence of the second location information is greater than or equal to a confidence threshold, determining that the second location information meets the preset condition.
8. An image processing apparatus, characterized by comprising:
an obtaining module, configured to obtain, in a case where a photographing instruction is received, first location information of each key point in a preview image and a photographed image;
a determining module, configured to obtain a frontal image of a target object in the photographed image by using the first location information of each key point, and determine second location information of each key point of the frontal image; and
a virtual module, configured to determine whether the second location information meets a preset condition; if so, add a virtual effect at a position in the photographed image corresponding to the second location information that meets the preset condition; otherwise, add a virtual effect according to third location information of each key point recognized from the photographed image.
9. An electronic device, characterized by comprising:
a processor; and
a memory configured to store processor-executable instructions,
wherein the processor is configured to perform the method according to any one of claims 1 to 7.
10. A computer-readable storage medium having computer program instructions stored thereon, characterized in that the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 7.
CN201811278366.XA 2018-10-30 2018-10-30 Image processing method and device, electronic equipment and storage medium Active CN109325908B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811278366.XA CN109325908B (en) 2018-10-30 2018-10-30 Image processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811278366.XA CN109325908B (en) 2018-10-30 2018-10-30 Image processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109325908A true CN109325908A (en) 2019-02-12
CN109325908B CN109325908B (en) 2023-07-21

Family

ID=65259769

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811278366.XA Active CN109325908B (en) 2018-10-30 2018-10-30 Image processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109325908B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104537630A (en) * 2015-01-22 2015-04-22 厦门美图之家科技有限公司 Method and device for image beautifying based on age estimation
US20170034409A1 (en) * 2015-07-31 2017-02-02 Xiaomi Inc. Method, device, and computer-readable medium for image photographing
CN106612396A (en) * 2016-11-15 2017-05-03 努比亚技术有限公司 Photographing device, photographing terminal and photographing method
CN107958439A (en) * 2017-11-09 2018-04-24 北京小米移动软件有限公司 Image processing method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
QI, Peng: "Design and Implementation of Camera Special Effects Software Based on Android ***", China Master's Theses Full-text Database (Master), Information Science and Technology Series (Monthly), No. 04, 2018 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110610178A (en) * 2019-10-09 2019-12-24 Oppo广东移动通信有限公司 Image recognition method, device, terminal and computer readable storage medium
CN110717452A (en) * 2019-10-09 2020-01-21 Oppo广东移动通信有限公司 Image recognition method, device, terminal and computer readable storage medium
CN110717452B (en) * 2019-10-09 2022-04-19 Oppo广东移动通信有限公司 Image recognition method, device, terminal and computer readable storage medium
CN112532875A (en) * 2020-11-24 2021-03-19 展讯通信(上海)有限公司 Terminal device, image processing method and device thereof, and storage medium
CN112530019A (en) * 2020-12-11 2021-03-19 中国科学院深圳先进技术研究院 Three-dimensional human body reconstruction method and device, computer equipment and storage medium
CN112530019B (en) * 2020-12-11 2023-03-14 中国科学院深圳先进技术研究院 Three-dimensional human body reconstruction method and device, computer equipment and storage medium
CN113096152A (en) * 2021-04-29 2021-07-09 北京百度网讯科技有限公司 Multi-object motion analysis method, device, equipment and medium

Also Published As

Publication number Publication date
CN109325908B (en) 2023-07-21

Similar Documents

Publication Publication Date Title
TWI775091B (en) Data update method, electronic device and storage medium thereof
CN109325908A (en) Image processing method and device, electronic equipment and storage medium
CN109618184A (en) Method for processing video frequency and device, electronic equipment and storage medium
CN105512605B (en) Face image processing process and device
CN109872297A (en) Image processing method and device, electronic equipment and storage medium
CN103688273B (en) Amblyopia user is aided in carry out image taking and image review
WO2020007241A1 (en) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN111242090B (en) Human face recognition method, device, equipment and medium based on artificial intelligence
CN105357425B (en) Image capturing method and device
CN106295499B (en) Age estimation method and device
CN109977775B (en) Key point detection method, device, equipment and readable storage medium
CN111985268A (en) Method and device for driving animation by human face
CN109584362A (en) 3 D model construction method and device, electronic equipment and storage medium
CN110706339B (en) Three-dimensional face reconstruction method and device, electronic equipment and storage medium
CN109977868A (en) Image rendering method and device, electronic equipment and storage medium
WO2023279960A1 (en) Action processing method and apparatus for virtual object, and storage medium
CN109615593A (en) Image processing method and device, electronic equipment and storage medium
CN109284681A (en) Position and posture detection method and device, electronic equipment and storage medium
CN107944367A (en) Face critical point detection method and device
CN109377446A (en) Processing method and processing device, electronic equipment and the storage medium of facial image
CN111241887A (en) Target object key point identification method and device, electronic equipment and storage medium
CN109446912A (en) Processing method and processing device, electronic equipment and the storage medium of facial image
CN112509005B (en) Image processing method, image processing device, electronic equipment and storage medium
CN109711546A (en) Neural network training method and device, electronic equipment and storage medium
CN110458218A (en) Image classification method and device, sorter network training method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant