WO2019154231A1 - Image processing method, electronic device and storage medium - Google Patents

Image processing method, electronic device and storage medium

Info

Publication number
WO2019154231A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
feature points
facial image
facial
dimensional
Prior art date
Application number
PCT/CN2019/073995
Other languages
English (en)
French (fr)
Inventor
陈宇
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Publication of WO2019154231A1
Priority to US16/897,341 (US11436779B2)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G06V40/176 Dynamic expression
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Definitions

  • the present application relates to the field of image technologies, and in particular, to an image processing method, an electronic device, and a storage medium.
  • the embodiment of the present application provides an image processing method, an electronic device, and a storage medium, which can achieve the effect of dynamically changing a static image.
  • the technical solution is as follows:
  • an image processing method for use in an electronic device, the method comprising:
  • an image processing apparatus for use in an electronic device, the apparatus comprising:
  • a three-dimensional image acquisition module configured to acquire a three-dimensional image of a first facial image according to a plurality of first feature points of the first facial image, where the plurality of first feature points are used to identify a first face in the first facial image;
  • a feature point acquisition module configured to acquire, in real time, a plurality of second feature points of multiple frames of a second facial image, where the plurality of second feature points are used to identify expression changes of a second face;
  • a synchronization module configured to synchronize the three-dimensional image of the first facial image according to changes of the plurality of second feature points of the multiple frames of the second facial image, to simulate the expression changes of the second face.
  • an electronic device including a processor and a memory, the memory storing at least one instruction that is loaded and executed by the processor to implement the operations performed by the image processing method described above.
  • a computer readable storage medium having stored therein at least one instruction that is loaded and executed by a processor to implement the operations performed by the image processing method described above.
  • a three-dimensional image is obtained by using the feature points of a two-dimensional facial image, so that the feature points on the three-dimensional image are used to synchronize the displacements of the feature points on another face, achieving the purpose of synchronizing the other face's expression onto the current facial image. This provides a method for making real-time, dynamic changes on a static image, so that the face in the image can change correspondingly with the expression changes of the captured target, reflecting various dynamic expressions such as crying, smiling, laughing, and making faces, and lowering the technical threshold for expression synchronization.
  • FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application.
  • FIG. 2 is a flowchart of an image processing method according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a feature point identification manner provided by an embodiment of the present application.
  • FIG. 4 is an effect diagram of a facial three-dimensional model provided by an embodiment of the present application before texture mapping is performed.
  • FIG. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
  • FIG. 6 is a structural block diagram of a terminal 600 provided by an exemplary embodiment of the present application.
  • FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application.
  • the implementation environment includes: an electronic device and an imaging device.
  • the imaging device may be an imaging device mounted on the electronic device, or may be an imaging device connected to the electronic device, and the connection may be a wired connection or a wireless connection.
  • the connection is not limited by the embodiment of the present application.
  • the first facial image related to the embodiment of the present application may be any image including a face, and the user of the electronic device may select any image including a face as the image to be processed; of course, the first facial image may be a human face image or the facial image of any animal.
  • the second facial image may be a facial image acquired by the imaging device in real time; the real-time acquisition process can capture the face of the photographed subject serving as the synchronization target, so that, through feature point extraction, feature points identify the expression changes of the face.
  • FIG. 2 is a flowchart of an image processing method provided by an embodiment of the present application.
  • the image processing method is applied to an electronic device, and the electronic device can be provided as a terminal.
  • the method includes:
  • the terminal acquires a first facial image on which expression synchronization is to be performed.
  • the first facial image refers to the image that currently needs expression synchronization, and the user can select any image on the terminal as the first facial image. It should be noted that the first facial image may be a human face image or the facial image of another animal; since most animals have facial features, cross-species expression synchronization can also be performed to achieve a better display effect. This embodiment of the present application does not limit this.
  • the terminal performs face detection on the first facial image to obtain a plurality of first feature points of the first facial image, where the plurality of first feature points are used to identify the first face in the first facial image.
  • during face detection, a plurality of feature points representing facial features can be detected. Taking human face detection as an example, as shown in FIG. 3, in the embodiment of the present application the feature points obtained by performing face detection on the first facial image are taken as the first feature points, and the feature points obtained in real time in the subsequent steps are taken as the second feature points, for distinction.
  • the feature point identification manner shown in FIG. 3 is only one possible implementation; other identification manners may be used in face detection, and when different face detection methods are used, the number of obtained feature points may also differ.
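  • As an illustrative sketch of the detection step above, any off-the-shelf landmark detector can supply numbered feature points; the snippet below uses dlib's 68-point predictor purely as an assumption (the application's FIG. 3 numbering and feature point count may differ, and no particular detector is mandated):

```python
import cv2
import dlib

# Assumed tooling: dlib's 68-point landmark model, one common (not mandated)
# way to obtain numbered feature points from a facial image.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def detect_feature_points(image):
    """Return the numbered landmarks of the first detected face as (x, y) pairs."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if not faces:
        return []
    shape = predictor(gray, faces[0])
    return [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
```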
  • the terminal acquires a facial three-dimensional model of the first face according to the plurality of first feature points of the first facial image.
  • the terminal may first acquire a generic facial three-dimensional model, and then adjust the generic facial three-dimensional model based on the plurality of first feature points, so that the sizes, spacings, positions, and the like of the facial organs on the generic facial three-dimensional model conform to those of the facial organs of the first face in the first facial image; the facial three-dimensional model obtained through this adjustment can realistically simulate the first facial image.
  • the generic facial three-dimensional model may be a generic face model, such as the CANDIDE-3 model.
  • the adjustment may include both an overall adjustment and a local adjustment.
  • the overall adjustment scales the generic three-dimensional facial model so that the size and direction of its projection on the two-dimensional plane are consistent with the face in the facial image.
  • the local adjustment adjusts the corresponding vertices in the facial model according to the plurality of first feature points so that they match the positions of the feature points detected in the facial image.
  • only the local adjustment may be performed if the face's size and direction already match those of the model, which is not limited in this embodiment of the present application.
  • when performing the overall adjustment and the local adjustment, the generic three-dimensional facial model can be projected onto a two-dimensional plane, and the projection points can then be adjusted based on the plurality of first feature points.
  • each of the plurality of first feature points has a feature point number, each vertex on the generic three-dimensional facial model also has a number, and the feature point numbers and vertex numbers are in one-to-one correspondence, so that the generic three-dimensional facial model can be projected onto a two-dimensional plane to obtain the projection points of multiple vertices.
  • the multiple vertices include a plurality of vertices used to identify the facial contour, and the projection points of these contour vertices can determine a facial region; the size and direction of the facial region are then adjusted based on the first feature points, among the plurality of first feature points, that identify the facial contour, so that the adjusted projection points are obtained from the adjustment of the facial region. For example, the adjustment may include proportional adjustment and width and height adjustment.
  • during local adjustment, the position of each adjusted projection point may be further adjusted to coincide with the plurality of first feature points, and the depth information of the corresponding vertex is then applied to the adjusted projection point to obtain the adjusted three-dimensional facial model.
  • the foregoing adjustment process is only one example of adjustment, and the embodiment of the present application does not limit the specific manner of adjustment.
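  • The projection-based adjustment above can be sketched numerically as follows; the helper and its argument names are hypothetical, and it assumes the numbered correspondence between model vertices and detected feature points described in the text:

```python
import numpy as np

def adjust_generic_model(vertices, feature_points, vertex_ids, contour_ids):
    """Overall + local adjustment sketch. vertices is the (N, 3) generic face
    model; feature_points is (M, 2); vertex_ids[i] is the model vertex whose
    number matches feature point i; contour_ids selects the feature points
    that identify the facial contour."""
    proj = vertices[:, :2].astype(float)   # project the model onto the 2D plane
    depth = vertices[:, 2:3]               # keep each vertex's depth information

    # Overall adjustment: scale and translate all projection points so the
    # projected facial region matches the region spanned by the contour
    # feature points (a proportional plus width/height adjustment).
    src = proj[vertex_ids][contour_ids]
    dst = feature_points[contour_ids]
    scale = np.ptp(dst, axis=0) / np.ptp(src, axis=0)
    proj = (proj - src.min(axis=0)) * scale + dst.min(axis=0)

    # Local adjustment: move each numbered projection point onto its
    # corresponding first feature point, then re-apply the stored depth.
    proj[vertex_ids] = feature_points
    return np.hstack([proj, depth])
```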
  • the terminal acquires the four image vertices of the first facial image and the texture coordinates of the plurality of first feature points.
  • in addition to the face region, the first facial image may also include a background region; in order for the modeled three-dimensional image to reflect not only the shape of the face but also preserve the display effect of the image as a whole, the texture coordinates of the image vertices need to be determined to ensure the completeness of the constructed three-dimensional image.
  • the specific process of obtaining the texture coordinates may include: converting the coordinates of the key points and the image vertices into the coordinate system of the image texture, for example as texture x-coordinate = (x-coordinate + image width / 2) / image width and texture y-coordinate = (y-coordinate + image width / 2) / image width; any coordinate conversion manner may be used, which is not limited in this embodiment of the present application. A sketch of this conversion follows.
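  • The stated conversion is direct to implement; the sketch below simply restates that formula (any other coordinate conversion manner would serve equally well):

```python
def to_texture_coords(points, image_width):
    """Convert key point / image vertex coordinates into the image texture's
    coordinate system, per the formula in the text. Note that both axes are
    normalized by the image width, as the text specifies."""
    return [((x + image_width / 2) / image_width,
             (y + image_width / 2) / image_width) for (x, y) in points]
```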
  • the terminal maps the first facial image as a texture onto the facial three-dimensional model according to the texture coordinates, to obtain a three-dimensional image of the first facial image.
  • through the texture mapping process, a realistic visual effect can be achieved without changing the geometric information of the facial three-dimensional model; the texture mapping may refer to color texture mapping, which is not elaborated in this embodiment of the present application.
  • FIG. 4 is an effect diagram of a facial three-dimensional model provided by an embodiment of the present application before texture mapping is performed.
  • by mapping the first facial image as a texture onto the surface of the constructed facial three-dimensional model according to the texture coordinates of the image vertices and the first feature points, a highly realistic three-dimensional image of the first face can be obtained.
  • steps 204 and 205 map the first facial image as a texture onto the facial three-dimensional model according to the texture coordinates of the first facial image, to obtain the three-dimensional image of the first facial image.
  • the texture mapping in this process may be performed by mapping the feature points of the two-dimensional texture image to coordinates on the three-dimensional model, which is not specifically limited in this embodiment of the present application.
  • the above steps 201 to 205 are one example of a process of acquiring the three-dimensional image of the first facial image based on the two-dimensional first facial image; the process of acquiring a three-dimensional image based on a two-dimensional image may also be carried out in other manners, which is not limited by the embodiment of the present application.
  • the terminal acquires, through an imaging device and in real time, a plurality of second feature points of multiple frames of a second facial image, where the plurality of second feature points are used to identify the expression changes of the second face.
  • the user may select the target whose expression is to be synchronized; for example, the target may be a face in a real scene or a face in a certain video, so the user can shoot the second face in the second facial image with the imaging device, acquiring multiple frames of the second facial image in real time during shooting; the facial expressions and/or actions in the multi-frame second facial image may change over the shooting time.
  • the terminal performs face detection on the multi-frame second facial image acquired in real time to obtain the plurality of second feature points.
  • when the second facial image is acquired in real time by the imaging device, if it is detected that the offset between the face orientation in the second facial image and the direction facing the imaging device is too large, the subsequent synchronization process is not performed; this avoids large distortion during synchronization and ensures that the synchronized expression changes can simulate the target's expression changes in real time.
  • the terminal may generate prompt information matching the offset according to the target's offset, and the prompt information may be used to prompt the target to adjust its posture so that synchronization can subsequently be performed.
  • for example, when the target is offset 20 degrees to the left, prompt information such as "rotate 20 degrees to the right" may be generated according to the offset.
  • the terminal converts the displacements of the plurality of second feature points between two frames of the second facial image acquired in real time into target displacements of multiple vertices on the three-dimensional image, where the multiple vertices in the three-dimensional image are in one-to-one correspondence with the plurality of first feature points in the first facial image.
  • the conversion process may include: taking the distance between two target first feature points among the plurality of first feature points as a first unit distance; taking the distance between the two second feature points corresponding to the two target first feature points in one frame of the second facial image as a second unit distance; acquiring the ratio between the second unit distance and the first unit distance; and dividing the displacement of each of the plurality of second feature points in the two frames of the second facial image by the ratio, to obtain the target displacement of each vertex on the three-dimensional image.
  • since the three-dimensional image is obtained based on the first facial image, the distances between feature points in the three-dimensional image can be reflected by the distances between feature points in the first facial image; therefore, the ratio between the distance between two feature points in the second facial image and the distance between the corresponding two feature points in the first facial image, both being two-dimensional images, can serve as the basis for displacement conversion between the three-dimensional image and the second facial image.
  • when synchronizing the target's expression changes to the three-dimensional image, the terminal may, each time a frame of the second facial image is acquired, synchronize using the displacements of the second feature points between the currently acquired frame and the previous frame; alternatively, it may synchronize every target number of frames of the second facial image, according to the displacements of the second feature points between the first frame and the last frame within that target number of frames.
  • the embodiment of the present application does not specifically limit this.
  • the displacements of the plurality of second feature points may be relative displacements, that is, the displacement in a later frame is marked with reference to the positions in the previous frame; or they may be absolute displacements, that is, the displacement in each frame is marked with reference to the positions of the feature points in the first frame of the second facial image.
  • specifically, the displacement of a second feature point in one frame of the second facial image may refer to the displacement of the feature point in that frame relative to the same feature point in the first frame of the second facial image; this marking manner can ensure the consistency of the reference and reduce error.
  • the terminal applies the target displacements of the plurality of vertices on the three-dimensional image.
  • in the foregoing steps, the three-dimensional image of the first facial image is changed according to the plurality of second feature points to simulate the expression changes of the second face; this is the process of synchronizing the three-dimensional image of the first facial image according to the changes of the plurality of second feature points of the multiple frames of the second facial image.
  • the conversion process is illustrated with an example. For ease of description, feature point No. 83 is taken as the coordinate origin, the direction from this point to the forehead is the positive direction of the x-axis, and the direction perpendicular to the x-axis, pointing right, is the positive direction of the y-axis. The distance between the two first feature points No. 83 and No. 85 in the three-dimensional image is defined as the first unit distance L1; these two first feature points correspond to the two second feature points No. 83 and No. 85 on the second facial image, and the distance between those two second feature points in the first frame of the second facial image is acquired as the second unit distance L2. The ratio L2/L1 is then acquired; when the displacement of any feature point between the two frames of the second facial image is M, the target displacement of the corresponding vertex on the three-dimensional image can be obtained as M/(L2/L1), and the target displacements of the other vertices can be deduced in the same manner. When the displacements of the plurality of vertices are applied, the shapes, positions, and the like of the organs on the three-dimensional image change, achieving the purpose of synchronizing the expression changes on the second face.
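  • The worked example above reduces to a few lines; the sketch below assumes absolute displacements marked against the first frame, with feature points No. 83 and No. 85 (in the application's numbering, which may differ from that of any particular detector) as the unit-distance pair:

```python
import numpy as np

def target_displacements(first_frame_pts, current_pts, l1, a=83, b=85):
    """Convert second feature point displacements into target vertex
    displacements: L2 is measured in the first frame of the second facial
    image, and each displacement M is divided by the ratio L2/L1."""
    l2 = np.linalg.norm(first_frame_pts[a] - first_frame_pts[b])  # second unit distance
    ratio = l2 / l1                       # L2 / L1
    m = current_pts - first_frame_pts     # displacement M of each feature point
    return m / ratio                      # M / (L2 / L1), per corresponding vertex
```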
  • further, the face serving as the synchronization target may not face directly forward; because of the orientation of the organs on the left and right sides of the face, the detected displacements of the feature points may be asymmetric. In this case, the displacements may be corrected based on the face orientation data. That is, the method may further include: correcting the displacements of the plurality of second feature points acquired in real time according to the face orientation data obtained by face detection.
  • for example, when the synchronization target's face is turned slightly to the left, then, by the principle of perspective, the size of the left eye in the captured second facial image is smaller than that of the right eye. If the synchronization target holds that posture while its expression changes, then even if the left and right eyes move by the same amplitude, the motion amplitude of the left eye and that of the right eye determined from the extracted feature points will differ. Therefore, the displacements of the plurality of second feature points acquired in real time need to be corrected based on the angle data in the face orientation data obtained by face detection.
  • for example, the displacement of a second feature point acquired between the two frames of the second facial image may be divided by the cosine of the face orientation angle to obtain the actual displacement of the second feature point.
  • the correction process may also be performed in other manners, which is not limited in this embodiment of the present application.
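  • A sketch of the cosine correction in the example above; dividing by the cosine of the face orientation angle is only one admissible correction scheme:

```python
import numpy as np

def correct_displacements(displacements, orientation_degrees):
    """Divide the raw second feature point displacements by the cosine of
    the face orientation angle to approximate the actual displacements."""
    cos_angle = np.cos(np.radians(orientation_degrees))
    # Near-profile poses make the correction blow up; the offset check in the
    # text would skip synchronization in that case anyway.
    if abs(cos_angle) < 1e-3:
        raise ValueError("face orientation offset too large to synchronize")
    return np.asarray(displacements, dtype=float) / cos_angle
```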
  • the method further includes: recording the change process of the three-dimensional image when a recording instruction is received. If the user wants to make a special-effects video or an emoticon pack, expression synchronization can be performed in the above manner and the change process of the three-dimensional image recorded, to obtain a video file or an animated image; the production method is simple, the operation is easy, and the result is highly shareable.
  • the method provided by the embodiment of the present application obtains a three-dimensional image by using the feature points of a two-dimensional facial image, and then uses the feature points on the three-dimensional image to synchronize the displacements of the feature points on another face, achieving the purpose of synchronizing the other face's expression onto the current facial image. This provides a way to make real-time, dynamic changes on a static image, so that the face in the image can change its expression along with the expression of the captured target, reflecting various dynamic expressions such as crying, smiling, laughing, and making faces, and lowering the technical threshold for expression synchronization.
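  • Pulling the steps together, an end-to-end loop might look like the following sketch. detect_feature_points, target_displacements, and correct_displacements are the hypothetical helpers sketched above, and apply_to_vertices stands in for whatever renderer updates the three-dimensional image; none of these names come from the application itself:

```python
import cv2
import numpy as np

def run_expression_sync(l1, apply_to_vertices, camera_index=0):
    """Steps 206-208 as a loop: capture second facial images in real time,
    detect second feature points, convert their displacements, and apply
    them to the three-dimensional image's vertices."""
    cap = cv2.VideoCapture(camera_index)   # the imaging device
    base_pts = None                        # feature points of the first frame
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        pts = np.array(detect_feature_points(frame), dtype=float)
        if pts.size == 0:
            continue                       # no face detected in this frame
        if base_pts is None:
            base_pts = pts                 # reference frame for L2 and M
            continue
        disp = target_displacements(base_pts, pts, l1)
        apply_to_vertices(disp)            # update the 3D image's vertices
    cap.release()
```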
  • FIG. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
  • the apparatus includes:
  • the three-dimensional image acquisition module 501 is configured to acquire a three-dimensional image of a first facial image according to a plurality of first feature points of the first facial image, where the plurality of first feature points are used to identify a first face in the first facial image;
  • the feature point acquisition module 502 is configured to acquire, in real time, a plurality of second feature points of multiple frames of a second facial image, where the plurality of second feature points are used to identify expression changes of a second face;
  • the synchronization module 503 is configured to synchronize the three-dimensional image of the first facial image according to the changes of the plurality of second feature points of the multiple frames of the second facial image, to simulate the expression changes of the second face.
  • in a possible implementation, the three-dimensional image acquisition module 501 is configured to: perform face detection on the first facial image to obtain the plurality of first feature points; acquire the facial three-dimensional model of the first face according to the plurality of first feature points; and map the first facial image as a texture onto the facial three-dimensional model according to the texture coordinates of the first facial image, to obtain the three-dimensional image of the first facial image.
  • in a possible implementation, the three-dimensional image acquisition module 501 is configured to: acquire the four image vertices of the first facial image and the texture coordinates of the plurality of first feature points; and map the first facial image as a texture onto the facial three-dimensional model according to the texture coordinates, to obtain the three-dimensional image of the first facial image.
  • the synchronization module 503 includes:
  • a conversion unit configured to convert the displacements of the plurality of second feature points between two frames of the second facial image acquired in real time into target displacements of a plurality of vertices on the three-dimensional image, the plurality of vertices in the three-dimensional image being in one-to-one correspondence with the plurality of first feature points in the first facial image;
  • an application unit configured to apply the target displacements of the plurality of vertices on the three-dimensional image.
  • in a possible implementation, the conversion unit is configured to: take the distance between two target first feature points among the plurality of first feature points as a first unit distance; take the distance between the two corresponding second feature points in one frame of the second facial image as a second unit distance; acquire the ratio between the second unit distance and the first unit distance; and divide the displacement of each of the plurality of second feature points in the two frames of the second facial image by the ratio, to obtain the target displacement of each vertex on the three-dimensional model.
  • the displacement of a second feature point in one frame of the second facial image refers to the displacement of the feature point in that frame relative to the same feature point in the first frame of the second facial image.
  • the device further includes:
  • a correction module configured to correct the displacement of the plurality of second feature points acquired in real time according to the facial orientation data obtained by the face detection.
  • the device further includes:
  • a recording module configured to record a change process of the three-dimensional image when receiving a recording instruction.
  • it should be noted that, when the image processing apparatus provided by the foregoing embodiment performs image processing, the division into the above functional modules is merely an example; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the electronic device may be divided into different functional modules to complete all or part of the functions described above.
  • the image processing apparatus provided by the foregoing embodiment and the image processing method embodiments belong to the same concept; for the specific implementation process, refer to the method embodiments, and details are not repeated here.
  • FIG. 6 is a structural block diagram of a terminal 600 provided by an exemplary embodiment of the present application.
  • the terminal 600 may be a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop, or a desktop computer.
  • Terminal 600 may also be referred to as a user device, a portable terminal, a laptop terminal, a desktop terminal, or by other names.
  • the terminal 600 includes a processor 601 and a memory 602.
  • Processor 601 can include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like.
  • the processor 601 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array).
  • the processor 601 may also include a main processor and a coprocessor: the main processor is a processor for processing data in the awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in the standby state.
  • the processor 601 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display.
  • the processor 601 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
  • Memory 602 can include one or more computer readable storage media, which can be non-transitory. Memory 602 can also include high-speed random access memory, as well as non-volatile memory, such as one or more disk storage devices or flash storage devices. In some embodiments, the non-transitory computer readable storage medium in memory 602 is used to store at least one instruction, the at least one instruction being executed by processor 601 to implement the following operations of the image processing method provided by the method embodiments of the present application:
  • acquiring a three-dimensional image of a first facial image according to a plurality of first feature points of the first facial image, the plurality of first feature points being used to identify a first face in the first facial image;
  • acquiring, in real time, a plurality of second feature points of multiple frames of a second facial image, the plurality of second feature points being used to identify expression changes of a second face;
  • synchronizing the three-dimensional image of the first facial image according to the changes of the plurality of second feature points of the multiple frames of the second facial image, to simulate the expression changes of the second face.
  • in a possible implementation, the at least one instruction is loaded and executed by the processor 601 to: perform face detection on the first facial image to obtain the plurality of first feature points of the first facial image; acquire the facial three-dimensional model of the first face according to the plurality of first feature points; and map the first facial image as a texture onto the facial three-dimensional model according to the texture coordinates of the first facial image, to obtain the three-dimensional image of the first facial image.
  • in a possible implementation, the at least one instruction is loaded and executed by the processor 601 to: convert the displacements of the plurality of second feature points between two frames of the second facial image acquired in real time into target displacements of a plurality of vertices on the three-dimensional image, the plurality of vertices being in one-to-one correspondence with the plurality of first feature points in the first facial image; and apply the target displacements of the plurality of vertices on the three-dimensional image.
  • in a possible implementation, the at least one instruction is loaded and executed by the processor 601 to: take the distance between two target first feature points among the plurality of first feature points as a first unit distance; take the distance between the two corresponding second feature points in one frame of the second facial image as a second unit distance; acquire the ratio between the second unit distance and the first unit distance; and divide the displacement of each of the plurality of second feature points in the two frames of the second facial image by the ratio, to obtain the target displacement of each vertex on the three-dimensional model.
  • in a possible implementation, the displacement of a second feature point in one frame of the second facial image refers to the displacement of the feature point in that frame relative to the same feature point in the first frame of the second facial image.
  • in a possible implementation, the displacement of a second feature point in one frame of the second facial image refers to the displacement of the feature point in that frame relative to the same feature point in the first frame of the second facial image.
  • in a possible implementation, the at least one instruction is loaded and executed by the processor 601 to: correct the displacements of the plurality of second feature points acquired in real time according to the face orientation data obtained by face detection.
  • in a possible implementation, the at least one instruction is loaded and executed by the processor 601 to: record the change process of the three-dimensional image when a recording instruction is received.
  • the terminal 600 optionally further includes: a peripheral device interface 603 and at least one peripheral device.
  • the processor 601, the memory 602, and the peripheral device interface 603 can be connected by a bus or a signal line.
  • Each peripheral device can be connected to the peripheral device interface 603 via a bus, signal line or circuit board.
  • the peripheral device includes at least one of a radio frequency circuit 604, a display screen 605, a camera 606, an audio circuit 607, a positioning component 608, and a power source 609.
  • the peripheral device interface 603 can be used to connect at least one I/O (Input/Output)-related peripheral device to the processor 601 and the memory 602.
  • in some embodiments, the processor 601, the memory 602, and the peripheral device interface 603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 601, the memory 602, and the peripheral device interface 603 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
  • the RF circuit 604 is configured to receive and transmit an RF (Radio Frequency) signal, also referred to as an electromagnetic signal.
  • Radio frequency circuit 604 communicates with the communication network and other communication devices via electromagnetic signals.
  • the RF circuit 604 converts the electrical signal into an electromagnetic signal for transmission, or converts the received electromagnetic signal into an electrical signal.
  • the radio frequency circuit 604 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like.
  • Radio frequency circuitry 604 can communicate with other terminals via at least one wireless communication protocol.
  • the wireless communication protocol includes, but is not limited to, a metropolitan area network, various generations of mobile communication networks (2G, 3G, 4G, and 5G), a wireless local area network, and/or a WiFi (Wireless Fidelity) network.
  • the radio frequency circuit 604 may also include an NFC (Near Field Communication) related circuit, which is not limited by the embodiment of the present application.
  • the display screen 605 is used to display a UI (User Interface).
  • the UI can include graphics, text, icons, video, and any combination thereof.
  • when the display screen 605 is a touch display screen, the display screen 605 also has the ability to collect touch signals on or above its surface.
  • the touch signal can be input to the processor 601 as a control signal for processing.
  • in this case, the display screen 605 can also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard.
  • in some embodiments, there may be one display screen 605, disposed on the front panel of the terminal 600; in other embodiments, there may be at least two display screens 605, respectively disposed on different surfaces of the terminal 600 or in a folded design; in still other embodiments, the display screen 605 may be a flexible display screen, disposed on a curved surface or a folded surface of the terminal 600. The display screen 605 may even be set in a non-rectangular irregular shape, namely a special-shaped screen.
  • the display screen 605 can be prepared by using a material such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
  • Camera component 606 is used to capture images or video.
  • camera assembly 606 includes a front camera and a rear camera.
  • generally, the front camera is disposed on the front panel of the terminal, and the rear camera is disposed on the back of the terminal.
  • in some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, or a telephoto camera, so as to implement a background blur function through fusion of the main camera and the depth-of-field camera, panoramic shooting and VR (Virtual Reality) shooting through fusion of the main camera and the wide-angle camera, or other fused shooting functions.
  • camera assembly 606 can also include a flash.
  • the flash can be a monochrome temperature flash or a two-color temperature flash.
  • the two-color temperature flash is a combination of a warm flash and a cool flash that can be used for light compensation at different color temperatures.
  • the audio circuit 607 can include a microphone and a speaker.
  • the microphone is used to collect sound waves from the user and the environment and convert them into electrical signals to be input to the processor 601 for processing, or input to the radio frequency circuit 604 for voice communication.
  • for stereo collection or noise reduction purposes, there may be multiple microphones, respectively disposed at different parts of the terminal 600.
  • the microphone can also be an array microphone or an omnidirectional collection microphone.
  • the speaker is then used to convert electrical signals from the processor 601 or the RF circuit 604 into sound waves.
  • the speaker can be a conventional film speaker or a piezoelectric ceramic speaker.
  • audio circuit 607 can also include a headphone jack.
  • the location component 608 is used to locate the current geographic location of the terminal 600 to implement navigation or LBS (Location Based Service).
  • the positioning component 608 can be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
  • Power source 609 is used to power various components in terminal 600.
  • the power source 609 can be an alternating current, a direct current, a disposable battery, or a rechargeable battery.
  • the rechargeable battery can support wired charging or wireless charging.
  • the rechargeable battery can also be used to support fast charging technology.
  • terminal 600 also includes one or more sensors 610.
  • the one or more sensors 610 include, but are not limited to, an acceleration sensor 611, a gyro sensor 612, a pressure sensor 613, a fingerprint sensor 614, an optical sensor 615, and a proximity sensor 616.
  • the acceleration sensor 611 can detect the magnitude of the acceleration on the three coordinate axes of the coordinate system established by the terminal 600.
  • the acceleration sensor 611 can be used to detect components of gravity acceleration on three coordinate axes.
  • the processor 601 can control the display screen 605 to display the user interface in a landscape view or a portrait view according to the gravity acceleration signal collected by the acceleration sensor 611.
  • the acceleration sensor 611 can also be used for the acquisition of game or user motion data.
  • the gyroscope sensor 612 can detect the body direction and rotation angle of the terminal 600, and can cooperate with the acceleration sensor 611 to collect the user's 3D actions on the terminal 600. Based on the data collected by the gyroscope sensor 612, the processor 601 can implement functions such as motion sensing (such as changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
  • the pressure sensor 613 may be disposed on a side frame of the terminal 600 and/or a lower layer of the display screen 605.
  • when the pressure sensor 613 is disposed on the side frame of the terminal 600, the user's grip signal on the terminal 600 can be detected, and the processor 601 performs left/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 613.
  • when the pressure sensor 613 is disposed at the lower layer of the display screen 605, the processor 601 controls the operability controls on the UI according to the user's pressure operation on the display screen 605.
  • the operability control includes at least one of a button control, a scroll bar control, an icon control, and a menu control.
  • the fingerprint sensor 614 is used to collect the fingerprint of the user.
  • the processor 601 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 614, or the fingerprint sensor 614 identifies the identity of the user according to the collected fingerprint. Upon identifying that the user's identity is a trusted identity, the processor 601 authorizes the user to perform related sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying and changing settings, and the like.
  • the fingerprint sensor 614 can be disposed on the front, back, or side of the terminal 600. When a physical button or a manufacturer's Logo is provided on the terminal 600, the fingerprint sensor 614 can be integrated with the physical button or the manufacturer's Logo.
  • Optical sensor 615 is used to collect ambient light intensity.
  • the processor 601 can control the display brightness of the display screen 605 based on the ambient light intensity acquired by the optical sensor 615. Specifically, when the ambient light intensity is high, the display brightness of the display screen 605 is raised; when the ambient light intensity is low, the display brightness of the display screen 605 is lowered.
  • the processor 601 can also dynamically adjust the shooting parameters of the camera assembly 606 according to the ambient light intensity collected by the optical sensor 615.
  • Proximity sensor 616, also referred to as a distance sensor, is typically disposed on the front panel of terminal 600. Proximity sensor 616 is used to collect the distance between the user and the front of terminal 600. In one embodiment, when the proximity sensor 616 detects that the distance between the user and the front of the terminal 600 gradually decreases, the processor 601 controls the display screen 605 to switch from the screen-on state to the screen-off state; when the proximity sensor 616 detects that the distance between the user and the front of the terminal 600 gradually increases, the processor 601 controls the display screen 605 to switch from the screen-off state to the screen-on state.
  • those skilled in the art can understand that the structure shown in FIG. 6 does not constitute a limitation on the terminal 600, which may include more or fewer components than illustrated, combine some components, or adopt a different component arrangement.
  • a computer readable storage medium, such as a memory comprising instructions, is further provided, where the instructions are executable by a processor in a terminal to complete the following operations of the image processing method:
  • acquiring a three-dimensional image of a first facial image according to a plurality of first feature points of the first facial image, the plurality of first feature points being used to identify a first face in the first facial image; acquiring, in real time, a plurality of second feature points of multiple frames of a second facial image, the plurality of second feature points being used to identify expression changes of a second face; and synchronizing the three-dimensional image of the first facial image according to the changes of the plurality of second feature points of the multiple frames of the second facial image, to simulate the expression changes of the second face.
  • in a possible implementation, the at least one instruction is loaded and executed by the processor to: perform face detection on the first facial image to obtain the plurality of first feature points; acquire the facial three-dimensional model of the first face according to the plurality of first feature points; and map the first facial image as a texture onto the facial three-dimensional model according to the texture coordinates of the first facial image, to obtain the three-dimensional image of the first facial image.
  • in a possible implementation, the at least one instruction is loaded and executed by the processor to: convert the displacements of the plurality of second feature points between two frames of the second facial image acquired in real time into target displacements of a plurality of vertices on the three-dimensional image, the plurality of vertices being in one-to-one correspondence with the plurality of first feature points in the first facial image; and apply the target displacements of the plurality of vertices on the three-dimensional image.
  • in a possible implementation, the at least one instruction is loaded and executed by the processor to: take the distance between two target first feature points among the plurality of first feature points as a first unit distance; take the distance between the two corresponding second feature points in one frame of the second facial image as a second unit distance; acquire the ratio between the second unit distance and the first unit distance; and divide the displacement of each of the plurality of second feature points in the two frames of the second facial image by the ratio, to obtain the target displacement of each vertex on the three-dimensional model.
  • in a possible implementation, the displacement of a second feature point in one frame of the second facial image refers to the displacement of the feature point in that frame relative to the same feature point in the first frame of the second facial image.
  • in a possible implementation, the at least one instruction is loaded and executed by the processor to: correct the displacements of the plurality of second feature points acquired in real time according to the face orientation data obtained by face detection.
  • in a possible implementation, the at least one instruction is loaded and executed by the processor to: record the change process of the three-dimensional image when a recording instruction is received.
  • the computer readable storage medium can be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
  • a person of ordinary skill in the art may understand that all or part of the steps of implementing the above embodiments may be completed by hardware, or by a program instructing relevant hardware; the program may be stored in a computer readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the present application disclose an image processing method, an electronic device, and a storage medium, belonging to the field of image technologies. The method obtains a three-dimensional image by using the feature points of a two-dimensional facial image, and then uses the feature points on the three-dimensional image to synchronize the displacements of the feature points on another face, so that the face in a static image can change its expression correspondingly, in real time and dynamically, as the expression of the captured target changes, lowering the technical threshold for expression synchronization.

Description

Image processing method, electronic device, and storage medium
This application claims priority to Chinese Patent Application No. 2018101473142, filed on February 12, 2018 and entitled "Image processing method and apparatus", which is incorporated herein by reference in its entirety.
Technical Field
The present application relates to the field of image technologies, and in particular, to an image processing method, an electronic device, and a storage medium.
Background
With the development of image technologies, more and more image processing techniques have emerged, such as beautifying an image, beautifying the face in an image, and adding filters to change an image's tone. These processing manners only apply static changes to a static image; no existing technique enables a static image to change dynamically. Therefore, an image processing method that enables a static image to change dynamically is urgently needed.
Summary
The embodiments of the present application provide an image processing method, an electronic device, and a storage medium, which can achieve the effect of dynamically changing a static image. The technical solutions are as follows:
In one aspect, an image processing method is provided, applied to an electronic device, the method including:
acquiring a three-dimensional image of a first facial image according to a plurality of first feature points of the first facial image, the plurality of first feature points being used to identify a first face in the first facial image;
acquiring, in real time, a plurality of second feature points of multiple frames of a second facial image, the plurality of second feature points being used to identify expression changes of a second face; and
synchronizing the three-dimensional image of the first facial image according to changes of the plurality of second feature points of the multiple frames of the second facial image, to simulate the expression changes of the second face.
In one aspect, an image processing apparatus is provided, applied to an electronic device, the apparatus including:
a three-dimensional image acquisition module, configured to acquire a three-dimensional image of a first facial image according to a plurality of first feature points of the first facial image, the plurality of first feature points being used to identify a first face in the first facial image;
a feature point acquisition module, configured to acquire, in real time, a plurality of second feature points of multiple frames of a second facial image, the plurality of second feature points being used to identify expression changes of a second face; and
a synchronization module, configured to synchronize the three-dimensional image of the first facial image according to changes of the plurality of second feature points of the multiple frames of the second facial image, to simulate the expression changes of the second face.
In one aspect, an electronic device is provided, the electronic device including a processor and a memory, the memory storing at least one instruction, the instruction being loaded and executed by the processor to implement the operations performed by the foregoing image processing method.
In one aspect, a computer-readable storage medium is provided, the storage medium storing at least one instruction, the instruction being loaded and executed by a processor to implement the operations performed by the foregoing image processing method.
The embodiments of the present application obtain a three-dimensional image by using the feature points of a two-dimensional facial image, and then use the feature points on the three-dimensional image to synchronize the displacements of the feature points on another face, achieving the purpose of synchronizing the other face's expression onto the current facial image. This provides a method for making real-time, dynamic changes on a static image, so that the face in the image can change its expression correspondingly as the expression of the captured target changes, reflecting various dynamic expressions such as crying, smiling, laughing, and making faces, and lowering the technical threshold for expression synchronization.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments. Apparently, the accompanying drawings in the following description show merely some embodiments of the present application, and a person of ordinary skill in the art may still derive other drawings from these accompanying drawings without creative efforts.
FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application;
FIG. 2 is a flowchart of an image processing method provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of a feature point identification manner provided by an embodiment of the present application;
FIG. 4 is an effect diagram of a facial three-dimensional model provided by an embodiment of the present application before texture mapping is performed;
FIG. 5 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present application; and
FIG. 6 is a structural block diagram of a terminal 600 provided by an exemplary embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the following further describes the implementations of the present application in detail with reference to the accompanying drawings.
FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application. Referring to FIG. 1, the implementation environment includes an electronic device and an imaging device. The imaging device may be an imaging device mounted on the electronic device, or an imaging device connected to the electronic device; the connection may be wired or wireless, which is not limited in the embodiments of the present application. The first facial image involved in the embodiments of the present application may be any image that includes a face, and the user of the electronic device may select any image including a face as the image to be processed; of course, the first facial image may be a human face image or the facial image of any kind of animal. The second facial image may be a facial image collected in real time by the imaging device; the real-time collection process can capture the face of the photographed subject serving as the synchronization target, so that, through feature point extraction, feature points identify the expression changes of the face.
FIG. 2 is a flowchart of an image processing method provided by an embodiment of the present application. The image processing method is applied to an electronic device, and the electronic device may be provided as a terminal. Referring to FIG. 2, the method includes:
201. The terminal acquires a first facial image on which expression synchronization is to be performed.
The first facial image refers to the image that currently needs expression synchronization, and the user may select any image on the terminal as the first facial image. It should be noted that the first facial image may be a human face image or the facial image of another animal; since most animals have facial features, cross-species expression synchronization can also be performed to achieve a better display effect, which is not limited in the embodiments of the present application.
202. The terminal performs face detection on the first facial image to obtain a plurality of first feature points of the first facial image, the plurality of first feature points being used to identify the first face in the first facial image.
During face detection, a plurality of feature points representing facial features can be detected. Taking human face detection as an example, as shown in FIG. 3, in the embodiments of the present application the feature points obtained by performing face detection on the first facial image are taken as the first feature points, and the feature points obtained by real-time collection in the subsequent steps are taken as the second feature points, for distinction. Of course, the feature point identification manner shown in FIG. 3 is only one possible implementation; other identification manners may also be used in face detection, and when different face detection methods are used, the number of obtained feature points may also differ.
203. The terminal acquires a facial three-dimensional model of the first face according to the plurality of first feature points of the first facial image.
When acquiring the facial three-dimensional model based on the plurality of first feature points, the terminal may first acquire a generic facial three-dimensional model, and then adjust the generic facial three-dimensional model based on the plurality of first feature points, so that the sizes, spacings, positions, and the like of the facial organs on the generic facial three-dimensional model conform to those of the facial organs of the first face in the first facial image; the facial three-dimensional model obtained through this adjustment can realistically simulate the first facial image. Optionally, taking the face being a human face as an example, the generic facial three-dimensional model may be a generic face model, such as the CANDIDE-3 model.
In the embodiments of the present application, the adjustment may include both overall adjustment and local adjustment. The overall adjustment scales the generic three-dimensional facial model so that the size and direction of its projection on the two-dimensional plane are consistent with the face in the facial image. The local adjustment adjusts the corresponding vertices in the facial model according to the plurality of first feature points so that they coincide with the positions of the feature points detected in the facial image. Of course, if the size and direction of the face in the first facial image are consistent with those of the face on the generic facial three-dimensional model, only the local adjustment may be performed, which is not limited in the embodiments of the present application.
For example, both the overall adjustment and the local adjustment may be performed by projecting the generic three-dimensional facial model onto a two-dimensional plane and adjusting the projection points based on the plurality of first feature points. Each of the plurality of first feature points has a feature point number, each vertex on the generic three-dimensional facial model also has a number, and the feature point numbers and vertex numbers are in one-to-one correspondence; therefore, the generic three-dimensional facial model can be projected onto a two-dimensional plane to obtain the projection points of a plurality of vertices. The plurality of vertices includes a plurality of vertices used to identify the facial contour, and the projection points of these contour vertices can determine a facial region; the size and direction of this facial region are then adjusted based on the first feature points, among the plurality of first feature points, that identify the facial contour, so that the adjusted plurality of projection points are obtained from the adjustment of the facial region. For example, the adjustment may include proportional adjustment and width and height adjustment. During local adjustment, the position of each adjusted projection point may be further adjusted to coincide with the plurality of first feature points, and the depth information of the corresponding vertices is then applied to the adjusted projection points to obtain the adjusted three-dimensional facial model. Of course, the foregoing adjustment process is only one example of adjustment, and the embodiments of the present application do not limit the specific manner of adjustment.
204. The terminal acquires the four image vertices of the first facial image and the texture coordinates of the plurality of first feature points.
In addition to the face region, the first facial image may also include a background region. In order for the modeled three-dimensional image to reflect not only the shape of the face but also preserve the display effect of the image as a whole, the texture coordinates of the image vertices need to be determined to ensure the completeness of the constructed three-dimensional image.
The specific process of acquiring the texture coordinates may include: converting the coordinates of the key points and the image vertices into the coordinate system of the image texture. The conversion algorithm is as follows: texture x-coordinate = (x-coordinate + image width / 2) / image width; texture y-coordinate = (y-coordinate + image width / 2) / image width. It should be noted that any coordinate conversion manner may be used in the texture coordinate acquisition process, which is not limited in the embodiments of the present application.
205. The terminal maps the first facial image as a texture onto the facial three-dimensional model according to the texture coordinates, to obtain a three-dimensional image of the first facial image.
Through the texture mapping process, a realistic visual effect can be achieved without changing the geometric information of the facial three-dimensional model. The texture mapping may refer to color texture mapping, which is not elaborated in the embodiments of the present application.
Referring to FIG. 4, FIG. 4 is an effect diagram of the facial three-dimensional model provided by an embodiment of the present application before texture mapping is performed. By taking the first facial image as a texture and mapping it onto the surface of the constructed facial three-dimensional model according to the texture coordinates of the image vertices and the first feature points, a highly realistic three-dimensional image of the first face can be obtained.
The foregoing steps 204 and 205 are a process of mapping the first facial image as a texture onto the facial three-dimensional model according to the texture coordinates of the first facial image, to obtain the three-dimensional image of the first facial image. The texture mapping in this process may be performed by mapping the feature points of the two-dimensional texture image to coordinates on the three-dimensional model, which is not specifically limited in the embodiments of the present application.
The foregoing steps 201 to 205 are one example of a process of acquiring the three-dimensional image of the first facial image based on the two-dimensional first facial image; this process of acquiring a three-dimensional image based on a two-dimensional image may also be carried out in other manners, which is not limited in the embodiments of the present application.
206. The terminal acquires, in real time through the imaging device, a plurality of second feature points of multiple frames of a second facial image, the plurality of second feature points being used to identify expression changes of a second face.
In the embodiments of the present application, the user may select the target to be synchronized; for example, the target may be a face in a real scene or a face in a certain video. Therefore, the user can shoot the second face in the second facial image through the imaging device, so that multiple frames of the second facial image are acquired in real time during shooting; the facial expressions and/or actions in the multiple frames of the second facial image may change over the shooting time. The terminal performs face detection on the multiple frames of the second facial image acquired in real time to obtain the plurality of second feature points.
It should be noted that, when acquiring the second facial image in real time through the imaging device, if it is detected that the offset between the face orientation in the second facial image and the direction facing the imaging device is too large, the subsequent synchronization process is not performed, so as to avoid large distortion during synchronization and ensure that the synchronized expression changes can simulate the target's expression changes in real time.
Of course, if an excessive offset is detected, the terminal may generate, according to the target's offset, prompt information matching the offset; the prompt information may be used to prompt the target to adjust its posture so that synchronization can subsequently be performed. For example, when the target is offset 20 degrees to the left, prompt information such as "rotate 20 degrees to the right" may be generated according to the offset.
207. The terminal converts the displacements of the plurality of second feature points between two frames of the second facial image acquired in real time into target displacements of a plurality of vertices on the three-dimensional image, the plurality of vertices in the three-dimensional image being in one-to-one correspondence with the plurality of first feature points in the first facial image.
In a possible implementation, the conversion process may include: taking the distance between two target first feature points among the plurality of first feature points as a first unit distance; taking the distance between the two second feature points corresponding to the two target first feature points in one frame of the second facial image as a second unit distance; acquiring the ratio between the second unit distance and the first unit distance; and dividing the displacement of each of the plurality of second feature points in the two frames of the second facial image by the ratio, to obtain the target displacement of each vertex on the three-dimensional image.
In the embodiments of the present application, since the three-dimensional image is obtained based on the first facial image, the distances between feature points in the three-dimensional image can be reflected by the distances between feature points in the first facial image. Therefore, the ratio between the distance between two feature points in the second facial image and the distance between the corresponding two feature points in the first facial image, both being two-dimensional images, can serve as the basis for displacement conversion between the three-dimensional image and the second facial image.
When synchronizing the target's expression changes to the three-dimensional image, the terminal may, each time a frame of the second facial image is acquired, synchronize using the displacements of the second feature points between the currently acquired second facial image and the previous frame of the second facial image; alternatively, it may synchronize every target number of frames of the second facial image, according to the displacements of the second feature points between the first frame and the last frame within that target number of frames, which is not specifically limited in the embodiments of the present application.
Of course, the displacements of the plurality of second feature points may be relative displacements, that is, the displacement in a later frame is marked with reference to the positions in the previous frame; or they may be absolute displacements, that is, the displacement in each frame is marked with reference to the positions of the feature points in the first frame of the second facial image. Specifically, the displacement of a second feature point in one frame of the second facial image may refer to the displacement of that feature point in the frame relative to the same feature point in the first frame of the second facial image; this marking manner can ensure the consistency of the reference and reduce error.
208. The terminal applies the target displacements of the plurality of vertices on the three-dimensional image.
In the foregoing steps 207 and 208, the three-dimensional image of the first facial image is changed according to the plurality of second feature points to simulate the expression changes of the second face. This is the process of synchronizing the three-dimensional image of the first facial image according to the changes of the plurality of second feature points of the multiple frames of the second facial image, to simulate the expression changes of the second face.
The foregoing conversion process is illustrated below with an example. For ease of description, feature point No. 83 is taken as the coordinate origin, the direction from this point to the forehead is the positive direction of the x-axis, and the direction perpendicular to the x-axis, pointing right, is the positive direction of the y-axis. The distance between the two first feature points No. 83 and No. 85 in the three-dimensional image is defined as the first unit distance L1; these two first feature points correspond to the two second feature points No. 83 and No. 85 on the second facial image, and the distance between the two second feature points No. 83 and No. 85 in the first frame of the second facial image is acquired as the second unit distance L2. The ratio L2/L1 is then acquired; when the displacement of any feature point between the two frames of the second facial image is M, the target displacement of the corresponding vertex on the three-dimensional image can be obtained as M/(L2/L1), and the target displacements of the other vertices can be deduced in the same manner. When the displacements of the plurality of vertices are applied to the three-dimensional image, the shapes, positions, and the like of the organs on the three-dimensional image change, achieving the purpose of synchronizing the expression changes on the second face.
Further, the face serving as the synchronization target may not be facing directly forward; because of the orientation of the organs on the left and right sides of the face, the detected displacements of the feature points may be asymmetric. In this case, the displacements may be corrected based on the face orientation data. That is, the embodiments of the present application may further include: correcting the displacements of the plurality of second feature points acquired in real time according to the face orientation data obtained by face detection. For example, when the synchronization target's face is turned slightly to the left, then, based on the principle of perspective, the size of the left eye in the captured second facial image is smaller than that of the right eye; if the synchronization target holds that posture while its expression changes, then even if the left and right eyes move by the same amplitude, the motion amplitude of the left eye and that of the right eye determined from the extracted feature points will differ. Therefore, the displacements of the plurality of second feature points acquired in real time need to be corrected based on the angle data in the face orientation data obtained by face detection. For example, the displacement of a second feature point acquired between the two frames of the second facial image is divided by the cosine of the face orientation angle, to obtain the actual displacement of the second feature point. Of course, the correction process may also be carried out in other manners, which is not limited in the embodiments of the present application.
Further, the method further includes: recording the change process of the three-dimensional image when a recording instruction is received. If the user wants to make a special-effects video or an emoticon pack, expression synchronization can be performed in the above manner and the change process of the three-dimensional image recorded, to obtain a video file or an animated image; the production method is simple, the operation is easy, and the result is highly shareable.
The method provided by the embodiments of the present application obtains a three-dimensional image by using the feature points of a two-dimensional facial image, and then uses the feature points on the three-dimensional image to synchronize the displacements of the feature points on another face, achieving the purpose of synchronizing the other face's expression onto the current facial image. This provides a method for making real-time, dynamic changes on a static image, so that the face in the image can change its expression correspondingly as the expression of the captured target changes, reflecting various dynamic expressions such as crying, smiling, laughing, and making faces, and lowering the technical threshold for expression synchronization.
FIG. 5 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present application. Referring to FIG. 5, the apparatus includes:
a three-dimensional image acquisition module 501, configured to acquire a three-dimensional image of a first facial image according to a plurality of first feature points of the first facial image, the plurality of first feature points being used to identify a first face in the first facial image;
a feature point acquisition module 502, configured to acquire, in real time, a plurality of second feature points of multiple frames of a second facial image, the plurality of second feature points being used to identify expression changes of a second face; and
a synchronization module 503, configured to synchronize the three-dimensional image of the first facial image according to changes of the plurality of second feature points of the multiple frames of the second facial image, to simulate the expression changes of the second face.
In a possible implementation, the three-dimensional image acquisition module 501 is configured to:
perform face detection on the first facial image to obtain the plurality of first feature points of the first facial image;
acquire the facial three-dimensional model of the first face according to the plurality of first feature points of the first facial image; and
map the first facial image as a texture onto the facial three-dimensional model according to the texture coordinates of the first facial image, to obtain the three-dimensional image of the first facial image.
In a possible implementation, the three-dimensional image acquisition module 501 is configured to:
acquire the four image vertices of the first facial image and the texture coordinates of the plurality of first feature points; and
map the facial image as a texture onto the facial three-dimensional model according to the texture coordinates, to obtain the three-dimensional image of the first facial image.
In a possible implementation, the synchronization module 503 includes:
a conversion unit, configured to convert the displacements of the plurality of second feature points between two frames of the second facial image acquired in real time into target displacements of a plurality of vertices on the three-dimensional image, the plurality of vertices in the three-dimensional image being in one-to-one correspondence with the plurality of first feature points in the first facial image; and
an application unit, configured to apply the target displacements of the plurality of vertices on the three-dimensional image.
In a possible implementation, the conversion unit is configured to:
take the distance between two target first feature points among the plurality of first feature points as a first unit distance;
take the distance between the two second feature points corresponding to the two target first feature points in one frame of the second facial image as a second unit distance;
acquire the ratio between the second unit distance and the first unit distance; and
divide the displacement of each of the plurality of second feature points in the two frames of the second facial image by the ratio, to obtain the target displacement of each vertex on the three-dimensional model.
In a possible implementation, the displacement of a second feature point in one frame of the second facial image refers to the displacement of the feature point in that frame relative to the same feature point in the first frame of the second facial image.
In a possible implementation, the apparatus further includes:
a correction module, configured to correct the displacements of the plurality of second feature points acquired in real time according to the face orientation data obtained by face detection.
In a possible implementation, the apparatus further includes:
a recording module, configured to record the change process of the three-dimensional image when a recording instruction is received.
It should be noted that, when the image processing apparatus provided by the foregoing embodiment performs image processing, the division into the foregoing functional modules is merely an example; in practical applications, the foregoing functions may be allocated to different functional modules as needed, that is, the internal structure of the electronic device may be divided into different functional modules to complete all or part of the functions described above. In addition, the image processing apparatus provided by the foregoing embodiment and the image processing method embodiments belong to the same concept; for the specific implementation process, refer to the method embodiments, and details are not repeated here.
FIG. 6 shows a structural block diagram of a terminal 600 provided by an exemplary embodiment of the present application. The terminal 600 may be a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop, or a desktop computer. The terminal 600 may also be called a user device, a portable terminal, a laptop terminal, a desktop terminal, or by other names.
Generally, the terminal 600 includes a processor 601 and a memory 602.
The processor 601 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 601 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 601 may also include a main processor and a coprocessor; the main processor is a processor for processing data in the awake state, also called a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 601 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 601 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 602 may include one or more computer-readable storage media, which may be non-transitory. The memory 602 may also include high-speed random access memory and non-volatile memory, such as one or more disk storage devices or flash storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 602 is used to store at least one instruction, the at least one instruction being executed by the processor 601 to implement the following operations of the image processing method provided by the method embodiments of the present application:
acquiring a three-dimensional image of a first facial image according to a plurality of first feature points of the first facial image, the plurality of first feature points being used to identify a first face in the first facial image;
acquiring, in real time, a plurality of second feature points of multiple frames of a second facial image, the plurality of second feature points being used to identify expression changes of a second face; and
synchronizing the three-dimensional image of the first facial image according to changes of the plurality of second feature points of the multiple frames of the second facial image, to simulate the expression changes of the second face.
In a possible implementation, the at least one instruction is loaded and executed by the processor 601 to implement the following operations:
performing face detection on the first facial image to obtain the plurality of first feature points of the first facial image;
acquiring the facial three-dimensional model of the first face according to the plurality of first feature points of the first facial image; and
mapping the first facial image as a texture onto the facial three-dimensional model according to the texture coordinates of the first facial image, to obtain the three-dimensional image of the first facial image.
In a possible implementation, the at least one instruction is loaded and executed by the processor 601 to implement the following operations:
converting the displacements of the plurality of second feature points between two frames of the second facial image acquired in real time into target displacements of a plurality of vertices on the three-dimensional image, the plurality of vertices in the three-dimensional image being in one-to-one correspondence with the plurality of first feature points in the first facial image; and
applying the target displacements of the plurality of vertices on the three-dimensional image.
In a possible implementation, the at least one instruction is loaded and executed by the processor 601 to implement the following operations:
taking the distance between two target first feature points among the plurality of first feature points as a first unit distance;
taking the distance between the two second feature points corresponding to the two target first feature points in one frame of the second facial image as a second unit distance;
acquiring the ratio between the second unit distance and the first unit distance; and
dividing the displacement of each of the plurality of second feature points in the two frames of the second facial image by the ratio, to obtain the target displacement of each vertex on the three-dimensional model.
In a possible implementation, the displacement of a second feature point in one frame of the second facial image refers to the displacement of the feature point in that frame relative to the same feature point in the first frame of the second facial image.
In a possible implementation, the displacement of a second feature point in one frame of the second facial image refers to the displacement of the feature point in that frame relative to the same feature point in the first frame of the second facial image.
In a possible implementation, the at least one instruction is loaded and executed by the processor 601 to implement the following operation:
correcting the displacements of the plurality of second feature points acquired in real time according to the face orientation data obtained by face detection.
In a possible implementation, the at least one instruction is loaded and executed by the processor 601 to implement the following operation:
recording the change process of the three-dimensional image when a recording instruction is received.
In some embodiments, the terminal 600 optionally further includes a peripheral device interface 603 and at least one peripheral device. The processor 601, the memory 602, and the peripheral device interface 603 may be connected through a bus or a signal line. Each peripheral device may be connected to the peripheral device interface 603 through a bus, a signal line, or a circuit board. Specifically, the peripheral device includes at least one of a radio frequency circuit 604, a display screen 605, a camera 606, an audio circuit 607, a positioning component 608, and a power supply 609.
The peripheral device interface 603 may be used to connect at least one I/O (Input/Output)-related peripheral device to the processor 601 and the memory 602. In some embodiments, the processor 601, the memory 602, and the peripheral device interface 603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 601, the memory 602, and the peripheral device interface 603 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 604 is configured to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 604 communicates with a communication network and other communication devices through electromagnetic signals. The radio frequency circuit 604 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 604 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 604 may communicate with other terminals through at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to, a metropolitan area network, various generations of mobile communication networks (2G, 3G, 4G, and 5G), a wireless local area network, and/or a WiFi (Wireless Fidelity) network. In some embodiments, the radio frequency circuit 604 may further include NFC (Near Field Communication)-related circuits, which is not limited in the embodiments of the present application.
The display screen 605 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 605 is a touch display screen, the display screen 605 also has the ability to collect touch signals on or above its surface. The touch signal may be input to the processor 601 as a control signal for processing. In this case, the display screen 605 may also be used to provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 605, disposed on the front panel of the terminal 600; in other embodiments, there may be at least two display screens 605, respectively disposed on different surfaces of the terminal 600 or in a folded design; in still other embodiments, the display screen 605 may be a flexible display screen, disposed on a curved surface or a folded surface of the terminal 600. The display screen 605 may even be set in a non-rectangular irregular shape, namely a special-shaped screen. The display screen 605 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera component 606 is used to capture images or video. Optionally, the camera component 606 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal, and the rear camera is disposed on the back of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, or a telephoto camera, so as to implement a background blur function through fusion of the main camera and the depth-of-field camera, panoramic shooting and VR (Virtual Reality) shooting through fusion of the main camera and the wide-angle camera, or other fused shooting functions. In some embodiments, the camera component 606 may further include a flash. The flash may be a monochrome-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuit 607 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment and convert them into electrical signals to be input to the processor 601 for processing, or input to the radio frequency circuit 604 for voice communication. For stereo collection or noise reduction purposes, there may be multiple microphones, respectively disposed at different parts of the terminal 600. The microphone may also be an array microphone or an omnidirectional collection microphone. The speaker is used to convert electrical signals from the processor 601 or the radio frequency circuit 604 into sound waves. The speaker may be a conventional film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert electrical signals not only into sound waves audible to humans but also into sound waves inaudible to humans for purposes such as ranging. In some embodiments, the audio circuit 607 may further include a headphone jack.
The positioning component 608 is used to locate the current geographic position of the terminal 600 to implement navigation or LBS (Location Based Service). The positioning component 608 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 609 is used to supply power to the components in the terminal 600. The power supply 609 may be alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 609 includes a rechargeable battery, the rechargeable battery may support wired charging or wireless charging. The rechargeable battery may also be used to support fast-charging technology.
In some embodiments, the terminal 600 further includes one or more sensors 610. The one or more sensors 610 include, but are not limited to, an acceleration sensor 611, a gyroscope sensor 612, a pressure sensor 613, a fingerprint sensor 614, an optical sensor 615, and a proximity sensor 616.
The acceleration sensor 611 can detect the magnitudes of acceleration on the three coordinate axes of the coordinate system established by the terminal 600. For example, the acceleration sensor 611 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 601 may control the display screen 605 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 611. The acceleration sensor 611 may also be used to collect game or user motion data.
The gyroscope sensor 612 can detect the body direction and rotation angle of the terminal 600, and may cooperate with the acceleration sensor 611 to collect the user's 3D actions on the terminal 600. Based on the data collected by the gyroscope sensor 612, the processor 601 can implement functions such as motion sensing (for example, changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 613 may be disposed on the side frame of the terminal 600 and/or the lower layer of the display screen 605. When the pressure sensor 613 is disposed on the side frame of the terminal 600, it can detect the user's grip signal on the terminal 600, and the processor 601 performs left/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 613. When the pressure sensor 613 is disposed at the lower layer of the display screen 605, the processor 601 controls the operability controls on the UI according to the user's pressure operation on the display screen 605. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 614 is used to collect the user's fingerprint; the processor 601 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 614, or the fingerprint sensor 614 identifies the user's identity according to the collected fingerprint. When the user's identity is identified as a trusted identity, the processor 601 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 614 may be disposed on the front, back, or side of the terminal 600. When a physical button or a manufacturer's Logo is provided on the terminal 600, the fingerprint sensor 614 may be integrated with the physical button or the manufacturer's Logo.
The optical sensor 615 is used to collect ambient light intensity. In one embodiment, the processor 601 may control the display brightness of the display screen 605 according to the ambient light intensity collected by the optical sensor 615. Specifically, when the ambient light intensity is high, the display brightness of the display screen 605 is increased; when the ambient light intensity is low, the display brightness of the display screen 605 is decreased. In another embodiment, the processor 601 may also dynamically adjust the shooting parameters of the camera component 606 according to the ambient light intensity collected by the optical sensor 615.
The proximity sensor 616, also called a distance sensor, is generally disposed on the front panel of the terminal 600. The proximity sensor 616 is used to collect the distance between the user and the front of the terminal 600. In one embodiment, when the proximity sensor 616 detects that the distance between the user and the front of the terminal 600 gradually decreases, the processor 601 controls the display screen 605 to switch from the screen-on state to the screen-off state; when the proximity sensor 616 detects that the distance between the user and the front of the terminal 600 gradually increases, the processor 601 controls the display screen 605 to switch from the screen-off state to the screen-on state.
本领域技术人员可以理解,图6中示出的结构并不构成对终端600的限定,可以包括比图示更多或更少的组件,或者组合某些组件,或者采用不同的组件布置。
In an exemplary embodiment, a computer-readable storage medium is further provided, for example, a memory including instructions, where the instructions can be executed by a processor in a terminal to complete the following operations of the image processing method:
acquiring, according to a plurality of first feature points of a first facial image, a three-dimensional image of the first facial image, where the plurality of first feature points are used to identify a first face in the first facial image;
acquiring, in real time, a plurality of second feature points of multiple frames of second facial images, where the plurality of second feature points are used to identify expression changes of a second face;
synchronizing the three-dimensional image of the first facial image according to changes of the plurality of second feature points of the multiple frames of second facial images, to simulate the expression changes of the second face.
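The following Python sketch is purely illustrative and not part of the claimed method: it drives one mesh vertex per first feature point with the frame-to-frame motion of the second feature points. The array shapes and the restriction to in-plane (x, y) motion are simplifying assumptions.

    import numpy as np

    def synchronize(mesh_vertices, second_frames):
        # mesh_vertices: (N, 3) array, one vertex per first feature point.
        # second_frames: list of (N, 2) second-feature-point arrays, one per
        # frame; the first frame serves as the reference for all displacements.
        reference = second_frames[0]
        for frame in second_frames[1:]:
            displacement = frame - reference      # expression motion per point
            deformed = mesh_vertices.copy()
            deformed[:, :2] += displacement       # applied in the x/y plane only
            yield deformed

    # Usage: three vertices follow three feature points over two frames.
    vertices = np.zeros((3, 3))
    frames = [np.zeros((3, 2)), np.array([[0.0, 1.0], [2.0, 0.0], [0.0, 0.0]])]
    for mesh in synchronize(vertices, frames):
        print(mesh)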
In a possible implementation manner, the at least one instruction is loaded and executed by the processor to implement the following operations:
performing face detection on the first facial image to obtain the plurality of first feature points of the first facial image;
acquiring a three-dimensional facial model of the first face according to the plurality of first feature points of the first facial image;
mapping, according to texture coordinates of the first facial image, the first facial image as a texture onto the three-dimensional facial model, to obtain the three-dimensional image of the first facial image.
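The texture-mapping step can be illustrated with a short sketch. Normalizing each feature-point position by the image size to obtain texture coordinates in [0, 1] is one common convention, assumed here for illustration only; the embodiment does not fix a particular convention.

    import numpy as np

    def texture_coordinates(points, image_width, image_height):
        # Map pixel positions, shape (N, 2), to texture coordinates in [0, 1].
        pts = np.asarray(points, dtype=float)
        u = pts[:, 0] / image_width
        v = pts[:, 1] / image_height
        return np.stack([u, v], axis=1)

    # Usage: two first feature points in a 640x480 facial image.
    print(texture_coordinates([[320, 240], [160, 120]], 640, 480))
    # -> [[0.5, 0.5], [0.25, 0.25]]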
In a possible implementation manner, the at least one instruction is loaded and executed by the processor to implement the following operations:
converting displacements of a plurality of second feature points in two frames of second facial images acquired in real time into target displacements of a plurality of vertices on the three-dimensional image, where the plurality of vertices in the three-dimensional image are in one-to-one correspondence with the plurality of first feature points in the first facial image;
applying the target displacements of the plurality of vertices on the three-dimensional image.
In a possible implementation manner, the at least one instruction is loaded and executed by the processor to implement the following operations:
taking a distance between two target first feature points among the plurality of first feature points as a first unit distance;
taking a distance between two second feature points corresponding to the two target first feature points in one frame of second facial image as a second unit distance;
acquiring a ratio between the second unit distance and the first unit distance;
dividing the displacement of each second feature point among the plurality of second feature points in the two frames of second facial images by the ratio, to obtain the target displacement of each vertex on the three-dimensional model.
In a possible implementation manner, the displacement of a second feature point in one frame of second facial image refers to the displacement of the feature point in the one frame of second facial image relative to the same feature point in the first frame of second facial image.
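The unit-distance normalization described above can be sketched concretely as follows. The choice of the two target first feature points (for example, two inner eye corners) is an assumption made for illustration.

    import numpy as np

    def target_displacements(first_pts, second_ref, second_now, i, j):
        # first_pts: (N, 2) first feature points; second_ref / second_now:
        # (N, 2) second feature points in the first and current frames.
        # i, j index the two target feature points defining the unit distance.
        first_unit = np.linalg.norm(first_pts[i] - first_pts[j])
        second_unit = np.linalg.norm(second_ref[i] - second_ref[j])
        ratio = second_unit / first_unit
        displacement = second_now - second_ref   # relative to the first frame
        return displacement / ratio              # per-vertex target displacement

    # Usage: the second face is twice as large, so its motion is halved.
    first = np.array([[0.0, 0.0], [1.0, 0.0]])
    ref = np.array([[0.0, 0.0], [2.0, 0.0]])
    now = np.array([[0.0, 0.4], [2.0, 0.0]])
    print(target_displacements(first, ref, now, 0, 1))  # [[0, 0.2], [0, 0]]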
In a possible implementation manner, the at least one instruction is loaded and executed by the processor to implement the following operation:
correcting, according to face orientation data obtained through face detection, the displacements of the plurality of second feature points acquired in real time.
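The embodiment does not fix a correction formula. One plausible illustration, assumed here, compensates only the in-plane roll component of the face orientation by rotating the displacements back through the detected roll angle; yaw and pitch would require a fuller three-dimensional treatment.

    import numpy as np

    def correct_for_roll(displacements, roll_radians):
        # Rotate (N, 2) displacements back by the detected in-plane roll angle
        # so that head rotation is not mistaken for expression motion.
        c, s = np.cos(-roll_radians), np.sin(-roll_radians)
        inverse_rotation = np.array([[c, -s], [s, c]])
        return displacements @ inverse_rotation.T

    # Usage: under a 90-degree roll, an observed horizontal displacement
    # corresponds to a vertical one in the upright face frame.
    print(correct_for_roll(np.array([[1.0, 0.0]]), np.radians(90)))  # ~[[0, -1]]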
In a possible implementation manner, the at least one instruction is loaded and executed by the processor to implement the following operation:
recording the change process of the three-dimensional image when a recording instruction is received.
For example, the computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Those of ordinary skill in the art can understand that all or some of the steps for implementing the foregoing embodiments may be completed by hardware, or by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing descriptions are merely preferred embodiments of the present application and are not intended to limit the present application. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present application shall fall within the protection scope of the present application.

Claims (20)

  1. An image processing method, for use in an electronic device, the method comprising:
    acquiring, according to a plurality of first feature points of a first facial image, a three-dimensional image of the first facial image, wherein the plurality of first feature points are used to identify a first face in the first facial image;
    acquiring, in real time, a plurality of second feature points of multiple frames of second facial images, wherein the plurality of second feature points are used to identify expression changes of a second face; and
    synchronizing the three-dimensional image of the first facial image according to changes of the plurality of second feature points of the multiple frames of second facial images, to simulate the expression changes of the second face.
  2. The method according to claim 1, wherein the acquiring, according to a plurality of first feature points of a first facial image, a three-dimensional image of the first facial image comprises:
    performing face detection on the first facial image to obtain the plurality of first feature points of the first facial image;
    acquiring a three-dimensional facial model of the first face according to the plurality of first feature points of the first facial image; and
    mapping, according to texture coordinates of the first facial image, the first facial image as a texture onto the three-dimensional facial model, to obtain the three-dimensional image of the first facial image.
  3. The method according to claim 1, wherein the synchronizing the three-dimensional image of the first facial image according to changes of the plurality of second feature points of the multiple frames of second facial images comprises:
    converting displacements of a plurality of second feature points in two frames of second facial images acquired in real time into target displacements of a plurality of vertices on the three-dimensional image, wherein the plurality of vertices in the three-dimensional image are in one-to-one correspondence with the plurality of first feature points in the first facial image; and
    applying the target displacements of the plurality of vertices on the three-dimensional image.
  4. The method according to claim 3, wherein the converting displacements of a plurality of second feature points in two frames of second facial images acquired in real time into target displacements of a plurality of vertices on the three-dimensional image comprises:
    taking a distance between two target first feature points among the plurality of first feature points as a first unit distance;
    taking a distance between two second feature points corresponding to the two target first feature points in one frame of second facial image as a second unit distance;
    acquiring a ratio between the second unit distance and the first unit distance; and
    dividing the displacement of each second feature point among the plurality of second feature points in the two frames of second facial images by the ratio, to obtain the target displacement of each vertex on the three-dimensional model.
  5. The method according to claim 4, wherein the displacement of a second feature point in one frame of second facial image refers to the displacement of the feature point in the one frame of second facial image relative to the same feature point in the first frame of second facial image.
  6. The method according to claim 3, wherein the displacement of a second feature point in one frame of second facial image refers to the displacement of the feature point in the one frame of second facial image relative to the same feature point in the first frame of second facial image.
  7. The method according to claim 3, further comprising:
    correcting, according to face orientation data obtained through face detection, the displacements of the plurality of second feature points acquired in real time.
  8. The method according to claim 1, further comprising:
    recording the change process of the three-dimensional image when a recording instruction is received.
  9. An electronic device, comprising a processor and a memory, the memory storing at least one instruction, the at least one instruction being loaded and executed by the processor to implement the following operations of an image processing method:
    acquiring, according to a plurality of first feature points of a first facial image, a three-dimensional image of the first facial image, wherein the plurality of first feature points are used to identify a first face in the first facial image;
    acquiring, in real time, a plurality of second feature points of multiple frames of second facial images, wherein the plurality of second feature points are used to identify expression changes of a second face; and
    synchronizing the three-dimensional image of the first facial image according to changes of the plurality of second feature points of the multiple frames of second facial images, to simulate the expression changes of the second face.
  10. The electronic device according to claim 9, wherein the at least one instruction is loaded and executed by the processor to implement the following operations:
    performing face detection on the first facial image to obtain the plurality of first feature points of the first facial image;
    acquiring a three-dimensional facial model of the first face according to the plurality of first feature points of the first facial image; and
    mapping, according to texture coordinates of the first facial image, the first facial image as a texture onto the three-dimensional facial model, to obtain the three-dimensional image of the first facial image.
  11. The electronic device according to claim 9, wherein the at least one instruction is loaded and executed by the processor to implement the following operations:
    converting displacements of a plurality of second feature points in two frames of second facial images acquired in real time into target displacements of a plurality of vertices on the three-dimensional image, wherein the plurality of vertices in the three-dimensional image are in one-to-one correspondence with the plurality of first feature points in the first facial image; and
    applying the target displacements of the plurality of vertices on the three-dimensional image.
  12. The electronic device according to claim 11, wherein the at least one instruction is loaded and executed by the processor to implement the following operations:
    taking a distance between two target first feature points among the plurality of first feature points as a first unit distance;
    taking a distance between two second feature points corresponding to the two target first feature points in one frame of second facial image as a second unit distance;
    acquiring a ratio between the second unit distance and the first unit distance; and
    dividing the displacement of each second feature point among the plurality of second feature points in the two frames of second facial images by the ratio, to obtain the target displacement of each vertex on the three-dimensional model.
  13. The electronic device according to claim 12, wherein the displacement of a second feature point in one frame of second facial image refers to the displacement of the feature point in the one frame of second facial image relative to the same feature point in the first frame of second facial image.
  14. The electronic device according to claim 11, wherein the displacement of a second feature point in one frame of second facial image refers to the displacement of the feature point in the one frame of second facial image relative to the same feature point in the first frame of second facial image.
  15. The electronic device according to claim 11, wherein the at least one instruction is loaded and executed by the processor to implement the following operation:
    correcting, according to face orientation data obtained through face detection, the displacements of the plurality of second feature points acquired in real time.
  16. The electronic device according to claim 9, wherein the at least one instruction is loaded and executed by the processor to implement the following operation:
    recording the change process of the three-dimensional image when a recording instruction is received.
  17. A computer-readable storage medium, storing at least one instruction, the at least one instruction being loaded and executed by a processor to implement the following operations of an image processing method:
    acquiring, according to a plurality of first feature points of a first facial image, a three-dimensional image of the first facial image, wherein the plurality of first feature points are used to identify a first face in the first facial image;
    acquiring, in real time, a plurality of second feature points of multiple frames of second facial images, wherein the plurality of second feature points are used to identify expression changes of a second face; and
    synchronizing the three-dimensional image of the first facial image according to changes of the plurality of second feature points of the multiple frames of second facial images, to simulate the expression changes of the second face.
  18. The computer-readable storage medium according to claim 17, wherein the at least one instruction is loaded and executed by the processor to implement the following operations:
    performing face detection on the first facial image to obtain the plurality of first feature points of the first facial image;
    acquiring a three-dimensional facial model of the first face according to the plurality of first feature points of the first facial image; and
    mapping, according to texture coordinates of the first facial image, the first facial image as a texture onto the three-dimensional facial model, to obtain the three-dimensional image of the first facial image.
  19. The computer-readable storage medium according to claim 17, wherein the at least one instruction is loaded and executed by the processor to implement the following operations:
    converting displacements of a plurality of second feature points in two frames of second facial images acquired in real time into target displacements of a plurality of vertices on the three-dimensional image, wherein the plurality of vertices in the three-dimensional image are in one-to-one correspondence with the plurality of first feature points in the first facial image; and
    applying the target displacements of the plurality of vertices on the three-dimensional image.
  20. The computer-readable storage medium according to claim 19, wherein the at least one instruction is loaded and executed by the processor to implement the following operations:
    taking a distance between two target first feature points among the plurality of first feature points as a first unit distance;
    taking a distance between two second feature points corresponding to the two target first feature points in one frame of second facial image as a second unit distance;
    acquiring a ratio between the second unit distance and the first unit distance; and
    dividing the displacement of each second feature point among the plurality of second feature points in the two frames of second facial images by the ratio, to obtain the target displacement of each vertex on the three-dimensional model.
PCT/CN2019/073995 2018-02-12 2019-01-30 Image processing method, electronic device, and storage medium WO2019154231A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/897,341 US11436779B2 (en) 2018-02-12 2020-06-10 Image processing method, electronic device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810147314.2 2018-02-12
CN201810147314.2A CN108256505A (zh) 2018-02-12 2018-02-12 Image processing method and apparatus

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/897,341 Continuation US11436779B2 (en) 2018-02-12 2020-06-10 Image processing method, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
WO2019154231A1 true WO2019154231A1 (zh) 2019-08-15

Family

ID=62745160

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/073995 WO2019154231A1 (zh) 2018-02-12 2019-01-30 图像处理方法、电子设备及存储介质

Country Status (3)

Country Link
US (1) US11436779B2 (zh)
CN (1) CN108256505A (zh)
WO (1) WO2019154231A1 (zh)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108256505A (zh) 2018-02-12 2018-07-06 腾讯科技(深圳)有限公司 Image processing method and apparatus
US10706577B2 (en) * 2018-03-06 2020-07-07 Fotonation Limited Facial features tracker with advanced training for natural rendering of human faces in real-time
US11106898B2 (en) * 2018-03-19 2021-08-31 Buglife, Inc. Lossy facial expression training data pipeline
CN108985241B (zh) * 2018-07-23 2023-05-02 腾讯科技(深圳)有限公司 Image processing method and apparatus, computer device, and storage medium
CN110163054B (zh) * 2018-08-03 2022-09-27 腾讯科技(深圳)有限公司 Method and apparatus for generating a three-dimensional face image
CN109147017A (zh) * 2018-08-28 2019-01-04 百度在线网络技术(北京)有限公司 Dynamic image generation method, apparatus, device, and storage medium
CN109523628A (zh) * 2018-11-13 2019-03-26 盎锐(上海)信息科技有限公司 Image generation apparatus and method
CN109218700A (zh) * 2018-11-13 2019-01-15 盎锐(上海)信息科技有限公司 Image processing apparatus and method
CN110163063B (zh) * 2018-11-28 2024-05-28 腾讯数码(天津)有限公司 Expression processing method and apparatus, computer-readable storage medium, and computer device
JP7218215B2 (ja) * 2019-03-07 2023-02-06 株式会社日立製作所 Image diagnosis apparatus, image processing method, and program
CN111445417B (zh) * 2020-03-31 2023-12-19 维沃移动通信有限公司 Image processing method and apparatus, electronic device, and medium
CN111530086B (zh) * 2020-04-17 2022-04-22 完美世界(重庆)互动科技有限公司 Method and apparatus for generating expressions of a game character
CN113240784B (zh) * 2021-05-25 2024-01-02 北京达佳互联信息技术有限公司 Image processing method and apparatus, terminal, and storage medium
CN113345079B (zh) * 2021-06-18 2024-02-27 厦门美图宜肤科技有限公司 Facial three-dimensional model visualization method and apparatus, electronic device, and storage medium
CN114973349A (zh) * 2021-08-20 2022-08-30 腾讯科技(深圳)有限公司 Facial image processing method and training method for a facial image processing model

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150235372A1 (en) * 2011-11-11 2015-08-20 Microsoft Technology Licensing, Llc Computing 3d shape parameters for face animation
CN106709975A (zh) * 2017-01-11 2017-05-24 山东财经大学 一种交互式三维人脸表情动画编辑方法、***及扩展方法
CN106778628A (zh) * 2016-12-21 2017-05-31 张维忠 一种基于tof深度相机的面部表情捕捉方法
CN106920274A (zh) * 2017-01-20 2017-07-04 南京开为网络科技有限公司 移动端2d关键点快速转换为3d融合变形的人脸建模方法
CN108256505A (zh) * 2018-02-12 2018-07-06 腾讯科技(深圳)有限公司 图像处理方法及装置

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007213378A (ja) * 2006-02-10 2007-08-23 Fujifilm Corp Specific expression face detection method, imaging control method and apparatus, and program
US8219438B1 (en) * 2008-06-30 2012-07-10 Videomining Corporation Method and system for measuring shopper response to products based on behavior and facial expression
KR101828201B1 (ko) * 2014-06-20 2018-02-09 인텔 코포레이션 3d 얼굴 모델 재구성 장치 및 방법
US10089522B2 (en) * 2015-09-29 2018-10-02 BinaryVR, Inc. Head-mounted display with facial expression detecting capability
CN106599817A (zh) * 2016-12-07 2017-04-26 腾讯科技(深圳)有限公司 Face replacement method and apparatus
US10332312B2 (en) * 2016-12-25 2019-06-25 Facebook, Inc. Shape prediction model compression for face alignment
US10572720B2 (en) * 2017-03-01 2020-02-25 Sony Corporation Virtual reality-based apparatus and method to generate a three dimensional (3D) human face model using image and depth data
CN107680069B (zh) * 2017-08-30 2020-09-11 歌尔股份有限公司 Image processing method, apparatus, and terminal device
US11114086B2 (en) * 2019-01-18 2021-09-07 Snap Inc. Text and audio-based real-time face reenactment
US11049310B2 (en) * 2019-01-18 2021-06-29 Snap Inc. Photorealistic real-time portrait animation
US11113859B1 (en) * 2019-07-10 2021-09-07 Facebook Technologies, Llc System and method for rendering three dimensional face model based on audio stream and image data

Also Published As

Publication number Publication date
US20200302670A1 (en) 2020-09-24
US11436779B2 (en) 2022-09-06
CN108256505A (zh) 2018-07-06

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 19751600
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 19751600
    Country of ref document: EP
    Kind code of ref document: A1