CN108830901B - Image processing method and electronic equipment - Google Patents


Info

Publication number
CN108830901B
CN108830901B (application CN201810651981.4A)
Authority
CN
China
Prior art keywords
image
dimensional
eye
eye model
processed
Prior art date
Legal status
Active
Application number
CN201810651981.4A
Other languages
Chinese (zh)
Other versions
CN108830901A (en)
Inventor
唐文峰 (Tang Wenfeng)
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201810651981.4A
Publication of CN108830901A
Application granted
Publication of CN108830901B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/06 Topological mapping of higher dimensional structures onto lower dimensional surfaces
    • G06T 5/00 Image enhancement or restoration
    • G06T 7/50 Depth or shape recovery

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an image processing method and electronic equipment. The method comprises the following steps: acquiring eye position information from an image to be processed; according to the eye position information, extracting eye depth features from a depth image corresponding to the image to be processed; constructing a first three-dimensional eye model according to the eye depth features; optimizing the first three-dimensional eye model to obtain a second three-dimensional eye model; and performing dimensionality reduction transformation on the second three-dimensional eye model to obtain a processed image. The embodiment of the invention makes the eye transitions of a photo well coordinated when the photo is taken, giving a good beautification effect.

Description

Image processing method and electronic equipment
Technical Field
The present invention relates to the field of communications technologies, and in particular, to an image processing method and an electronic device.
Background
With the rapid development of communication technology, electronic devices have become widespread and are now an indispensable part of people's lives. As the photographing functions of electronic devices improve, more and more users want to capture a true and beautiful image of themselves, so beautification has become a trend on electronic devices. Because the eyes reflect a person's spirit, most users want to capture a pair of beautiful eyes when taking a picture. Existing electronic devices mainly implement eye beautification through functions such as eye enlargement, and a photo processed this way visually appears to have locally magnified eyes. However, after such eye-beautification processing, the eye transitions in the photo are poorly coordinated, so the beautification effect is poor.
Disclosure of Invention
The present invention provides an image processing method and an electronic device, to solve the problem of a poor beautification effect caused by poorly coordinated eye transitions in photos taken by electronic devices.
In order to solve the technical problem, the invention is realized as follows: an image processing method applied to an electronic device includes:
acquiring eye position information from an image to be processed;
according to the eye position information, extracting eye depth features from a depth image corresponding to the image to be processed;
constructing a first three-dimensional eye model according to the eye depth features;
optimizing the first three-dimensional eye model to obtain a second three-dimensional eye model;
and performing dimensionality reduction transformation on the second three-dimensional eye model to obtain a processed image.
In a first aspect, an embodiment of the present invention provides an image processing method, including:
acquiring eye position information from an image to be processed;
according to the eye position information, extracting eye depth features from a depth image corresponding to the image to be processed;
constructing a first three-dimensional eye model according to the eye depth features;
optimizing the first three-dimensional eye model to obtain a second three-dimensional eye model;
and performing dimensionality reduction transformation on the second three-dimensional eye model to obtain a processed image.
In a second aspect, an embodiment of the present invention further provides an electronic device, including:
the acquisition module is used for acquiring eye position information from the image to be processed;
the extraction module is used for extracting eye depth features from the depth image corresponding to the image to be processed according to the eye position information;
the construction module is used for constructing a first three-dimensional eye model according to the eye depth characteristics;
the optimization module is used for optimizing the first three-dimensional eye model to obtain a second three-dimensional eye model;
and the transformation module is used for carrying out dimensionality reduction transformation on the second three-dimensional eye model to obtain a processed image.
In a third aspect, an embodiment of the present invention further provides an electronic device, including: a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of the image processing method provided by the embodiments of the present invention.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the steps in the image processing method provided by the embodiment of the present invention.
According to the embodiment of the invention, eye position information is obtained from the image to be processed; eye depth features are extracted from a depth image corresponding to the image to be processed according to the eye position information; a first three-dimensional eye model is constructed according to the eye depth features; the first three-dimensional eye model is optimized to obtain a second three-dimensional eye model; and dimensionality reduction transformation is performed on the second three-dimensional eye model to obtain a processed image, so that the eye transitions of the photo are well coordinated during photographing and the beautification effect is good.
Drawings
Fig. 1 is a flowchart illustrating an image processing method according to an embodiment of the present invention;
FIG. 2 is a flow chart of another image processing method according to an embodiment of the present invention;
FIG. 3 is a flow chart of another image processing method according to an embodiment of the invention;
FIG. 4 is a flowchart illustrating a glasses image removal method according to an embodiment of the present invention;
FIG. 5 is a flow chart of another image processing method according to an embodiment of the invention;
fig. 6 is a block diagram of an electronic device according to an embodiment of the present invention;
FIG. 7 is a block diagram of another electronic device provided by an embodiment of the invention;
FIG. 8 is a block diagram of another electronic device provided by an embodiment of the invention;
fig. 9 is a block diagram of another electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of an image processing method according to an embodiment of the present invention, where the method is applied to an electronic device, and as shown in fig. 1, the method includes the following steps:
step 101, obtaining eye position information from an image to be processed.
Wherein, obtaining eye position information from the image to be processed comprises: the method comprises the steps of shooting an original image and a corresponding depth map by using electronic equipment, obtaining a face position in the original image by using a face detection method, and obtaining eye position information by using an eye detection method according to the face position.
The face detection method may use a method based on template matching, a method based on singular value features, a method based on eigen face, a method based on integral image features, and the like, which is not limited in the embodiment of the present invention.
The eye detection method may use a Hough transform circle detection method, a correlation filter bank based method, a template matching based method, and the like, which is not limited in the embodiment of the present invention.
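As one illustration of the Hough-transform option above, circle detection reduces to a voting scheme: every edge pixel votes for all candidate centers lying one radius away, and the accumulator maximum marks the most likely iris center. The sketch below is a minimal, fixed-radius variant in Python with NumPy; the function name and the single-radius simplification are ours, not taken from the patent.

```python
import numpy as np

def hough_circle_centers(edge_points, radius, shape, n_angles=64):
    """Vote for circle centers at one fixed radius (simplified Hough transform).

    edge_points: iterable of (row, col) edge pixels; shape: (H, W) accumulator size.
    The accumulator's maximum marks the most likely circle (e.g. iris) center.
    """
    acc = np.zeros(shape, dtype=np.int32)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    for r, c in edge_points:
        # Each edge pixel votes for every center lying `radius` away from it.
        rows = np.rint(r - radius * np.sin(thetas)).astype(int)
        cols = np.rint(c - radius * np.cos(thetas)).astype(int)
        ok = (rows >= 0) & (rows < shape[0]) & (cols >= 0) & (cols < shape[1])
        np.add.at(acc, (rows[ok], cols[ok]), 1)
    return acc
```

A full detector would also sweep `radius` over a plausible iris range and take `edge_points` from an edge detector.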
An electronic device with a binocular camera may be used to capture the original image and the corresponding depth map, or any other camera capable of capturing an original image and a corresponding depth map may be used; this is not limited in the present invention.
And step 102, extracting eye depth features from the depth image corresponding to the image to be processed according to the eye position information.
After the eye position information is obtained in step 101, an eye image of the depth map is extracted according to the eye position information, so as to obtain an eye depth feature.
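In practice, step 102 amounts to cropping the depth map with the bounding box returned by the eye detector. A minimal sketch; the `(top, left, height, width)` box format is an assumption for illustration, not specified by the patent.

```python
import numpy as np

def extract_eye_depth(depth_map, eye_box):
    """Crop the eye region out of the depth map.

    eye_box: (top, left, height, width) as returned by the eye detector
    (this box format is an illustrative assumption).
    """
    top, left, h, w = eye_box
    return depth_map[top:top + h, left:left + w].copy()
```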
Step 103, constructing a first three-dimensional eye model according to the eye depth features.
Each pixel in the depth map represents the distance between the imaged object and the lens. Each pixel can be represented as p(x, y, z), where x and y are the pixel's horizontal and vertical coordinates (its two-dimensional spatial information) and z is the object's depth. Constructing the first three-dimensional eye model from the eye depth information may include: plotting points in three-dimensional space, in sequence, using the spatial information of each pixel in the eye region of the depth map, to form the first three-dimensional eye model.
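Following the p(x, y, z) convention above, building the model can be sketched as lifting every pixel of the eye depth patch to a 3-D point. Function and argument names are illustrative, not from the patent.

```python
import numpy as np

def depth_patch_to_points(depth_patch, top=0, left=0):
    """Lift each depth pixel to a 3-D point p = (x, y, z).

    x and y are the pixel's image coordinates (offset by the patch origin),
    z is the stored depth value, matching the p(x, y, z) convention above.
    """
    h, w = depth_patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([(xs + left).ravel(),
                    (ys + top).ravel(),
                    depth_patch.ravel()], axis=1)
    return pts.astype(np.float64)  # shape (h*w, 3)
```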
And 104, optimizing the first three-dimensional eye model to obtain a second three-dimensional eye model.
Optimization processing is performed on the first three-dimensional eye model obtained in step 103, for example by using a machine learning method such as a support vector machine, to obtain the second three-dimensional eye model; the embodiment of the present invention is not limited in this respect.
And 105, performing dimensionality reduction transformation on the second three-dimensional eye model to obtain a processed image.
Performing dimensionality reduction transformation on the second three-dimensional eye model to obtain a processed image may include: converting the second three-dimensional eye model into a two-dimensional eye model, and adjusting the eye image of the image to be processed by using the two-dimensional eye model, which is not limited in the embodiment of the invention.
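The dimensionality-reduction step can be illustrated as the reverse of the lifting described earlier: drop the model back onto the image plane, keeping the nearest (smallest-depth) point per pixel. This is a hedged sketch assuming a simple orthographic projection, which the patent does not mandate.

```python
import numpy as np

def project_points_to_image(points, shape):
    """Orthographic dimensionality reduction: drop z and rasterize the model
    back into a 2-D eye image (the smallest depth wins each pixel)."""
    img = np.full(shape, np.inf)
    for x, y, z in points:
        c, r = int(round(x)), int(round(y))
        if 0 <= r < shape[0] and 0 <= c < shape[1]:
            img[r, c] = min(img[r, c], z)
    img[np.isinf(img)] = 0.0  # pixels no point projected onto
    return img
```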
Through the above steps, the eye image can be optimized using its depth features. Compared with the prior art, which beautifies the eye image using only its two-dimensional features, optimizing with the eye image's three-dimensional features makes the eye transitions harmonious and natural and yields a better beautification effect.
In addition, the electronic device may be a mobile phone, a tablet computer (Tablet Personal Computer), a laptop computer, a personal digital assistant (PDA), a mobile Internet device (MID), a wearable device, a computer, a notebook computer, or the like.
According to the embodiment of the invention, the eye position information is obtained from the image to be processed; according to the eye position information, eye depth features are extracted from a depth image corresponding to the image to be processed; a first three-dimensional eye model is constructed according to the eye depth features; the first three-dimensional eye model is optimized to obtain a second three-dimensional eye model; and dimensionality reduction transformation is performed on the second three-dimensional eye model to obtain a processed image.
Referring to fig. 2, fig. 2 is a flowchart of an eye image processing method according to an embodiment of the present invention, where the method is applied to an electronic device, and as shown in fig. 2, the method includes the following steps:
step 201, obtaining eye position information from an image to be processed.
Step 201 in this embodiment is the same as step 101 in the first embodiment of the present invention, and is not described herein again.
Step 202, according to the eye position information, extracting eye depth features from a depth image corresponding to the image to be processed.
Step 202 in this embodiment is the same as step 102 in the first embodiment of the present invention, and is not described herein again.
Step 203, constructing a first three-dimensional eye model according to the eye depth features.
Step 203 in this embodiment is the same as step 103 in the first embodiment of the present invention, and is not described herein again.
Step 204, detecting three-dimensional key point characteristics of the first three-dimensional eye model, and optimizing the three-dimensional key point characteristics by adopting a preset neural network to obtain a second three-dimensional eye model.
Performing machine learning on the three-dimensional key point features through the preset neural network to obtain optimized eye features; and adjusting the first three-dimensional eye model according to the three-dimensional key point characteristics and the optimized eye characteristics to obtain the second three-dimensional eye model.
And optimizing the first three-dimensional eye model by adopting an adjusting method according to the three-dimensional key point characteristics and the optimized eye characteristics so as to obtain the second three-dimensional eye model. The adjustment method may be a warp (triangular transform) method or the like, and the present invention is not particularly limited thereto.
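A triangulation-based warp of the kind referenced above ("warp (triangular transform)") reduces, per triangle, to solving the affine map that carries the source vertices onto the adjusted vertices. A minimal sketch with illustrative names:

```python
import numpy as np

def triangle_affine(src_tri, dst_tri):
    """Solve the 2x3 affine transform that maps one triangle onto another,
    the per-triangle core of a triangulation-based warp.

    src_tri, dst_tri: 3x2 arrays of (x, y) vertices.
    """
    src = np.asarray(src_tri, dtype=float)
    dst = np.asarray(dst_tri, dtype=float)
    A = np.hstack([src, np.ones((3, 1))])  # rows [x, y, 1]
    # Three vertices give an exactly determined system per output coordinate.
    M = np.linalg.solve(A, dst).T          # M is 2x3
    return M

def apply_affine(M, pt):
    """Apply M to a point (x, y) in homogeneous form."""
    x, y = pt
    return M @ np.array([x, y, 1.0])
```

A full warp tessellates the key points (e.g. by Delaunay triangulation) and applies one such map inside each triangle.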
Alternatively, the preset neural network may be obtained by: and establishing a neural network, and performing optimization training on the neural network by adopting a training sample set to obtain the preset neural network for optimizing the three-dimensional key point characteristics of the first three-dimensional eye model.
A neural network is an important branch of machine learning; it has an input layer, an output layer, and hidden layers. Input feature vectors are transformed through the hidden layers to reach the output layer, where the classification result is obtained. Neural networks commonly used in machine learning include deep neural networks, convolutional neural networks, recurrent neural networks, and the like.
Taking a deep neural network as an example, the training sample set includes a first training sample subset and a second training sample subset, where the first subset contains images with deformed or distorted eyes and the second subset contains images whose eyes have been manually repaired. Using the deep neural network as the training model, the two subsets are input into the network, which extracts the three-dimensional eye key information of the images during training and learns the mapping between images with deformed or distorted eyes and images with manually repaired eyes; this completes the optimization training of the deep neural network. Inputting an eye-distorted test image into the trained network then yields the optimized eye image.
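On a toy scale, the distorted-to-repaired mapping that the network learns can be illustrated with an ordinary least-squares fit standing in for the deep neural network; this linear stand-in is our simplification, not the patent's method.

```python
import numpy as np

def fit_keypoint_mapping(distorted, repaired):
    """Fit a linear map from distorted to repaired key point vectors.

    A least-squares stand-in for the deep network described above: each row
    pairs a flattened distorted key point set with its manually repaired form.
    """
    distorted = np.asarray(distorted, dtype=float)
    X = np.hstack([distorted, np.ones((len(distorted), 1))])  # bias column
    W, *_ = np.linalg.lstsq(X, np.asarray(repaired, dtype=float), rcond=None)
    return W

def repair_keypoints(W, sample):
    """Map one distorted key point vector through the learned transform."""
    return np.hstack([np.asarray(sample, dtype=float), 1.0]) @ W
```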
In the embodiment of the present invention, a deep neural network is taken as an example, and other neural networks are also applicable to the embodiment of the present invention, which is not particularly limited in this embodiment of the present invention.
The preset neural network may be obtained in the above manner, or may be directly obtained by a cloud or a server, which is not specifically limited in the embodiment of the present invention.
Optionally, the obtaining the optimized eye feature by performing machine learning on the three-dimensional key point feature through the preset neural network may include: inputting the three-dimensional key point characteristics into a trained preset neural network, and obtaining optimized eye characteristics according to the mapping relation obtained by training.
Optionally, the three-dimensional key point feature may be a corner feature, and the method for detecting the three-dimensional key point feature of the first three-dimensional eye model may be corner detection based on a gray-scale image, corner detection based on a binary image, corner detection based on a contour curve, and the like, which is not specifically limited in this embodiment of the present invention.
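Gray-scale corner detection, the first option listed above, is classically done with the Harris response: a corner has strong gradients in two directions (large structure-tensor determinant), while an edge scores near zero or negative. A compact NumPy sketch; the window size and k are conventional defaults, not values from the patent.

```python
import numpy as np

def harris_response(img, k=0.04, win=1):
    """Harris corner response on a grayscale image.

    Corners have strong gradients in two directions (large structure-tensor
    determinant); edges score near zero or negative.
    """
    img = img.astype(float)
    Iy, Ix = np.gradient(img)              # gradients along rows and columns
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):
        # Sum over a (2*win+1)^2 neighbourhood, clipped at the borders.
        out = np.zeros_like(a)
        h, w = a.shape
        for r in range(h):
            for c in range(w):
                out[r, c] = a[max(0, r - win):r + win + 1,
                              max(0, c - win):c + win + 1].sum()
        return out

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace * trace
```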
Step 205, converting the second three-dimensional eye model into a two-dimensional eye model, and adjusting the eye image of the image to be processed by using the two-dimensional eye model.
Optionally, as shown in fig. 3, an image processing method first obtains the image to be processed and the corresponding depth map, detects the face position in the image to be processed using a face detection method, and then locates the eye position within the face using an eye position detection method. Corresponding eye depth features are extracted from the depth map according to the eye position information, and a three-dimensional eye model is then constructed. The three-dimensional eye model is optimized, the optimized model is converted into a two-dimensional eye model, and the two-dimensional eye model is used to adjust the eye image in the image to be processed. The implementation of the specific steps in this embodiment is the same as the corresponding steps in the foregoing embodiments, and details are not repeated here.
The adjusting the eye image of the image to be processed by using the two-dimensional eye model as shown in fig. 3 specifically includes: optimizing the two-dimensional eye model according to the optimized eye features and the three-dimensional key point features to obtain a second two-dimensional eye model; and adjusting the eye image of the image to be processed by using the second two-dimensional eye model.
Performing optimization processing on the two-dimensional eye model according to the optimized eye features and the three-dimensional key point features to obtain a second two-dimensional eye model may include: and converting the optimized eye features and the three-dimensional key point features into corresponding two-dimensional features, and adjusting the two-dimensional eye model by adopting an adjusting method according to corresponding two-dimensional information to obtain the second two-dimensional eye model. The adjustment method may be a warp (triangular transform) method or the like, and the present invention is not particularly limited thereto.
Optionally, the image to be processed includes an eyeglass image, and before the obtaining of the eye depth information of the image to be processed, the method further includes: and identifying the distribution position of the glasses image in the image to be processed. The method and the device for removing the glasses can provide a method for removing the glasses for a user wearing the glasses, so that personalized requirements of different users are met.
A specific method for implementing the above steps may be as shown in the flowchart of fig. 4. Before the flow shown in fig. 4 runs, a large training sample set containing glasses is first constructed, and corner features are extracted to train a classifier, which may be a Haar classifier. After the Haar classifier is trained, the flow shown in fig. 4 begins: the image to be processed is acquired, the trained Haar classifier extracts the corner features of the glasses image in the image to be processed, and the glasses frame of the glasses image is detected. The classifier identifies the frame inflection points among the corner features, and the position of the glasses frame is obtained from these inflection points. The rough distribution position of the glasses image is then estimated from the position of the frame. The image corresponding to this distribution position is input into a skin color segmentation model, and pixels of non-skin-color regions are marked as the glasses region, yielding a second distribution position of the glasses; the glasses image is then output. Finally, a repair method can be invoked according to the second distribution position to remove the glasses image.
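The Haar classifier mentioned above thresholds rectangle features that are computed in constant time from an integral image. The fragment below sketches only that feature computation, not the trained cascade; the two-rectangle feature choice is illustrative.

```python
import numpy as np

def integral_image(img):
    """Summed-area table: any rectangle sum in O(1), the basis of Haar features."""
    return img.astype(np.int64).cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, h, w):
    """Sum of the h-by-w rectangle whose top-left corner is (top, left)."""
    total = ii[top + h - 1, left + w - 1]
    if top > 0:
        total -= ii[top - 1, left + w - 1]
    if left > 0:
        total -= ii[top + h - 1, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

def haar_two_rect_vertical(ii, top, left, h, w):
    """Two-rectangle Haar feature: top half minus bottom half, the kind of
    edge response a frame/no-frame classifier thresholds."""
    half = h // 2
    return (rect_sum(ii, top, left, half, w)
            - rect_sum(ii, top + half, left, half, w))
```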
Optionally, skin color information of different ethnicities in a large number of scenes is collected, and a skin color segmentation model is constructed from it. Inputting any image pixel into the model yields the probability p that the pixel is skin: the larger p is, the more likely the pixel belongs to a person's skin region; the smaller p is, the more likely it belongs to a non-skin region. Since glasses belong to a non-skin region, the image pixels at the glasses' distribution position are input into the skin color segmentation model, the corresponding probability values p are obtained, each pixel is judged to belong to the skin region or not according to p, and the pixels not belonging to the skin region are marked as the glasses region.
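A minimal stand-in for such a skin-color segmentation model is a single Gaussian fitted to sampled skin colors: p falls off with the Mahalanobis distance from the skin mean, and low-p pixels inside the glasses' distribution position are marked as the glasses region. The Gaussian form and the threshold are assumptions for illustration; the patent does not specify the model's form.

```python
import numpy as np

def fit_skin_model(skin_pixels):
    """Fit a single Gaussian to sampled skin colors (a minimal stand-in for
    the skin color segmentation model described above)."""
    skin = np.asarray(skin_pixels, dtype=float)
    mean = skin.mean(axis=0)
    cov = np.cov(skin.T) + 1e-6 * np.eye(skin.shape[1])  # regularized
    return mean, np.linalg.inv(cov)

def skin_probability(pixel, mean, inv_cov):
    """Unnormalized probability p that a pixel is skin; low p suggests a
    non-skin (e.g. glasses) region."""
    d = np.asarray(pixel, dtype=float) - mean
    return float(np.exp(-0.5 * d @ inv_cov @ d))

def mark_glasses(pixels, mean, inv_cov, thresh=0.1):
    """Label pixels whose skin probability falls below `thresh` as glasses."""
    return np.array([skin_probability(p, mean, inv_cov) < thresh for p in pixels])
```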
Optionally, a specific flow of the method for processing the eye image when the user wears glasses is shown in fig. 5, the image to be processed and the corresponding depth map are first obtained, the glasses image in the image to be processed is removed by using the glasses removing method shown in fig. 4, and then the to-be-processed image from which the glasses image is removed and the depth map are optimized by using the eye image processing method in the embodiment, so that the eye beautifying of the user wearing glasses can be realized.
According to the embodiment of the invention, the three-dimensional key point characteristics are optimized through the preset neural network to obtain a second three-dimensional eye model, the second three-dimensional eye model is converted into a two-dimensional eye model, and the two-dimensional eye model is used for adjusting the eye image of the image to be processed, so that the eye transition of the picture is coordinated and natural during photographing, and the beautifying effect is better.
Referring to fig. 6, fig. 6 is a structural diagram of an electronic device according to an embodiment of the present invention, and as shown in fig. 6, the electronic device 300 includes:
an obtaining module 301, configured to obtain eye position information from an image to be processed;
an extracting module 302, configured to extract an eye depth feature from a depth image corresponding to the image to be processed according to the eye position information;
a building module 303, configured to build a first three-dimensional eye model according to the eye depth feature;
an optimization module 304, configured to perform optimization processing on the first three-dimensional eye model to obtain a second three-dimensional eye model;
a transformation module 305, configured to perform dimension reduction transformation on the second three-dimensional eye model to obtain a processed image.
As shown in fig. 7, the electronic device 300 further includes:
the training module 306 is configured to establish a neural network, and perform optimization training on the neural network by using a training sample set to obtain the preset neural network for optimizing the three-dimensional key point features of the first three-dimensional eye model.
Optionally, the optimization module 304 is configured to detect a three-dimensional key point feature of the first three-dimensional eye model, and perform optimization processing on the three-dimensional key point feature by using a preset neural network to obtain a second three-dimensional eye model.
Optionally, the transformation module 305 is configured to convert the second three-dimensional eye model into a two-dimensional eye model, and adjust the eye image of the image to be processed by using the two-dimensional eye model.
Optionally, the optimization module 304 is configured to perform machine learning on the three-dimensional key point features through the preset neural network to obtain optimized eye features; adjusting the first three-dimensional eye model according to the three-dimensional key point characteristics and the optimized eye characteristics to obtain a second three-dimensional eye model;
optionally, the transformation module 305 is configured to perform optimization processing on the two-dimensional eye model according to the optimized eye feature and the three-dimensional keypoint feature to obtain a second two-dimensional eye model; and adjusting the eye image of the image to be processed by using the second two-dimensional eye model.
Optionally, the image to be processed includes an eyeglass image, as shown in fig. 8, the electronic device further includes:
a glasses removing module 307, configured to identify distribution positions of the glasses images in the image to be processed; and removing the glasses in the image to be processed according to the distribution position.
The electronic device provided in the embodiment of the present invention can implement each process implemented by the electronic device in the method embodiments of fig. 1 to fig. 5; to avoid repetition, details are not described here again. The electronic device makes the eye transitions of a photo well coordinated when taking pictures, giving a good beautification effect.
Fig. 9 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present invention.
The electronic device 900 includes, but is not limited to: a radio frequency unit 901, a network module 902, an audio output unit 903, an input unit 904, a sensor 905, a display unit 906, a user input unit 907, an interface unit 908, a memory 909, a processor 910, and a power supply 911. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 9 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, a computer, a notebook computer, and the like.
The processor 910 is configured to obtain eye position information from an image to be processed;
according to the eye position information, extracting eye depth features from a depth image corresponding to the image to be processed;
constructing a first three-dimensional eye model according to the eye depth features;
optimizing the first three-dimensional eye model to obtain a second three-dimensional eye model;
and performing dimensionality reduction transformation on the second three-dimensional eye model to obtain a processed image.
Optionally, the processor 910 is configured to detect three-dimensional key point features of the first three-dimensional eye model and optimize the three-dimensional key point features using a preset neural network to obtain the second three-dimensional eye model.
Optionally, the processor 910 is configured to establish a neural network and perform optimization training on it with a training sample set, to obtain the preset neural network for optimizing the three-dimensional key point features of the first three-dimensional eye model.
Optionally, the processor 910 is configured to convert the second three-dimensional eye model into a two-dimensional eye model and adjust the eye image of the image to be processed using the two-dimensional eye model.
The processor 910 performs machine learning on the three-dimensional key point features through the preset neural network to obtain optimized eye features, and adjusts the first three-dimensional eye model according to the three-dimensional key point features and the optimized eye features to obtain the second three-dimensional eye model.
The processor 910 is further configured to optimize the two-dimensional eye model according to the optimized eye features and the three-dimensional key point features to obtain a second two-dimensional eye model.
When the image to be processed contains a glasses image, the processor 910 is further configured to identify, before the eye depth information of the image to be processed is obtained, the distribution position of the glasses image in the image to be processed, and to remove the glasses image from the image to be processed according to that distribution position.
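The patent does not disclose how the glasses region is located or removed, so the following is only an illustrative sketch of the glasses-removal preprocessing: a boolean `glasses_mask` stands in for the identified distribution position, and each masked pixel is replaced by the average of unmasked pixels in its local window. The function name, the mask representation, and the naive fill strategy are all assumptions, not part of the disclosure.

```python
import numpy as np

def remove_glasses(image, glasses_mask):
    """Replace pixels flagged by glasses_mask with the average of the
    unmasked pixels in a 5x5 window (a naive stand-in for inpainting)."""
    result = image.astype(np.float64).copy()
    h, w = glasses_mask.shape
    ys, xs = np.nonzero(glasses_mask)
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - 2), min(h, y + 3)
        x0, x1 = max(0, x - 2), min(w, x + 3)
        window = image[y0:y1, x0:x1]
        keep = ~glasses_mask[y0:y1, x0:x1]  # only sample non-glasses pixels
        if keep.any():
            result[y, x] = window[keep].mean(axis=0)
    return result.astype(image.dtype)

# Toy run: a flat gray image with a darkened "glasses" region.
img = np.full((6, 6, 3), 100, dtype=np.uint8)
mask = np.zeros((6, 6), dtype=bool)
mask[2:4, 2:4] = True
img[mask] = 0                     # darkened pixels to be filled
cleaned = remove_glasses(img, mask)
```

A production implementation would use a proper inpainting routine (for example, OpenCV's `cv2.inpaint`) rather than a window average; the sketch only shows the mask-then-fill structure implied by "identify the distribution position, then remove".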
With the electronic device 900, the eye region of a photograph blends smoothly with its surroundings during photographing, yielding a good beautifying effect.
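The processing steps above (extract eye depth features, build a first 3-D eye model, optimize it, then reduce it back to 2-D) can be sketched end to end. This is purely illustrative: the patent does not disclose the model representation or the network architecture, so the point-cloud lifting, the single untrained linear layer standing in for the "preset neural network", and every function name here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_eye_depth_features(depth_image, eye_box):
    """Crop the depth map to the eye region given by the eye position info."""
    top, left, bottom, right = eye_box
    return depth_image[top:bottom, left:right]

def build_first_3d_eye_model(depth_patch):
    """Lift each depth pixel to an (x, y, z) point: the 'first 3-D eye model'."""
    ys, xs = np.indices(depth_patch.shape)
    return np.stack([xs.ravel(), ys.ravel(), depth_patch.ravel()], axis=1).astype(float)

def optimize_keypoints(points_3d, weights, bias):
    """Stand-in for the preset neural network: one linear layer that maps
    3-D key-point features to 'optimized' positions."""
    return points_3d @ weights + bias

def reduce_to_2d(points_3d):
    """Dimensionality-reduction transformation: orthographic drop of z."""
    return points_3d[:, :2]

# End-to-end toy run on a synthetic depth image.
depth = rng.uniform(0.4, 0.6, size=(8, 8))
patch = extract_eye_depth_features(depth, (2, 2, 6, 6))
model_3d = build_first_3d_eye_model(patch)                   # first 3-D eye model
W = np.eye(3) + 0.01 * rng.standard_normal((3, 3))           # untrained stand-in weights
model_3d_opt = optimize_keypoints(model_3d, W, np.zeros(3))  # second 3-D eye model
eye_2d = reduce_to_2d(model_3d_opt)                          # processed 2-D result
```

In the actual method the 2-D result would then be blended back into the original image; that compositing step is omitted here.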
It should be understood that, in the embodiment of the present invention, the radio frequency unit 901 may be used for receiving and sending signals during message transmission and reception or during a call. Specifically, downlink data received from a base station is delivered to the processor 910 for processing, and uplink data is sent to the base station. Generally, the radio frequency unit 901 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 901 can also communicate with a network and other devices through a wireless communication system.
Through the network module 902, the electronic device provides the user with wireless broadband internet access, such as helping the user send and receive e-mail, browse web pages, and access streaming media.
The audio output unit 903 may convert audio data received by the radio frequency unit 901 or the network module 902 or stored in the memory 909 into an audio signal and output as sound. Also, the audio output unit 903 may provide audio output related to a specific function performed by the electronic device 900 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 903 includes a speaker, a buzzer, a receiver, and the like.
The input unit 904 is used to receive audio or video signals. The input unit 904 may include a graphics processing unit (GPU) 9041 and a microphone 9042. The GPU 9041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in video capture mode or image capture mode. The processed image frames may be displayed on the display unit 906, stored in the memory 909 (or other storage medium), or transmitted via the radio frequency unit 901 or the network module 902. The microphone 9042 can receive sound and process it into audio data. In phone-call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 901.
The electronic device 900 also includes at least one sensor 905, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display panel 9061 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 9061 and/or the backlight when the electronic device 900 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 905 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not described in detail herein.
The display unit 906 is used to display information input by the user or information provided to the user. The Display unit 906 may include a Display panel 9061, and the Display panel 9061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 907 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 907 includes a touch panel 9071 and other input devices 9072. The touch panel 9071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations performed on or near the touch panel 9071 using a finger, a stylus, or any other suitable object or accessory). The touch panel 9071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch-point coordinates, sends the coordinates to the processor 910, and receives and executes commands from the processor 910. In addition, the touch panel 9071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 9071, the user input unit 907 may include other input devices 9072. Specifically, the other input devices 9072 may include, but are not limited to, a physical keyboard, function keys (such as a volume control key, a switch key, and the like), a track ball, a mouse, and a joystick, which are not described here again.
Further, the touch panel 9071 may be overlaid on the display panel 9061. When the touch panel 9071 detects a touch operation on or near it, the operation is transmitted to the processor 910 to determine the type of the touch event, and the processor 910 then provides a corresponding visual output on the display panel 9061 according to the type of the touch event. Although in fig. 9 the touch panel 9071 and the display panel 9061 are two independent components implementing the input and output functions of the electronic device, in some embodiments they may be integrated to implement these functions; this is not limited herein.
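The touch-event flow described above (detection device reports a raw touch, the controller converts it to coordinates, the processor maps the event to a visual response) can be sketched as a small dispatch chain. All class and method names here are illustrative stand-ins, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class RawTouch:
    """A raw reading from the touch detection device (sensor units)."""
    sensor_x: float
    sensor_y: float

class TouchController:
    """Converts raw sensor readings into touch-point coordinates."""
    def __init__(self, scale_x: float, scale_y: float):
        self.scale_x = scale_x
        self.scale_y = scale_y

    def to_coordinates(self, touch: RawTouch) -> tuple:
        return (touch.sensor_x * self.scale_x, touch.sensor_y * self.scale_y)

class Processor:
    """Receives touch-point coordinates and decides the visual output."""
    def handle(self, coords: tuple) -> str:
        x, y = coords
        return f"draw highlight at ({x:.0f}, {y:.0f})"

# One event flowing through the chain.
controller = TouchController(scale_x=2.0, scale_y=2.0)
processor = Processor()
coords = controller.to_coordinates(RawTouch(10.0, 20.0))
response = processor.handle(coords)
```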
The interface unit 908 is an interface for connecting an external device to the electronic apparatus 900. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 908 may be used to receive input from external devices (e.g., data information, power, etc.) and transmit the received input to one or more elements within the electronic device 900 or may be used to transmit data between the electronic device 900 and external devices.
The memory 909 may be used to store software programs as well as various data. The memory 909 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to the use of the cellular phone (such as audio data, a phonebook, etc.), and the like. Further, the memory 909 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The processor 910 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 909 and calling data stored in the memory 909, thereby performing overall monitoring of the electronic device. Processor 910 may include one or more processing units; preferably, the processor 910 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It is to be appreciated that the modem processor described above may not be integrated into processor 910.
The electronic device 900 may further include a power supply 911 (e.g., a battery) for supplying power to various components, and preferably, the power supply 911 may be logically connected to the processor 910 through a power management system, so as to manage charging, discharging, and power consumption management functions through the power management system.
In addition, the electronic device 900 includes some functional modules that are not shown, and thus are not described in detail herein.
Preferably, an embodiment of the present invention further provides an electronic device, which includes a processor 910, a memory 909, and a computer program stored in the memory 909 and capable of running on the processor 910. When executed by the processor 910, the computer program implements the processes of the above embodiment of the eye image processing method and achieves the same technical effect; to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the above-mentioned embodiment of the eye image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (8)

1. An image processing method applied to an electronic device, comprising:
acquiring eye position information from an image to be processed;
according to the eye position information, extracting eye depth features from a depth image corresponding to the image to be processed;
constructing a first three-dimensional eye model according to the eye depth features;
optimizing the first three-dimensional eye model to obtain a second three-dimensional eye model;
performing dimensionality reduction transformation on the second three-dimensional eye model to obtain a processed image;
the optimizing the first three-dimensional eye model to obtain a second three-dimensional eye model includes:
detecting three-dimensional key point characteristics of the first three-dimensional eye model, and optimizing the three-dimensional key point characteristics by adopting a preset neural network to obtain a second three-dimensional eye model;
performing dimensionality reduction transformation on the second three-dimensional eye model to obtain a processed image comprises:
converting the second three-dimensional eye model into a two-dimensional eye model, and adjusting the eye image of the image to be processed by using the two-dimensional eye model;
the optimizing the three-dimensional key point features by adopting a preset neural network to obtain a second three-dimensional eye model comprises the following steps:
performing machine learning on the three-dimensional key point features through the preset neural network to obtain optimized eye features;
adjusting the first three-dimensional eye model according to the three-dimensional key point characteristics and the optimized eye characteristics to obtain a second three-dimensional eye model;
performing dimensionality reduction transformation on the second three-dimensional eye model to obtain a processed image comprises:
optimizing the two-dimensional eye model according to the optimized eye features and the three-dimensional key point features to obtain a second two-dimensional eye model;
and adjusting the eye image of the image to be processed by using the second two-dimensional eye model.
2. The image processing method according to claim 1, wherein before the acquiring eye position information from the image to be processed, the method further comprises:
and establishing a neural network, and performing optimization training on the neural network by adopting a training sample set to obtain the preset neural network for optimizing the three-dimensional key point characteristics of the first three-dimensional eye model.
3. The image processing method according to claim 1, wherein the image to be processed includes a glasses image, and before acquiring the eye position information from the image to be processed, the method further comprises:
identifying the distribution position of the glasses image in the image to be processed;
and removing the glasses image in the image to be processed according to the distribution position.
4. An electronic device, comprising:
the acquisition module is used for acquiring eye position information from the image to be processed;
the extraction module is used for extracting eye depth features from the depth image corresponding to the image to be processed according to the eye position information;
the construction module is used for constructing a first three-dimensional eye model according to the eye depth characteristics;
the optimization module is used for optimizing the first three-dimensional eye model to obtain a second three-dimensional eye model;
the transformation module is used for carrying out dimensionality reduction transformation on the second three-dimensional eye model to obtain a processed image;
the optimization module is used for detecting three-dimensional key point characteristics of the first three-dimensional eye model and optimizing the three-dimensional key point characteristics by adopting a preset neural network to obtain a second three-dimensional eye model;
the transformation module is used for transforming the second three-dimensional eye model into a two-dimensional eye model and adjusting the eye image of the image to be processed by using the two-dimensional eye model;
the optimization module is used for performing machine learning on the three-dimensional key point features through the preset neural network to obtain optimized eye features; adjusting the first three-dimensional eye model according to the three-dimensional key point characteristics and the optimized eye characteristics to obtain a second three-dimensional eye model;
the transformation module is used for optimizing the two-dimensional eye model according to the optimized eye features and the three-dimensional key point features to obtain a second two-dimensional eye model; and adjusting the eye image of the image to be processed by using the second two-dimensional eye model.
5. The electronic device of claim 4, wherein the electronic device further comprises:
and the training module is used for establishing a neural network, and performing optimization training on the neural network by adopting a training sample set to obtain the preset neural network for optimizing the three-dimensional key point characteristics of the first three-dimensional eye model.
6. The electronic device of claim 4, wherein the image to be processed comprises an eyewear image, the electronic device further comprising:
the glasses removal module is used for identifying the distribution positions of the glasses images in the image to be processed; and removing the glasses image in the image to be processed according to the distribution position.
7. An electronic device, comprising: memory, processor and computer program stored on the memory and running on the processor, the processor implementing the steps in the image processing method according to any of claims 1-3 when executing the computer program.
8. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, carries out the steps in the image processing method according to any one of claims 1-3.
CN201810651981.4A 2018-06-22 2018-06-22 Image processing method and electronic equipment Active CN108830901B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810651981.4A CN108830901B (en) 2018-06-22 2018-06-22 Image processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN108830901A CN108830901A (en) 2018-11-16
CN108830901B true CN108830901B (en) 2020-09-25

Family

ID=64137692

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810651981.4A Active CN108830901B (en) 2018-06-22 2018-06-22 Image processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN108830901B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111107281B (en) * 2019-12-30 2022-04-12 维沃移动通信有限公司 Image processing method, image processing apparatus, electronic device, and medium
CN113902790B (en) * 2021-12-09 2022-03-25 北京的卢深视科技有限公司 Beauty guidance method, device, electronic equipment and computer readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105787884A (en) * 2014-12-18 2016-07-20 联想(北京)有限公司 Image processing method and electronic device
CN107124548A (en) * 2017-04-25 2017-09-01 深圳市金立通信设备有限公司 A kind of photographic method and terminal
CN107977605A (en) * 2017-11-08 2018-05-01 清华大学 Ocular Boundary characteristic extraction method and device based on deep learning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107704813B (en) * 2017-09-19 2020-11-17 北京一维大成科技有限公司 Face living body identification method and system

Similar Documents

Publication Publication Date Title
CN110706179B (en) Image processing method and electronic equipment
CN108491775B (en) Image correction method and mobile terminal
CN108076290B (en) Image processing method and mobile terminal
CN107767333B (en) Method and equipment for beautifying and photographing and computer storage medium
CN108712603B (en) Image processing method and mobile terminal
CN107833177A (en) A kind of image processing method and mobile terminal
CN109272473B (en) Image processing method and mobile terminal
CN111047511A (en) Image processing method and electronic equipment
CN111031234B (en) Image processing method and electronic equipment
CN109671034B (en) Image processing method and terminal equipment
CN111080747B (en) Face image processing method and electronic equipment
CN111008929B (en) Image correction method and electronic equipment
CN108830901B (en) Image processing method and electronic equipment
CN109544445B (en) Image processing method and device and mobile terminal
CN109639981B (en) Image shooting method and mobile terminal
CN113255396A (en) Training method and device of image processing model, and image processing method and device
CN109727212B (en) Image processing method and mobile terminal
CN107563353B (en) Image processing method and device and mobile terminal
CN110555815A (en) Image processing method and electronic equipment
CN107798662B (en) Image processing method and mobile terminal
CN112733673B (en) Content display method and device, electronic equipment and readable storage medium
CN110443752B (en) Image processing method and mobile terminal
CN111405361B (en) Video acquisition method, electronic equipment and computer readable storage medium
CN111402157B (en) Image processing method and electronic equipment
CN111402271A (en) Image processing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant