CN109194943B - Image processing method and terminal equipment


Info

Publication number: CN109194943B (application number CN201810993006.1A)
Authority: CN (China)
Prior art keywords: image, depth information, target, terminal device, information set
Other versions: CN109194943A (Chinese-language publication)
Inventor: 余磊
Original and current assignee: Vivo Mobile Communication Co Ltd
Priority/filing date: 2018-08-29
Publication of CN109194943A: 2019-01-11
Grant of CN109194943B: 2020-06-02
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074 Stereoscopic image analysis
    • H04N2013/0081 Depth or disparity estimation from stereoscopic image signals

Landscapes

  • Processing Or Creating Images (AREA)
  • Telephone Function (AREA)

Abstract

The embodiments of the present invention disclose an image processing method and a terminal device, relating to the field of terminal technology, which can solve the problem that methods for beautifying 2D face images cannot meet the requirements of beautifying 3D face images. The specific scheme is as follows: acquiring a first depth information set corresponding to a first image to be processed, where the first depth information set includes at least one piece of depth information, the at least one piece of depth information is depth information of a first object collected when the first image is acquired, and the first object is the object corresponding to the first image; determining a first region to be processed in the first image; and processing a target RGB image and target depth information to obtain a second image, where the target RGB image is the RGB image of the first region in the first image, and the target depth information is the depth information corresponding to the first region in the first depth information set. The embodiments of the present invention are applied in the image processing process.

Description

Image processing method and terminal equipment
Technical Field
The embodiment of the invention relates to the technical field of terminals, in particular to an image processing method and terminal equipment.
Background
Generally, a user can beautify a captured face image by using an application program in a terminal device (such as a mobile phone).
Typically, a face image captured with the camera function of a mobile phone is a 2D face image; when beautifying a 2D face image, two-dimensional information such as its texture information, color information, and planar structure can each be beautified.
However, with the development of terminal technology, more and more mobile phones can use a 3D sensor to acquire 3D face images, and the above method for beautifying 2D face images cannot meet the requirements of beautifying 3D face images.
Disclosure of Invention
The embodiment of the invention provides an image processing method and terminal equipment, which can solve the problem that a method for beautifying a 2D face image cannot meet the requirement for beautifying a 3D face image.
In order to solve the technical problem, the embodiment of the invention adopts the following technical scheme:
in a first aspect of embodiments of the present invention, an image processing method is provided, where the image processing method includes: acquiring a first depth information set corresponding to a first image to be processed, wherein the first depth information set comprises at least one piece of depth information, the at least one piece of depth information is depth information of a first object acquired when the first image is acquired, and the first object is an object corresponding to the first image; determining a first area to be processed in a first image; and processing the target RGB image and the target depth information to obtain a second image, wherein the target RGB image is an RGB image of a first area in the first image, and the target depth information is depth information corresponding to the first area in the first depth information set.
In a second aspect of the embodiments of the present invention, there is provided a terminal device, including an acquisition unit, a determination unit, and a processing unit. The acquisition unit is configured to acquire a first depth information set corresponding to a first image to be processed, where the first depth information set includes at least one piece of depth information, the at least one piece of depth information is depth information of a first object collected when the first image is acquired, and the first object is the object corresponding to the first image. The determination unit is configured to determine a first region to be processed in the first image. The processing unit is configured to process a target RGB image and target depth information to obtain a second image, where the target RGB image is the RGB image of the first region in the first image, and the target depth information is the depth information corresponding to the first region in the first depth information set.
In a third aspect of the embodiments of the present invention, a terminal device is provided, where the terminal device includes a processor, a memory, and a computer program stored in the memory and being executable on the processor, and the computer program, when executed by the processor, implements the steps of the image processing method according to the first aspect.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of the image processing method according to the first aspect.
In the embodiments of the present invention, the terminal device may acquire a first depth information set corresponding to a first image to be processed (the first depth information set includes at least one piece of depth information, which is depth information of a first object collected when the first image is acquired), and process a target RGB image (the RGB image of a first region in the first image) and target depth information (the depth information corresponding to the first region in the first depth information set) according to the first region to be processed in the first image, so as to obtain a second image. The second image is obtained by the terminal device processing the target RGB image and the target depth information according to the first region, where the target depth information is the depth information corresponding to the first region in the acquired first depth information set and the target RGB image is the RGB image of the first region in the first image. The second image has therefore undergone both RGB processing and depth processing; that is, it is obtained by changing not only the RGB image of the first image but also the depth information corresponding to the first image, so the requirement for beautifying a 3D image can be met.
Drawings
Fig. 1 is a schematic structural diagram of an android operating system according to an embodiment of the present invention;
fig. 2 is a schematic diagram of an image processing method according to an embodiment of the present invention;
fig. 3 is a second schematic diagram of an image processing method according to an embodiment of the present invention;
fig. 4 is a third schematic diagram of an image processing method according to an embodiment of the present invention;
FIG. 5 is a fourth schematic diagram illustrating an image processing method according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a terminal device according to an embodiment of the present invention;
fig. 7 is a hardware schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first" and "second," and the like, in the description and in the claims of embodiments of the present invention are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first image and the second image, etc. are for distinguishing different images, rather than for describing a particular order of the images. In the description of the embodiments of the present invention, the meaning of "a plurality" means two or more unless otherwise specified.
The term "and/or" herein is an association relationship describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. The symbol "/" herein denotes a relationship in which the associated object is or, for example, a/B denotes a or B.
In the embodiments of the present invention, words such as "exemplary" or "for example" are used to serve as examples, illustrations, or descriptions. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention is not to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the words "exemplary" or "for example" is intended to present related concepts in a concrete fashion.
The embodiments of the present invention provide an image processing method and a terminal device. The terminal device may acquire a first depth information set corresponding to a first image to be processed (the first depth information set includes at least one piece of depth information, which is depth information of a first object collected when the first image is acquired), and process a target RGB image (the RGB image of a first region in the first image) and target depth information (the depth information corresponding to the first region in the first depth information set) according to the first region to be processed in the first image, so as to obtain a second image. The second image is obtained by the terminal device processing the target RGB image and the target depth information according to the first region, where the target depth information is the depth information corresponding to the first region in the acquired first depth information set and the target RGB image is the RGB image of the first region in the first image. The second image has therefore undergone both RGB processing and depth processing; that is, it is obtained by changing not only the RGB image of the first image but also the depth information corresponding to the first image, so the requirement for beautifying a 3D image can be met.
The image processing method and the terminal device provided by the embodiment of the invention can be applied to the image processing process. Specifically, the method can be applied to a process that the terminal device processes the target RGB image and the target depth information to obtain the second image.
The terminal device in the embodiment of the present invention may be a terminal device having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present invention are not limited in particular.
The following describes a software environment to which the image processing method provided by the embodiment of the present invention is applied, by taking an android operating system as an example.
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention. In fig. 1, the architecture of the android operating system includes 4 layers, which are respectively: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third-party application programs) in an android operating system.
The application framework layer is a framework of the application, and a developer can develop some applications based on the application framework layer under the condition of complying with the development principle of the framework of the application.
The system runtime layer includes libraries (also called system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system running environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of an android operating system and belongs to the bottommost layer of an android operating system software layer. The kernel layer provides kernel system services and hardware-related drivers for the android operating system based on the Linux kernel.
Taking an android operating system as an example, in the embodiment of the present invention, a developer may develop a software program for implementing the image processing method provided in the embodiment of the present invention based on the system architecture of the android operating system shown in fig. 1, so that the image processing method may operate based on the android operating system shown in fig. 1. Namely, the processor or the terminal device can implement the image processing method provided by the embodiment of the invention by running the software program in the android operating system.
An image processing method and a terminal device provided by the embodiments of the present invention are described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
At present, with the development of terminal technology, more and more mobile phones can use a 3D sensor to acquire 3D face images, and the prior-art method for beautifying 2D face images cannot meet the requirements of beautifying 3D face images.
In order to solve the above technical problem, an embodiment of the present invention provides an image processing method; fig. 2 shows a flowchart of the method, which can be applied to a terminal device running the android operating system shown in fig. 1. Although a logical order of the image processing method is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from the one here. As shown in fig. 2, the image processing method may include steps 201 to 203 described below.
Step 201, a terminal device acquires a first depth information set corresponding to a first image to be processed.
In an embodiment of the present invention, the first depth information set includes at least one piece of depth information, where the at least one piece of depth information is depth information of a first object acquired when the first image is acquired, and the first object is an object corresponding to the first image.
It should be noted that, in the embodiment of the present invention, the depth information may be understood as: the spatial distance between the shot object and the lens of the camera.
It can be understood that, in the embodiment of the present invention, the at least one depth information of the first object may be depth information of at least one pixel point of a plurality of pixel points included in the first object.
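As an illustration only (the patent does not prescribe a data structure), the first depth information set can be pictured as a per-pixel depth map aligned with the first image. A minimal sketch, assuming aligned pixel grids and depths in metres:

```python
import numpy as np

# Sketch only: the first depth information set as a depth map aligned
# with the first image, one subject-to-lens distance per pixel point.
height, width = 480, 640
first_image = np.zeros((height, width, 3), dtype=np.uint8)          # RGB pixels
first_depth_set = np.full((height, width), 0.5, dtype=np.float32)   # metres (assumed unit)

# Depth information of one pixel point of the first object:
depth_of_one_pixel = first_depth_set[240, 320]
```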
Optionally, in the embodiment of the present invention, when the terminal device acquires the first image for the first object, at least one depth information of the first object is acquired.
For example, after the user opens the camera application of the terminal device and triggers the acquisition of depth information, the terminal device may photograph the first object through the camera; that is, in a case where the first image is displayed on the first interface of the terminal device, the terminal device may collect the depth information of at least one pixel point of the first object.
Optionally, in an embodiment of the present invention, the first image may be a human face image.
Optionally, in this embodiment of the present invention, the first image is a two-dimensional (2D) image acquired by the terminal device.
Optionally, in this embodiment of the present invention, the terminal device may acquire the first image through a 2D optical sensor of the terminal device, and acquire the first depth information set corresponding to the first image through a 3D optical sensor of the terminal device.
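The sensor interfaces themselves are device-specific and not detailed by the patent. Below is a minimal sketch of step 201, assuming the 3D optical sensor returns a depth map already registered to the 2D sensor's pixel grid; the function is a hypothetical placeholder that synthesizes data, not a real device API:

```python
import numpy as np

# Hypothetical stand-in for the 2D + 3D sensor capture described in
# step 201: returns an RGB frame and a depth map on the same pixel grid.
def capture_first_image_and_depth(height=480, width=640):
    first_image = np.random.randint(0, 256, (height, width, 3), dtype=np.uint8)
    first_depth_set = np.random.uniform(0.3, 1.2, (height, width)).astype(np.float32)
    return first_image, first_depth_set
```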
Step 202, the terminal device determines a first region to be processed in the first image.
Optionally, in this embodiment of the present invention, the first area may be a default area of a system in the terminal device; alternatively, the first area may be an area determined by the terminal device according to an input of the user.
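A minimal sketch of step 202, assuming the first region is kept as a boolean mask over the image grid; the centred default box is illustrative only, since the patent leaves the system default unspecified:

```python
import numpy as np

# Sketch of step 202: the first region as a boolean mask, taken either
# from a user-supplied bounding box or from a (hypothetical) system default.
def first_region_mask(height, width, user_box=None):
    mask = np.zeros((height, width), dtype=bool)
    if user_box is not None:
        top, left, bottom, right = user_box      # region from user input
    else:
        top, left = height // 4, width // 4      # assumed default: centred box
        bottom, right = 3 * height // 4, 3 * width // 4
    mask[top:bottom, left:right] = True
    return mask
```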
Step 203, the terminal device processes the target RGB image and the target depth information to obtain a second image.
In an embodiment of the present invention, the target RGB image is an RGB image of a first region in the first image, and the target depth information is depth information corresponding to the first region in the first depth information set.
It can be understood that, in the embodiment of the present invention, the terminal device performs RGB processing on the target RGB image, and performs depth processing on the target depth information to obtain the second image.
It can be understood that, in the embodiment of the present invention, the terminal device may determine the target depth information from the first depth information set according to the first area.
Optionally, in this embodiment of the present invention, the terminal device may determine, according to at least one pixel point in the first region, a pixel point corresponding to each pixel point in the at least one pixel point from the pixel points corresponding to the first depth information set, and determine the depth information of the pixel points as the target depth information.
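A minimal sketch of this selection, under the assumption that the depth map shares the first image's pixel grid, so the per-pixel correspondence described above reduces to indexing both arrays with the same mask:

```python
import numpy as np

# Sketch: extract the target RGB image and the target depth information
# for the first region; pixels outside the region are zeroed / marked NaN.
def select_targets(first_image, first_depth_set, region_mask):
    target_rgb = np.where(region_mask[..., None], first_image, 0)
    target_depth = np.where(region_mask, first_depth_set, np.nan)
    return target_rgb, target_depth
```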
Optionally, in this embodiment of the present invention, the terminal device may adjust a shape of the image of the first area in the first image, and the like.
Illustratively, the terminal device may adjust the size of the eyes in a face image, or adjust the size of the face image in a first direction.
Optionally, in the embodiment of the present invention, the terminal device may process the first region in the first image according to a preset mode; the terminal device may also process the first region in the first image according to an input of a user.
It should be noted that, for a specific method for the terminal device to process the target RGB image and the target depth information to obtain the second image, reference may be made to the description in the following embodiments, which is not described here.
The embodiments of the present invention provide an image processing method. The terminal device may acquire a first depth information set corresponding to a first image to be processed (the first depth information set includes at least one piece of depth information, which is depth information of a first object collected when the first image is acquired), and process a target RGB image (the RGB image of a first region in the first image) and target depth information (the depth information corresponding to the first region in the first depth information set) according to the first region to be processed in the first image, so as to obtain a second image. The second image is obtained by the terminal device processing the target RGB image and the target depth information according to the first region, where the target depth information is the depth information corresponding to the first region in the acquired first depth information set and the target RGB image is the RGB image of the first region in the first image. The second image has therefore undergone both RGB processing and depth processing; that is, it is obtained by changing not only the RGB image of the first image but also the depth information corresponding to the first image, so the requirement for beautifying a 3D image can be met.
Optionally, in the embodiment of the present invention, as shown in fig. 3 in combination with fig. 2, the step 203 may be specifically implemented by the following steps 203a to 203c.
Step 203a, the terminal device processes the target RGB image to obtain a third image.
Optionally, in this embodiment of the present invention, the terminal device may adjust the shape of the target RGB image, and the like, to obtain the third image.
Optionally, in the embodiment of the present invention, as shown in fig. 4 in combination with fig. 3, the step 203a may be specifically implemented by the following step 203a1.
Step 203a1, the terminal device processes the target RGB image by using the first parameter to obtain a third image.
In an embodiment of the present invention, the shape of the third image is different from the shape of the target RGB image, and the first parameter is input by a user or stored in the terminal device.
Optionally, in the embodiment of the present invention, the first parameter may be a parameter for changing a shape of the image, that is, a parameter for adjusting a position of a pixel of the image.
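A minimal sketch of step 203a1, assuming the first parameter acts as a scale factor on pixel positions about the region centre (one plausible shape-changing adjustment, e.g. enlarging the eyes; the patent does not fix the specific warp). Nearest-neighbour sampling keeps the sketch dependency-free:

```python
import numpy as np

# Sketch of step 203a1: adjust pixel positions of the target RGB image
# with the first parameter (here a magnification factor about the centre),
# producing a third image whose shape differs from the target RGB image.
def warp_rgb(target_rgb, first_parameter):
    h, w = target_rgb.shape[:2]
    ys, xs = np.indices((h, w), dtype=np.float32)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    # Sample closer to the centre to magnify by first_parameter.
    src_y = np.clip((ys - cy) / first_parameter + cy, 0, h - 1).astype(int)
    src_x = np.clip((xs - cx) / first_parameter + cx, 0, w - 1).astype(int)
    return target_rgb[src_y, src_x]
```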
Step 203b, the terminal device performs depth processing on the target depth information to obtain a second depth information set.
It can be understood that, in the embodiment of the present invention, the terminal device may change the positions of the pixel points corresponding to the target depth information to obtain the depth information of all the changed pixel points, i.e., the second depth information set.
Illustratively, it is assumed that the first depth information set includes depth information of a pixel 1, a pixel 2, and a pixel 3, the first region includes a pixel 4 and a pixel 5, the pixel 4 corresponds to the pixel 2, and the pixel 5 corresponds to the pixel 3. The terminal device may perform depth processing on the pixel point 2 and the pixel point 3 to obtain a second depth information set (including depth information of the depth-processed pixel point 2 and depth information of the depth-processed pixel point 3).
Optionally, in the embodiment of the present invention, as shown in fig. 4 in combination with fig. 3, the step 203b may be specifically implemented by the step 203b1 described below.
Step 203b1, the terminal device performs depth processing on the target depth information by using the first parameter to obtain a second depth information set.
It can be understood that, in the embodiment of the present invention, the terminal device may adjust the positions of the pixel points corresponding to the target depth information by using the first parameter, so as to obtain the second depth information set.
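A matching sketch of step 203b1: the same coordinate adjustment is applied to the depth map, so every relocated pixel keeps its depth information. Sharing one first parameter between warp_rgb and warp_depth is what keeps the RGB and depth geometry consistent:

```python
import numpy as np

# Sketch of step 203b1: adjust the positions of the pixel points
# corresponding to the target depth information with the same first
# parameter used for the RGB warp, yielding the second depth information set.
def warp_depth(target_depth, first_parameter):
    h, w = target_depth.shape
    ys, xs = np.indices((h, w), dtype=np.float32)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    src_y = np.clip((ys - cy) / first_parameter + cy, 0, h - 1).astype(int)
    src_x = np.clip((xs - cx) / first_parameter + cx, 0, w - 1).astype(int)
    return target_depth[src_y, src_x]
```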
Step 203c, the terminal device combines the third image and the second depth information set to obtain a second image.
Optionally, in this embodiment of the present invention, the third image includes at least one pixel point, and each pixel point corresponds one-to-one to depth information in the second depth information set. As shown in fig. 5 in combination with fig. 3, the step 203c may be implemented by the step 203c1 described below.
Step 203c1, for each pixel point in at least one pixel point included in the third image, the terminal device combines the pixel information of one pixel point in the third image with one depth information to obtain the second image.
In an embodiment of the present invention, the second image is a three-dimensional (3D) image corresponding to the third image, and one depth information is a depth information corresponding to one pixel in the second depth information set.
Optionally, in the embodiment of the present invention, the pixel information of one pixel point in the third image may be two-dimensional pixel information of the one pixel point in the third image.
Optionally, in the embodiment of the present invention, the two-dimensional pixel information of one pixel point in the third image may include a color value, texture information, a plane structure, and the like of the one pixel point in the third image. The specific method can be determined according to actual use requirements, and the embodiment of the invention is not limited herein.
Illustratively, it is assumed that at least one pixel point included in the third image is a pixel point 6, a pixel point 7, and a pixel point 8, where the pixel point 6 corresponds to depth information 1 in the second depth information set, the pixel point 7 corresponds to depth information 2 in the second depth information set, and the pixel point 8 corresponds to depth information 3 in the second depth information set, and the terminal device may combine two-dimensional pixel information of the pixel point 6 in the third image with the depth information 1, combine two-dimensional pixel information of the pixel point 7 in the third image with the depth information 2, and combine two-dimensional pixel information of the pixel point 8 in the third image with the depth information 3 to obtain a three-dimensional image corresponding to the third image.
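A minimal sketch of step 203c1, assuming the second image is represented as an array of points, each pairing one pixel's position and depth information with its two-dimensional pixel information (the (x, y, depth, R, G, B) layout is our choice; the patent does not fix an output format):

```python
import numpy as np

# Sketch of step 203c1: combine each pixel's two-dimensional pixel
# information in the third image with its depth information in the
# second depth information set, one-to-one, into a 3D point array.
def combine(third_image, second_depth_set):
    h, w = second_depth_set.shape
    ys, xs = np.indices((h, w))
    points = np.column_stack([
        xs.ravel(), ys.ravel(), second_depth_set.ravel(),  # position + depth
        third_image.reshape(-1, 3),                        # R, G, B per pixel
    ])
    valid = ~np.isnan(points[:, 2])    # drop pixels without depth information
    return points[valid]
```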
In the embodiment of the present invention, after obtaining the second depth information set, the terminal device may combine, for each pixel point in the at least one pixel point included in the third image, the pixel information of that pixel point in the third image with one piece of depth information to obtain the second image (a three-dimensional image corresponding to the third image). The resulting three-dimensional image has therefore undergone both RGB processing and depth processing; that is, it is obtained by changing not only the RGB image of the first image but also the depth information corresponding to the first image, so the requirement for beautifying the three-dimensional image corresponding to the third image can be met.
In the embodiment of the present invention, the terminal device may combine the third image, obtained by processing the target RGB image, with the second depth information set, obtained by depth processing the target depth information, to obtain the second image. The second image has therefore undergone both RGB processing and depth processing; that is, it is obtained by changing not only the RGB image of the first image but also the depth information corresponding to the first image, so the requirement for beautifying a 3D image can be met.
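Chaining the illustrative helpers sketched above gives an end-to-end picture of steps 201 to 203; all function names, the bounding box, and the parameter value 1.2 are hypothetical:

```python
# End-to-end sketch of the method (using the helpers defined in the
# earlier sketches; values are illustrative only).
first_image, first_depth_set = capture_first_image_and_depth()   # step 201
h, w = first_depth_set.shape
mask = first_region_mask(h, w, user_box=(100, 200, 300, 400))    # step 202
target_rgb, target_depth = select_targets(first_image, first_depth_set, mask)
third_image = warp_rgb(target_rgb, 1.2)                          # step 203a
second_depth_set = warp_depth(target_depth, 1.2)                 # step 203b
second_image = combine(third_image, second_depth_set)            # step 203c
```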
Fig. 6 shows a schematic diagram of a possible structure of a terminal device involved in the embodiment of the present invention, and as shown in fig. 6, the terminal device 60 may include: an acquisition unit 61, a determination unit 62 and a processing unit 63.
The obtaining unit 61 may be configured to obtain a first depth information set corresponding to a first image to be processed, where the first depth information set includes at least one piece of depth information, where the at least one piece of depth information is depth information of a first object acquired when the first image is acquired, and the first object is an object corresponding to the first image. The determining unit 62 may be configured to determine a first region to be processed in the first image. The processing unit 63 may be configured to process the target RGB image and the target depth information to obtain a second image, where the target RGB image is an RGB image of the first region in the first image, and the target depth information is depth information corresponding to the first region in the first depth information set.
In a possible implementation manner, the processing unit 63 may be specifically configured to process the target RGB image to obtain a third image; carrying out depth processing on the target depth information to obtain a second depth information set; and combining the third image and the second depth information set to obtain the second image.
In a possible implementation manner, the processing unit 63 may be specifically configured to process the target RGB image by using a first parameter, so as to obtain a third image, where a shape of the third image is different from a shape of the target RGB image, and the first parameter is input by a user or stored in the terminal device. The processing unit 63 may be specifically configured to perform depth processing on the target depth information by using the first parameter to obtain a second depth information set.
In a possible implementation manner, the third image may include at least one pixel, and each pixel in the at least one pixel corresponds to the depth information in the second depth information set in a one-to-one manner. The processing unit 63 may be specifically configured to, for each pixel point, combine pixel information of one pixel point in the third image with one depth information to obtain a second image, where the second image is a three-dimensional image corresponding to the third image, and the one depth information is depth information corresponding to one pixel point in the second depth information set.
The terminal device provided by the embodiment of the present invention can implement each process implemented by the terminal device in the above method embodiments, and for avoiding repetition, detailed description is not repeated here.
The embodiment of the present invention provides a terminal device, where the terminal device may obtain a first depth information set corresponding to a first image to be processed (the first depth information set includes at least one piece of depth information, where the at least one piece of depth information is depth information of a first object acquired when the first image is acquired), and process a target RGB image (the target RGB image is an RGB image of a first area in the first image) and target depth information (the target depth information is depth information corresponding to the first area in the first depth information set) according to a first area to be processed in the first image, so as to obtain a second image. The obtained second image is an image obtained by processing the target RGB image and the target depth information according to the first area by the terminal device, the target depth information is depth information corresponding to the first area in the first depth information set acquired by the terminal device, and the target RGB image is an RGB image of the first area in the first image, so that the second image is an image obtained by processing after RGB processing and depth processing, that is, the second image is not only an image obtained by changing the RGB image of the first image, but also an image obtained by changing the depth information corresponding to the first image, and thus the requirement of beautifying the second image can be met.
Fig. 7 is a hardware schematic diagram of a terminal device for implementing various embodiments of the present invention. As shown in fig. 7, the terminal device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111.
It should be noted that, as those skilled in the art will appreciate, the terminal device structure shown in fig. 7 does not constitute a limitation on the terminal device, and the terminal device may include more or fewer components than those shown in fig. 7, combine some components, or arrange the components differently. In the embodiment of the present invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 110 may be configured to obtain a first depth information set corresponding to a first image to be processed, where the first depth information set includes at least one piece of depth information, where the at least one piece of depth information is depth information of a first object acquired when the first image is acquired, and the first object is an object corresponding to the first image; determining a first area to be processed in the first image; and processing the target RGB image and the target depth information to obtain a second image, wherein the target RGB image is an RGB image of a first area in the first image, and the target depth information is depth information corresponding to the first area in the first depth information set.
The embodiments of the present invention provide a terminal device. The terminal device may acquire a first depth information set corresponding to a first image to be processed (the first depth information set includes at least one piece of depth information, which is depth information of a first object collected when the first image is acquired), and process a target RGB image (the RGB image of a first region in the first image) and target depth information (the depth information corresponding to the first region in the first depth information set) according to the first region to be processed in the first image, so as to obtain a second image. The second image is obtained by the terminal device processing the target RGB image and the target depth information according to the first region, where the target depth information is the depth information corresponding to the first region in the acquired first depth information set and the target RGB image is the RGB image of the first region in the first image. The second image has therefore undergone both RGB processing and depth processing; that is, it is obtained by changing not only the RGB image of the first image but also the depth information corresponding to the first image, so the requirement for beautifying a 3D image can be met.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during a message transmission or call process, and specifically, after receiving downlink data from a base station, the downlink data is processed by the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The terminal device provides wireless broadband internet access to the user through the network module 102, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the terminal device 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive audio or video signals. The input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042; the graphics processor 1041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 can receive sound and process it into audio data. In the case of a telephone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 101.
The terminal device 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or the backlight when the terminal device 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal device posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near the touch panel 1071 using a finger, a stylus, or any suitable object or attachment). The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends them to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. Specifically, the other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail here.
Further, the touch panel 1071 may be overlaid on the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 7, the touch panel 1071 and the display panel 1061 are two independent components to implement the input and output functions of the terminal device, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the terminal device, and is not limited herein.
The interface unit 108 is an interface for connecting an external device to the terminal apparatus 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal apparatus 100 or may be used to transmit data between the terminal apparatus 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the terminal device, connects various parts of the entire terminal device by using various interfaces and lines, and performs various functions of the terminal device and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the terminal device. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The terminal device 100 may further include a power supply 111 (such as a battery) for supplying power to each component, and preferably, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the terminal device 100 includes some functional modules that are not shown, and are not described in detail here.
Preferably, an embodiment of the present invention further provides a terminal device, which includes the processor 110 shown in fig. 7, the memory 109, and a computer program stored in the memory 109 and capable of running on the processor 110, and when the computer program is executed by the processor 110, the computer program implements each process of the foregoing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the processes of the method embodiments, and can achieve the same technical effects, and in order to avoid repetition, the details are not repeated here. The computer-readable storage medium may be a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (8)

1. An image processing method, characterized in that the method comprises:
acquiring a first depth information set corresponding to a first image to be processed, wherein the first depth information set comprises at least one piece of depth information, the at least one piece of depth information is depth information of a first object acquired when the first image is acquired, and the first object is an object corresponding to the first image;
determining a first region to be processed in the first image;
processing a target RGB image and target depth information to obtain a second image, wherein the target RGB image is an RGB image of the first area in the first image, and the target depth information is depth information corresponding to the first area in the first depth information set;
wherein, the processing the target RGB image and the target depth information to obtain the second image includes:
adjusting the position of a pixel point of a target RGB image by adopting a first parameter to obtain a third image, wherein the shape of the third image is different from that of the target RGB image;
adjusting the position of a pixel point corresponding to the target depth information by adopting the first parameter to obtain a second depth information set;
and combining the third image and the second depth information set to obtain a second image.
2. The method of claim 1, wherein the first parameter is input by a user or stored in a terminal device.
3. The method according to claim 1 or 2, wherein the third image comprises at least one pixel point, each pixel point corresponding to depth information in the second depth information set one to one;
the combining the third image and the second depth information set to obtain the second image includes:
and for each pixel point, combining pixel information of one pixel point in the third image with one depth information to obtain the second image, wherein the second image is a three-dimensional image corresponding to the third image, and the one depth information is depth information corresponding to the one pixel point in the second depth information set.
4. A terminal device, characterized in that the terminal device comprises: the device comprises an acquisition unit, a determination unit and a processing unit;
the acquiring unit is configured to acquire a first depth information set corresponding to a first image to be processed, where the first depth information set includes at least one piece of depth information, the at least one piece of depth information is depth information of a first object acquired when the first image is acquired, and the first object is an object corresponding to the first image;
the determining unit is used for determining a first area to be processed in the first image;
the processing unit is configured to process a target RGB image and target depth information to obtain a second image, where the target RGB image is an RGB image of the first region in the first image, and the target depth information is depth information corresponding to the first region in the first depth information set;
the processing unit is specifically configured to adjust the pixel position of the target RGB image by using a first parameter to obtain a third image; adjusting the position of a pixel point corresponding to the target depth information by adopting the first parameter to obtain a second depth information set; and combining the third image and the second depth information set to obtain the second image, wherein the shape of the third image is different from that of the target RGB image.
5. The terminal device of claim 4, wherein the first parameter is input by a user or stored in the terminal device.
6. The terminal device according to claim 4, wherein the third image comprises at least one pixel point, each pixel point corresponding to depth information in the second depth information set one by one;
the processing unit is specifically configured to combine, for each pixel point, pixel information of one pixel point in the third image with one depth information to obtain the second image, where the second image is a three-dimensional image corresponding to the third image, and the one depth information is depth information corresponding to the one pixel point in the second depth information set.
7. A terminal device, characterized in that it comprises a processor, a memory and a computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, implements the steps of the image processing method according to any one of claims 1 to 3.
8. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the image processing method according to any one of claims 1 to 3.
CN201810993006.1A, filed 2018-08-29 (priority date 2018-08-29): Image processing method and terminal equipment. Status: Active. Granted as CN109194943B (en).

Priority Applications / Applications Claiming Priority (1)

Application Number: CN201810993006.1A
Priority Date: 2018-08-29
Filing Date: 2018-08-29
Title: Image processing method and terminal equipment

Publications (2)

CN109194943A (application publication): 2019-01-11
CN109194943B (granted publication): 2020-06-02

Family

Family ID: 64917102

Family Applications (1)

Application Number: CN201810993006.1A
Title: Image processing method and terminal equipment
Priority Date / Filing Date: 2018-08-29 / 2018-08-29
Status: Active

Country Status (1)

Country: CN
Link: CN109194943B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number: CN110458177B * (priority date 2019-07-12, publication date 2023-04-07)
Assignee: 中国科学院深圳先进技术研究院 (Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences)
Title: Method for acquiring image depth information, image processing device and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
EP3161658B1 * (priority 2014-12-19, published 2019-03-20), SZ DJI Technology Co., Ltd.: Optical-flow imaging system and method using ultrasonic depth sensing
CN106991654B * (priority 2017-03-09, published 2021-02-05), Oppo广东移动通信有限公司: Human body beautifying method and device based on depth and electronic device
CN107169475B * (priority 2017-06-19, published 2019-11-19), 电子科技大学: A face three-dimensional point cloud optimization processing method based on a Kinect camera
CN107480615B * (priority 2017-07-31, published 2020-01-10), Oppo广东移动通信有限公司: Beauty treatment method and device and mobile equipment
CN107392874B * (priority 2017-07-31, published 2021-04-09), Oppo广东移动通信有限公司: Beauty treatment method and device and mobile equipment
CN108447017B * (priority 2018-05-31, published 2022-05-13), Oppo广东移动通信有限公司: Face virtual face-lifting method and device


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant