CN112116624A - Image processing method and electronic device

Info

Publication number: CN112116624A
Application number: CN201910544448.2A
Authority: CN (China)
Prior art keywords: image, mask, processing, foreground, background
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 冯寒予, 朱聪超
Current assignee: Huawei Technologies Co Ltd
Original assignee: Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd
Priority to CN201910544448.2A
Publication of CN112116624A

Classifications

All under G (Physics) > G06 (Computing; Calculating or Counting) > G06T (Image data processing or generation, in general):

    • G06T 7/00 Image analysis; G06T 7/10 Segmentation; Edge detection
        • G06T 7/194 involving foreground-background segmentation
        • G06T 7/155 involving morphological operators
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement; G06T 2207/20 Special algorithmic details
        • G06T 2207/20024 Filtering details
        • G06T 2207/20036 Morphological image processing
        • G06T 2207/20221 Image fusion; Image merging (under G06T 2207/20212 Image combination)


Abstract

An image processing method and an electronic device. The mask image used to segment an image into foreground and background is first processed by erosion, dilation, smoothing, and similar operations; segmenting with the processed mask eliminates the burrs and jagged edges that would otherwise appear along the foreground edge of the segmented image. In addition, fusion weights can be determined from the smoothed mask image, so that the separately processed foreground and background are fused with a natural transition between them.

Description

Image processing method and electronic device
Technical Field
The present application relates to the field of communications technologies, and in particular, to an image processing method and an electronic device.
Background
With the development of terminal technology, the image processing capabilities of terminal devices have become ever stronger. A user can process a captured picture on the terminal device to obtain a desired effect. One common step in image processing is segmenting an image into foreground and background; the segmented foreground and background images are then processed separately and fused.
In the prior art, a deep-learning segmentation model is generally used to segment the foreground and background of an image, yielding a mask for foreground-background segmentation, and the image is then split into a foreground image and a background image based on that mask. To reduce the running time of the deep-learning segmentation model, the original image to be processed is usually downsampled to a low resolution, and the mask is obtained from the low-resolution image through the segmentation model. Obtained this way, the mask may have burrs along its edge, which in turn leave burrs along the foreground edge of the image finally produced by segmentation and fusion, degrading the visual effect.
Disclosure of Invention
Embodiments of this application provide an image processing method and an electronic device that eliminate burrs along the foreground edges of a segmented image.
In a first aspect, an embodiment of this application provides an image processing method, which may be performed by an electronic device. The method includes: acquiring a first mask image used to segment the foreground and background of an image to be processed, where the first mask image is a binary mask image; performing first processing on the first mask image to obtain a second mask image, where the first processing includes erosion and/or dilation; and segmenting the foreground and background of the image to be processed based on the second mask image to obtain a first foreground image and a first background image of the image to be processed. Applying erosion and/or dilation to the mask image eliminates burrs along its foreground edge.
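As an illustration only (the patent text does not prescribe an implementation), this first aspect maps naturally onto standard OpenCV morphology; in the following sketch the function name, the 5 x 5 structuring element, and the 0/1 mask convention are assumptions:

```python
# Illustrative sketch, not the patent's implementation; kernel size is assumed.
import cv2
import numpy as np

def segment_with_cleaned_mask(image, first_mask):
    """image: HxWx3 uint8; first_mask: HxW uint8, 1 = foreground, 0 = background."""
    kernel = np.ones((5, 5), np.uint8)        # structuring element (assumed size)
    eroded = cv2.erode(first_mask, kernel)    # erosion trims burr-like protrusions
    second_mask = cv2.dilate(eroded, kernel)  # dilation restores the shrunken foreground
    foreground = image * second_mask[..., None]        # pixels where mask == 1
    background = image * (1 - second_mask)[..., None]  # pixels where mask == 0
    return foreground, background, second_mask
```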
In one possible design, the first processing further includes smoothing. Performing the first processing on the first mask image to obtain the second mask image then includes: performing erosion and/or dilation on the first mask image to obtain a third mask image; smoothing the third mask image; and binarizing the smoothed mask image against a first threshold to obtain the second mask image.
With this design, smoothing the third mask image obtained after erosion and/or dilation further eliminates jagged edges along the foreground of the mask image.
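A minimal sketch of this design, assuming OpenCV, a float mask in [0, 1], a 15 x 15 Gaussian kernel, and a first threshold of 0.5; none of these parameter values come from the patent:

```python
# Sketch under assumed parameters; the patent does not fix the kernel or threshold.
import cv2

def smooth_and_rebinarize(third_mask, first_threshold=0.5):
    """third_mask: HxW float32 mask in [0, 1] after erosion/dilation."""
    blurred = cv2.GaussianBlur(third_mask, (15, 15), 0)  # smoothing softens jagged steps
    # binarize against the first threshold to recover a clean 0/1 second mask
    return (blurred > first_threshold).astype(third_mask.dtype)
```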
In one possible design, the method further includes: performing personalized processing, according to the user's requirements, on the first foreground image and/or the first background image to obtain a second foreground image and a second background image, and fusing the second foreground image and the second background image. Segmenting with the processed mask image and then fusing after personalization removes the burrs that mask-edge defects would otherwise leave along the foreground-background boundary of the fused image, giving a better visual effect.
In one possible design, fusing the second foreground image and the second background image includes: determining a fusion weight from a mask image obtained by smoothing the second mask image, and fusing the second foreground image and the second background image according to that weight, so that the fused image transitions naturally between foreground and background.
In one possible design, determining a first fusion weight from the mask image obtained by smoothing the second mask image, and fusing the second foreground image and the second background image according to the first fusion weight, includes:
fusing the second foreground image and the second background image through the following formula to obtain a fused image:
fusion_i = mask_i * foreground_i + (1 - mask_i) * background_i
where fusion_i represents the pixel value of the i-th pixel point of the fused image, mask_i the pixel value of the i-th pixel point of the mask image obtained after smoothing, foreground_i the pixel value of the i-th pixel point of the second foreground image, and background_i the pixel value of the i-th pixel point of the second background image.
To ensure a natural transition between the foreground and background images after fusion, the mask image used to separate them is smoothed, and the smoothed pixel value mask_i is then used as the weight of the foreground image and (1 - mask_i) as the weight of the background image in a weighted fusion, so that the edge of the foreground image transitions smoothly into the background image.
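This weighted blend is a one-liner in NumPy; in the sketch below, the array shapes and the float mask convention are assumptions:

```python
# Direct transcription of fusion_i = mask_i*foreground_i + (1-mask_i)*background_i.
import numpy as np

def fuse(foreground, background, smoothed_mask):
    """foreground, background: HxWx3 float arrays; smoothed_mask: HxW float in [0, 1]."""
    w = smoothed_mask[..., None]                    # per-pixel foreground weight mask_i
    return w * foreground + (1.0 - w) * background  # (1 - mask_i) weights the background
```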
In one possible design, the erosion and/or dilation of the first mask image is performed in one of two orders: eroding the first mask image and then dilating it (a morphological opening), or dilating it first and then eroding it (a morphological closing).
In one possible design, the smoothing is any one of the following: mean filtering, median filtering, bilateral filtering, or Gaussian filtering.
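Each of these options has a standard OpenCV counterpart; the kernel sizes and bilateral parameters below are illustrative assumptions (note that cv2.medianBlur with a kernel larger than 5 requires 8-bit input):

```python
# Illustrative parameter choices; the patent does not prescribe any of them.
import cv2
import numpy as np

mask = (np.random.rand(256, 256) > 0.5).astype(np.uint8) * 255  # stand-in binary mask

mean_smoothed = cv2.blur(mask, (9, 9))                     # mean filtering
median_smoothed = cv2.medianBlur(mask, 9)                  # median filtering
bilateral_smoothed = cv2.bilateralFilter(mask, 9, 75, 75)  # bilateral filtering
gaussian_smoothed = cv2.GaussianBlur(mask, (9, 9), 0)      # Gaussian filtering
```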
In a second aspect, this application further provides an image processing apparatus, including units/modules for performing the method of the first aspect or any possible design of the first aspect; these units/modules may be implemented by hardware, or by hardware executing corresponding software.
In a third aspect, an embodiment of this application further provides an electronic device including a processor and a memory, where the processor is coupled to the memory, the memory is configured to store program instructions, and the processor is configured to read the program instructions stored in the memory to implement the method of the first aspect and any possible design thereof.
In a fourth aspect, embodiments of the present application further provide a computer storage medium storing program instructions, which, when executed on an electronic device, cause the electronic device to perform the method of the first aspect and any possible design thereof.
In a fifth aspect, embodiments of the present application further provide a computer program product, which when run on an electronic device, causes the electronic device to perform the method of the first aspect and any possible design thereof.
In a sixth aspect, embodiments of the present application further provide a chip, which is coupled with a memory in an electronic device, and performs the method of the first aspect and any possible design thereof.
For the technical effects of the second through sixth aspects, refer to the description of the first aspect; they are not repeated here.
It should be noted that "coupled" in the embodiments of the present application means that two components are directly or indirectly combined with each other.
Drawings
Fig. 1 is a schematic structural diagram of a mobile phone 100 according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a user graphical interface of the mobile phone 100 according to an embodiment of the present application;
fig. 3 is a schematic diagram of a user graphical interface of the mobile phone 100 according to an embodiment of the present application;
fig. 4 is a schematic diagram of a user graphical interface of the mobile phone 100 according to an embodiment of the present application;
fig. 5 is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 6 is a schematic diagram of a mask image with burrs at the foreground edge according to an embodiment of the present application;
fig. 7A is a schematic diagram of a mask image with burrs removed by erosion according to an embodiment of the present application;
fig. 7B is a schematic diagram of a mask image after dilation according to an embodiment of the present application;
fig. 8 is a schematic diagram of a mask image with jagged foreground edges according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a mask image after smoothing according to an embodiment of the present disclosure;
FIG. 10 is a schematic diagram of a second mask image after smoothing according to an embodiment of the present application;
fig. 11A is a schematic diagram of a background image and a foreground image to be fused according to an embodiment of the present application;
fig. 11B is a schematic diagram of a fused image obtained by smoothing edge processing according to the embodiment of the present application;
fig. 12 is a schematic structural diagram of an apparatus 1200 according to an embodiment of the present disclosure;
fig. 13 is a schematic structural diagram of an apparatus 1300 according to an embodiment of the present disclosure.
Detailed Description
Embodiments of this application can be applied to electronic devices capable of image segmentation and fusion, such as terminal devices and cloud servers. One or more applications, i.e., computer programs implementing one or more specific functions, may be installed on a terminal device: for example, a camera application, an album application, WeChat, Tencent QQ, WhatsApp Messenger, Line, Instagram, Kakao Talk, DingTalk, and the like. An application mentioned below may be one preinstalled on the terminal device at delivery or one the user downloads from the network side while using the device. The image processing method provided by the embodiments of this application can be applied in one or more of these applications; for example, a camera application can process images using this method.
The image related to the embodiment of the present application may be a picture or a frame of image in a video stream, and the embodiment of the present application is not limited.
The pixel referred to in the embodiments of this application is the smallest imaging unit on an image; one pixel corresponds to one coordinate point on the image. A pixel may comprise a single parameter (such as gray scale) or a set of parameters (such as gray scale, brightness, and color). If a pixel has a single parameter, the pixel value is the value of that parameter; if it has a set of parameters, the pixel value comprises the value of each parameter in the set.
In the embodiments of this application, "a plurality of" means two or more.
It should be noted that the term "and/or" is only one kind of association relationship describing the associated object, and means that three relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship, unless otherwise specified. Moreover, in the description of the embodiments of the present invention, the terms "first," "second," and the like are used for descriptive purposes only and not for purposes of indicating or implying relative importance, nor for purposes of indicating or implying order.
The following describes terminal devices, graphical user interfaces (GUIs) for such terminal devices, and embodiments of using such terminal devices. In some embodiments of this application, the terminal device may be a portable terminal, such as a mobile phone, a tablet computer, or a wearable device with a wireless communication function (e.g., a smart watch), containing components (such as a processor) capable of capturing an image and processing the captured image. Exemplary embodiments of the portable terminal include, but are not limited to, portable terminals running various operating systems. The portable terminal may be any other portable terminal, as long as it can capture an image and process the captured image. It should also be understood that in some other embodiments of this application, the terminal device may not be a portable terminal but a desktop computer capable of capturing images and processing the captured images. In some embodiments of this application, the terminal device may also be a camera, a video camera, a camcorder, or the like.
In other embodiments of this application, the terminal device may have a communication function rather than an image capture function; that is, it may receive an image sent by another device and then process the received image. The other device may be a network-side device, as when the terminal device downloads an image from the network, or another device that sends the image through the WeChat application or another communication application.
In other embodiments of the present application, the terminal device may not have an image processing function, but may have a communication function. For example, after the terminal device acquires an image, the image may be sent to another device, such as a server, and the other device processes the image by using the image processing method provided in the embodiment of the present application, and then sends the processed image to the terminal device.
Taking the terminal device as a mobile phone as an example, fig. 1 shows a schematic structural diagram of a mobile phone 100.
The handset 100 may include a processor 110, an external memory interface 120, an internal memory 121, an antenna 1, an antenna 2, a mobile communication module 151, a wireless communication module 152, a sensor module 180, keys 190, a display 194, a camera 193, and the like. It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the mobile phone 100. In other embodiments of the present application, the handset 100 may include more or fewer components than shown, or some components may be combined, some components may be separated, or a different arrangement of components may be used. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The components of the handset 100 shown in fig. 1 are described below.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors. The controller may be a neural center and a command center of the cell phone 100, among others. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
The processor 110 can execute software codes of the image processing algorithm provided by the embodiment of the present application to perform the following image processing procedures, which will be described later.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the cellular phone 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store software codes of an operating system and application programs (such as a camera application, an album application, a WeChat application, and the like). The storage data area may store personal data created during use of the handset 100 (e.g., images before processing, images after processing, etc.).
The internal memory 121 may also store software codes of the image processing method provided by the embodiment of the present application. When the processor 110 runs the code, the image processing flow below is performed.
The internal memory 121 may also store other contents, for example, a mask image of a foreground and a background of the image to be processed is stored in the internal memory 121, and the processor 110 may process the mask image.
The internal memory 121 may also store the processed mask image, the fused image, and the like, and illustratively, the fused image and the original image (i.e., the image before the segmentation and fusion) may be stored correspondingly. The internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
The external memory interface 120 is used to connect an external memory to the mobile phone 100; the external memory includes, for example, an external memory card (SD card) or a NAS storage device, which is not limited in the embodiments of this application. To save space in the internal memory 121, the mobile phone 100 may also store the software code of the image processing method provided by the embodiments of this application, processed images, and the like in the external memory. The processor 110 may access the data stored in the external memory through the external memory interface 120.
The sensor module 180 may include a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, etc., which are not shown in fig. 1.
The distance sensor is used to measure distance; the mobile phone 100 may measure distance by infrared or laser. In some embodiments, when photographing a scene, the mobile phone 100 may use the distance sensor to measure distance for fast focusing. In other embodiments, the mobile phone 100 may also use the distance sensor to detect the presence of a person or object. The proximity light sensor may include, for example, a light emitting diode (LED) and a light detector such as a photodiode. The light emitting diode may be an infrared LED; the mobile phone 100 emits infrared light outward through it and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, the mobile phone 100 determines that an object is nearby; when insufficient reflected light is detected, it determines that no object is nearby. Using the proximity light sensor, the mobile phone 100 can detect that the user is holding it close to the ear and automatically turn off the screen to save power. The proximity light sensor can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
The ambient light sensor senses the ambient brightness. The mobile phone 100 may adaptively adjust the brightness of the display 194 according to the perceived ambient light level. The ambient light sensor can also automatically adjust the white balance when taking a picture, and can cooperate with the proximity light sensor to detect whether the mobile phone 100 is in a pocket to prevent accidental touches. The fingerprint sensor collects fingerprints; the mobile phone 100 can use the collected fingerprint characteristics to unlock, access an application lock, take a photo, answer an incoming call, and so on. The temperature sensor detects temperature; in some embodiments, the mobile phone 100 implements a temperature processing strategy using the detected temperature.
Touch sensors, also known as "touch panels". The touch sensor may be disposed on the display screen 194, and the touch sensor and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor can be disposed on the surface of the mobile phone 100, different from the position of the display 194.
In addition, the mobile phone 100 can implement audio functions through an audio module, a speaker, a receiver, a microphone, an earphone interface, an application processor, and the like. Such as music playing, recording, etc. The handset 100 may receive key 190 inputs, generating key signal inputs relating to user settings and function controls of the handset 100. The handset 100 can generate a vibration alert (e.g., an incoming call vibration alert) using a motor. The indicator in the mobile phone 100 may be an indicator light, and may be used to indicate a charging state and a power change, or may be used to indicate a message, a missed call, a notification, and the like. The SIM card interface in the handset 100 is used to connect the SIM card. The SIM card can be attached to and detached from the cellular phone 100 by being inserted into or pulled out from the SIM card interface.
It is to be understood that the illustrated structure of the embodiment of the present invention does not specifically limit the mobile phone 100. In other embodiments of the present application, the handset 100 may include more or fewer components than shown, or some components may be combined, some components may be separated, or a different arrangement of components may be used. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Embodiments of this application can be applied to image processing scenarios involving a foreground and a background, in particular image processing during photographing, image processing during video recording, or photo retouching. For example, the foreground is a person or an animal and the background is a scene, and the two need different processing: the foreground and/or background are processed individually according to the user's needs, for example adjusting the color temperature and tone of the foreground and background, blurring the background, or adjusting the lighting effects of the foreground and background. The following description takes camera photographing, image processing in a gallery, and retouching software as examples.
In an example, the image processing provided by the embodiment of the present application may be applied to a photographing scene of a camera.
Take the adjustment of the color tone of the background image as an example. The display screen 194 of the handset 100 displays a home interface, where the home interface includes a camera icon. By way of example, the main interface may be the user interface 200 shown in FIG. 2 (a). Therein, the user interface 200 includes a camera icon 201. In addition, the user interface 200 may include icons for other applications, such as a settings icon, a memo icon, a gallery icon, and the like.
In response to a first operation by which the user triggers the camera icon 201, the mobile phone 100 starts the camera 193 and displays the viewfinder interface 202 on the display 194, where the image captured by the camera 193 is shown. It should be understood that the camera 193 in the mobile phone 100 may also be always on. The application corresponding to the camera icon 201 is referred to as the camera below, though it may go by other names. The first operation may be a tap on the camera icon 201, a voice instruction issued by the user (for example, "turn on the camera"), or a shortcut gesture (for example, a three-finger swipe down). In addition, in the embodiments of this application, when the mobile phone 100 is in the screen-off or lock-screen state, it may directly start the camera 193 and display the viewfinder interface 202 on the display 194 in response to a voice instruction or shortcut gesture.
For example, the viewfinder interface 202 shown in fig. 2(b) takes the example of capturing an image by a front camera, and the viewfinder interface 202 displays the image captured by the front camera of the mobile phone 100. The viewing interface 202 may also display images captured by a rear camera of the handset 100. It should be noted that, in the embodiment of the present application, the mobile phone 100 can switch between the front camera and the rear camera in response to the operation of the camera switching button 204 shown in fig. 2 (c).
The following description takes as an example the mobile phone 100 displaying the viewfinder interface 202 on the display 194 in response to the first operation, with the image captured by the front camera shown in the viewfinder interface 202. For example, as shown in fig. 2(b), the viewfinder interface may further include a shooting mode selection button area 203 with mode selection buttons such as photo, video, pro, portrait, large aperture, night, and more; the user can view further shooting modes of the mobile phone 100 by sliding left and right in this area. Taking selection of the "portrait" mode as an example, as shown in fig. 2(c), the viewfinder interface 202 may further include a hue selection button 206. In response to the user tapping the hue selection button 206, the mobile phone 100 presents several hues to choose from; as shown in fig. 2(c), it pops up hue A and hue B for the user to select, as an example.
If the user taps hue A, the mobile phone 100 obtains a first mask image by segmenting the foreground and background of the to-be-processed image captured by the camera in the viewfinder interface 202, and then performs first processing on the first mask image to obtain a second mask image; for example, the first processing may include erosion and/or dilation. The mobile phone 100 then segments the foreground and background of the image to be processed based on the second mask image to obtain a first foreground image and a first background image, performs personalized processing meeting the user's requirements on them to obtain a second foreground image and a second background image, and finally fuses the two to obtain a fused image. The fused image is displayed in the viewfinder interface 202, where the user sees the image with the adjusted hue. Once satisfied, the user can tap the shoot button 205, and the mobile phone 100, in response, captures the processed image shown in the viewfinder interface 202, yielding an image with the background hue adjusted.
As an example, if the user is not satisfied with the hue-adjusted image in the viewfinder interface 202, the user may tap the hue selection button 206 again, triggering the mobile phone 100 to keep adjusting the viewfinder content according to the above process until the image is satisfactory, and then tap the shoot button 205.
In another example, the image processing provided by the embodiment of the present application may be applied to a scene in which images in a gallery are processed.
Referring to fig. 3(a), the mobile phone 100 displays a main interface 301 that includes icons of multiple applications, including an icon 302 of the gallery. When the mobile phone 100 detects a second operation on the icon 302, it displays the interface 303 shown in fig. 3(b), which includes thumbnails of multiple images. When the mobile phone 100 detects an operation on the thumbnail 304, it displays the interface 305 containing the image 306 (the image corresponding to the thumbnail 304), as shown in fig. 3(c). Upon detecting an operation on the control 307, the mobile phone 100 displays several options, such as a "tone adjustment mode" option and a "filter" option. When the mobile phone 100 detects that "tone adjustment mode" is selected with tone A, the processor 110 in the mobile phone 100 processes the image 306 using the image processing method provided by the embodiments of this application (see the description of the embodiment corresponding to fig. 5 below, not repeated here), adjusting the tone of the background portion of the image 306 to tone A, and displays the processed, fused image on the display screen, as shown in fig. 3(d).
In yet another example, the image processing provided by the embodiments of this application may be applied in retouching software. The retouching software may be preinstalled on the mobile phone 100 at delivery or downloaded and installed from the network side. There are many kinds of retouching software, such as Meitu, VSCO, and MIX; alternatively, the retouching function may be integrated into the gallery of the second scenario above. The embodiments of this application are not limited in this respect.
Illustratively, referring to fig. 4(a), the mobile phone 100 displays an interface 401 containing an image 402. When the mobile phone 100 detects an operation on the edit control 403, two options are displayed: "tone adjustment mode" and a filter mode. When the mobile phone 100 detects an operation on "tone adjustment mode", the processor 110 in the mobile phone 100 processes the image 402 using the image processing method provided by the embodiments of this application (see the embodiment corresponding to fig. 5, not repeated here) and displays the processed, fused image on the display 194, as shown in fig. 4(b). It should be understood that adjusting the image tone is only an example; the processing may equally be background blurring, lighting adjustment, and so on, and any scenario that processes the foreground and background images separately is applicable. Beyond the three scenarios listed above, the method provided by the embodiments of this application may also be used in others, such as video recording, WeChat video calls, and producing WeChat sticker packs.
Beyond the scenarios above, the embodiments of this application also apply when the mobile phone 100 receives an image from another electronic device, processes it as described in the embodiments, and then stores the processed image locally, displays it to the user, or returns it to the device that sent the original image; details are not repeated here.
The following describes in detail how the image processing method provided by the embodiments of this application is executed inside the device; refer to fig. 5. The process can run on the mobile phone 100 shown in fig. 1 or on other devices. Taking the mobile phone 100 of fig. 1 as an example, the software code of the method may be stored in the internal memory 121, and the processor 110 may execute it to implement the image processing flow shown in fig. 5.
As shown in fig. 5, the flow of the method includes:
s501, the mobile phone 100 obtains a first mask image for segmenting a foreground and a background of an image to be processed, where the first mask image is a binary mask image.
The image to be processed may be captured by the mobile phone 100 through a camera, or may be a picture stored in the mobile phone 100.
Illustratively, the mobile phone 100 may obtain the first mask image of the image to be processed through a deep-learning foreground-background segmentation algorithm. An area with pixel value 0 in the first mask image may represent the background and an area with pixel value 1 the foreground, or the reverse; this application is not limited in this respect. In the embodiments below, an area with pixel value 0 in a mask image represents the background and an area with pixel value 1 represents the foreground.
S502, the mobile phone 100 performs first processing on the first mask image to obtain a second mask image, where the first processing may include erosion and/or dilation.
It should be noted that erosion eliminates the boundary points of an object, shrinking the target and removing noise points smaller than the structuring element, while dilation merges into the object all background points in contact with it, enlarging the object and filling holes inside it.
Illustratively, when the first processing includes erosion and/or dilation, the mobile phone 100 may apply it to the first mask image in either of the following ways:
In one way, the mobile phone 100 erodes the first mask image and then dilates the eroded mask (a morphological opening).
In another way, the mobile phone 100 dilates the first mask image and then erodes the dilated mask (a morphological closing).
S503, the mobile phone 100 segments the foreground and the background of the image to be processed based on the second mask image to obtain a first foreground image and a first background image of the image to be processed.
A deep-learning segmentation model takes a long time to produce a mask image, so this time must be reduced to improve the user experience. In video processing the constraint is stricter: because video is real-time, each frame must be processed within a limited time, otherwise frames are dropped or inter-frame effects become inconsistent. The original image is therefore typically downsampled to a low resolution, such as 288 x 288; the deep-learning segmentation model then also outputs a low-resolution mask, which is upsampled back to the resolution of the original image. Downsampling loses information, so the segmentation model is likely to mistake part of the background for foreground, and the output foreground edge contains burrs, as shown in fig. 6. Applying erosion and dilation to the mask obtained from the segmentation model reduces these edge burrs: eroding the original mask first eliminates the burrs, as shown in fig. 7A, but generally leaves the mask slightly smaller than the original, so the eroded mask is then dilated, as shown in fig. 7B. The foreground and background images obtained by segmenting the original image with this mask therefore have burr-free edges and a natural visual effect.
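The low-resolution path described above can be sketched as follows, where run_model is a hypothetical stand-in for the deep-learning segmentation model, and the structuring-element size and interpolation mode are assumptions:

```python
# Sketch of the downsample / segment / upsample / clean-up path; illustrative only.
import cv2
import numpy as np

def low_res_mask(image, run_model, model_size=(288, 288)):
    """image: HxWx3 uint8; run_model: hypothetical model returning a 0/1 uint8 mask."""
    h, w = image.shape[:2]
    small = cv2.resize(image, model_size)            # downsample to cut model runtime
    small_mask = run_model(small)                    # low-resolution binary mask
    mask = cv2.resize(small_mask, (w, h),            # upsample back to full resolution
                      interpolation=cv2.INTER_NEAREST)
    kernel = np.ones((5, 5), np.uint8)               # assumed structuring element
    mask = cv2.erode(mask, kernel)                   # remove burrs (fig. 7A)
    mask = cv2.dilate(mask, kernel)                  # restore foreground size (fig. 7B)
    return mask
```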
The mask recovered by upsampling may also have jagged edges, as shown in fig. 8. To eliminate the jaggedness at the edge of the first mask image produced by the deep-learning segmentation model, the first mask image may be smoothed (also called blurred), the smoothed mask binarized, and the foreground and background of the image to be processed then segmented according to the binarized mask.
In addition, although erosion and/or dilation removes the burrs at the mask edge, jaggedness may remain. In this case, the first processing in step S502 may further include smoothing: the third mask image obtained through erosion and/or dilation is smoothed (also called blurred), and the smoothed mask is then binarized against a set first threshold to obtain the second mask image. Further, in step S503, the mobile phone 100 segments the foreground and background of the image to be processed according to this second mask image to obtain the first foreground image and first background image. The edge of the mask image binarized after smoothing no longer shows jaggedness, as shown in fig. 9.
For example, when smoothing a mask image (such as the first, second, or third mask image) in the embodiments of this application, the image may be processed with a mean filter, median filter, Gaussian filter, or bilateral filter to achieve the smoothing effect.
After segmenting the foreground and background of the image to be processed based on the second mask image to obtain the first foreground image and first background image, the mobile phone 100 executes S504, performing personalized processing meeting the user's requirements on the first foreground image and/or the first background image to obtain a second foreground image and a second background image, and then executes S505, fusing the second foreground image and the second background image to obtain a fused image.
After the original image to be processed is obtained, the foreground and background are generally segmented according to the mask and processed separately. When processing the background image, only the locations where the mask is 0 are processed, and the pixel value is set directly to 0 wherever the mask is not 0; when processing the foreground image, only the locations where the mask is 1 are processed, and the pixel value is set directly to 0 wherever the mask is not 1. This yields a processed foreground image and a processed background image, which are conventionally fused by simply adding the two images. When the foreground and background differ strongly, such simple addition produces a hard transition and an obvious region edge, which looks very unnatural.
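As a sketch of this per-region processing, with adjust_tone and blur_background as hypothetical placeholders for whatever personalized effects the user selects:

```python
# Each branch keeps only its own region and zeroes the rest, as described above.
import numpy as np

def process_regions(image, mask, adjust_tone, blur_background):
    """image: HxWx3; mask: HxW with 1 = foreground, 0 = background."""
    fg = np.where(mask[..., None] == 1, adjust_tone(image), 0)      # foreground branch
    bg = np.where(mask[..., None] == 0, blur_background(image), 0)  # background branch
    return fg, bg
```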
To solve the problem of unnatural edge transitions, the second mask image may first be smoothed before the second foreground image and second background image are fused, for example as shown in fig. 10. A fusion weight is then determined from the smoothed second mask image, and the second foreground image and second background image are fused according to that weight. The fusion weight here comprises a weight value for the second foreground image and a weight value for the second background image.
Exemplarily, when fusing the second foreground image and the second background image according to the fusion weight, the fused image may be obtained through formula (1):
fusion_i = mask_i * foreground_i + (1 - mask_i) * background_i    (1)
where fusion_i represents the pixel value of the i-th pixel point of the fused image, mask_i the pixel value of the i-th pixel point of the smoothed second mask image, foreground_i the pixel value of the i-th pixel point of the second foreground image, and background_i the pixel value of the i-th pixel point of the second background image.
As formula (1) shows, to make the transition between foreground and background natural after fusion, the mask image that separates them is smoothed, and the smoothed pixel value mask_i then serves as the weight of the foreground image and (1 - mask_i) as the weight of the background image in a weighted fusion, so that the edge of the foreground image transitions smoothly into the background image. For example, for the background and foreground images to be fused shown in fig. 11A, the fused image obtained with this smoothed-edge processing is shown in fig. 11B.
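Tying the steps together, a minimal sketch of the feathered fusion; the 21 x 21 Gaussian kernel is an assumed value, as the patent leaves the smoothing parameters open:

```python
# The binary second mask is blurred into a soft weight map before blending,
# replacing the hard fg + bg addition that produces visible seams.
import cv2
import numpy as np

def feathered_fusion(fg, bg, second_mask):
    """fg, bg: HxWx3 uint8 processed images; second_mask: HxW uint8 with values 0/1."""
    weights = cv2.GaussianBlur(second_mask.astype(np.float32), (21, 21), 0)[..., None]
    fused = weights * fg + (1.0 - weights) * bg   # formula (1), per pixel and channel
    return np.clip(fused, 0, 255).astype(np.uint8)
```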
In the embodiments provided in the present application, the method provided in the embodiments of the present application is described from the perspective of the terminal device (the mobile phone 100) as an execution subject. In order to implement the functions in the method provided by the embodiment of the present application, the terminal may include a hardware structure and/or a software module, and implement the functions in the form of a hardware structure, a software module, or a hardware structure and a software module. Whether any of the above-described functions is implemented as a hardware structure, a software module, or a hardware structure plus a software module depends upon the particular application and design constraints imposed on the technical solution.
Based on the same concept as the method above, an embodiment of this application further provides an apparatus 1200, shown in fig. 12, which may specifically be a processor or chip system in an electronic device, or a module in the electronic device. Illustratively, the apparatus may comprise an acquisition unit 1201, a mask processing unit 1202, a segmentation unit 1203, a personalization processing unit 1204, and a fusion unit 1205, which perform the method steps illustrated in the embodiment of fig. 5. For example, the acquisition unit 1201 may acquire the first mask image of the image to be processed; the mask processing unit 1202 performs the first processing on the first mask image; the segmentation unit 1203 segments the foreground and background of the image to be processed to obtain the first foreground image and first background image; the personalization processing unit 1204 performs personalized processing on the first foreground image and/or first background image according to the user's requirements to obtain the second foreground image and second background image; and the fusion unit 1205 fuses the second foreground image and the second background image.
An embodiment of this application also provides an electronic device. As shown in fig. 13, the electronic device 1300 may include a processor 1310 and, optionally, a memory 1320, which may be located inside or outside the electronic device 1300. The acquisition unit 1201, mask processing unit 1202, segmentation unit 1203, personalization processing unit 1204, and fusion unit 1205 shown in fig. 12 may all be implemented by the processor 1310. Optionally, the electronic device 1300 may further include a display 1330 and a camera 1340. The processor 1310 is coupled to the memory 1320, the display 1330, and the camera 1340; in the embodiments of this application, a coupling is an indirect coupling or communication connection between devices, units, or modules, which may be electrical, mechanical, or in another form, used for information exchange between them. It should be noted that the display and the camera may or may not be part of the electronic device; for example, the display and/or the camera may be connected to the electronic device as external devices.
Specifically, the memory 1320 is used to store program instructions. The display 1330 is used to display a photographing preview interface that includes images captured by the camera 1340. The processor 1310 is configured to invoke the program instructions stored in the memory 1320 so that the electronic device 1300 performs the steps performed by the electronic device in the image processing method shown in fig. 5.
It should be understood that the electronic device 1300 may be used to implement the method of the image processing method shown in fig. 5 according to the embodiment of the present application, and reference may be made to the above for relevant features, which are not described herein again.
Embodiments of the present application also provide a computer-readable storage medium, which may include a memory, where the memory may store a program, and the program, when executed, causes an electronic device to execute all or part of the steps described in the method embodiment shown in fig. 5.
Embodiments of the present application also provide a computer program product, which when run on an electronic device, causes the electronic device to perform all or part of the steps described in the method embodiment shown in fig. 5.
It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation. Each functional unit in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. For example, in the above embodiment, the first obtaining unit and the second obtaining unit may be the same unit or different units. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
As used in the above embodiments, the term "when …" may be interpreted to mean "if …" or "after …" or "in response to a determination of …" or "in response to a detection of …", depending on the context. Similarly, depending on the context, the phrase "at the time of determination …" or "if (a stated condition or event) is detected" may be interpreted to mean "if the determination …" or "in response to the determination …" or "upon detection (a stated condition or event)" or "in response to detection (a stated condition or event)".
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the application to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, fiber optic, digital subscriber line) or wirelessly (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk), among others.
The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the exemplary discussions above are not intended to be exhaustive or to limit the application to the precise forms disclosed. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the application and its practical applications, to thereby enable others skilled in the art to best utilize the application and various embodiments with various modifications as are suited to the particular use contemplated.

Claims (10)

1. An image processing method applied to an electronic device, the method comprising:
acquiring a first mask image used for segmenting the foreground and the background of an image to be processed, wherein the first mask image is a binary mask image;
performing first processing on the first mask image to obtain a second mask image, wherein the first processing comprises erosion and/or dilation;
and segmenting the foreground and the background of the image to be processed based on the second mask image to obtain a first foreground image and a first background image of the image to be processed.
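Purely as an illustrative sketch of how the steps recited in claim 1 could map onto common library primitives (this is not the claimed implementation; the 5×5 kernel, the 0/255 mask convention, and the specific OpenCV calls are all assumptions):

```python
import cv2
import numpy as np

def segment_with_refined_mask(image: np.ndarray, first_mask: np.ndarray):
    """Refine a binary (0/255) single-channel mask by erosion/dilation,
    then split the image to be processed into foreground and background."""
    kernel = np.ones((5, 5), np.uint8)                 # illustrative 5x5 element
    # First processing: erosion followed by dilation trims edge burrs.
    second_mask = cv2.dilate(cv2.erode(first_mask, kernel), kernel)
    mask3 = cv2.cvtColor(second_mask, cv2.COLOR_GRAY2BGR)
    first_foreground = cv2.bitwise_and(image, mask3)   # keep masked-in pixels
    first_background = cv2.bitwise_and(image, cv2.bitwise_not(mask3))
    return first_foreground, first_background, second_mask
```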
2. The method of claim 1, wherein the first processing further comprises smoothing;
the first processing of the first mask image to obtain a second mask image includes:
carrying out corrosion and/or expansion treatment on the first mask image to obtain a third mask image;
and smoothing the third mask image, and performing binarization processing on the smoothed mask image according to a first threshold value to obtain the second mask image.
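A minimal sketch of the claim 2 variant, under the same assumptions as above; Gaussian smoothing and a first threshold of 128 are illustrative choices, since the claim leaves both the filter and the threshold open:

```python
import cv2
import numpy as np

def refine_mask(first_mask: np.ndarray, first_threshold: int = 128) -> np.ndarray:
    """Erosion/dilation yields the third mask; smoothing it and
    re-binarizing against a first threshold yields the second mask."""
    kernel = np.ones((5, 5), np.uint8)
    third_mask = cv2.dilate(cv2.erode(first_mask, kernel), kernel)
    smoothed = cv2.GaussianBlur(third_mask, (9, 9), 0)  # any claim-7 filter would do
    _, second_mask = cv2.threshold(smoothed, first_threshold, 255, cv2.THRESH_BINARY)
    return second_mask
```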
3. The method of claim 1 or 2, wherein the method further comprises:
and performing, on the first foreground image and/or the first background image, personalized processing that meets user requirements, to obtain a second foreground image and a second background image, and fusing the second foreground image and the second background image.
4. The method of claim 3, wherein fusing the second foreground image and the second background image comprises:
and determining a fusion weight according to a mask image obtained by smoothing the second mask image, and fusing the second foreground image and the second background image according to the fusion weight.
5. The method according to claim 4, wherein the determining a fusion weight according to the mask image obtained by smoothing the second mask image, and fusing the second foreground image and the second background image according to the fusion weight comprises:
fusing the second foreground image and the second background image according to the following formula to obtain a fused image:
fusion_i = mask_i * foreground_i + (1 - mask_i) * background_i
wherein fusion_i represents the pixel value of the i-th pixel point of the fused image, mask_i represents the pixel value of the i-th pixel point of the mask image obtained after the smoothing processing, foreground_i represents the pixel value of the i-th pixel point of the second foreground image, and background_i represents the pixel value of the i-th pixel point of the second background image.
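The formula of claim 5 is an ordinary per-pixel alpha blend. A minimal sketch, assuming a 0/255 uint8 second mask whose smoothed values are rescaled to [0, 1] to serve as the fusion weights (the Gaussian kernel size is an assumption):

```python
import cv2
import numpy as np

def fuse(second_foreground: np.ndarray, second_background: np.ndarray,
         second_mask: np.ndarray) -> np.ndarray:
    """fusion_i = mask_i * foreground_i + (1 - mask_i) * background_i,
    with mask_i read from the smoothed second mask."""
    smoothed = cv2.GaussianBlur(second_mask, (15, 15), 0)
    w = (smoothed.astype(np.float32) / 255.0)[..., None]  # per-pixel weight in [0, 1]
    fused = (w * second_foreground.astype(np.float32)
             + (1.0 - w) * second_background.astype(np.float32))
    return np.clip(fused, 0.0, 255.0).astype(np.uint8)
```

Because the weights vary continuously across the smoothed mask edge, the foreground-to-background transition in the fused image is gradual rather than a hard cut, which is the stated purpose of smoothing the mask before fusion.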
6. The method of any one of claims 1 to 5, wherein the performing erosion and/or dilation on the first mask image comprises:
performing erosion on the first mask image and then performing dilation on the eroded first mask image; or,
performing dilation on the first mask image and then performing erosion on the dilated first mask image.
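In standard morphology terms, the two orderings recited in claim 6 are an opening (erosion then dilation) and a closing (dilation then erosion), and OpenCV exposes each as a single call. The stand-in mask and the 5×5 structuring element below are illustrative assumptions:

```python
import cv2
import numpy as np

first_mask = np.zeros((64, 64), np.uint8)   # stand-in binary mask for the demo
first_mask[16:48, 16:48] = 255

kernel = np.ones((5, 5), np.uint8)          # illustrative structuring element
# Erosion then dilation (opening) removes burr-like specks and protrusions.
opened = cv2.morphologyEx(first_mask, cv2.MORPH_OPEN, kernel)
# Dilation then erosion (closing) fills small holes and notches in the mask.
closed = cv2.morphologyEx(first_mask, cv2.MORPH_CLOSE, kernel)
```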
7. The method according to claim 2 or 4, wherein the smoothing is any one of the following:
mean filtering, median filtering, bilateral filtering, or Gaussian filtering.
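For reference, each smoothing option in claim 7 corresponds to a stock OpenCV filter; the kernel sizes and bilateral parameters below are arbitrary illustrative choices:

```python
import cv2
import numpy as np

mask = np.full((64, 64), 128, np.uint8)     # stand-in grayscale mask

mean_filtered      = cv2.blur(mask, (5, 5))                 # mean filtering
median_filtered    = cv2.medianBlur(mask, 5)                # median filtering
bilateral_filtered = cv2.bilateralFilter(mask, 9, 75, 75)   # bilateral filtering
gaussian_filtered  = cv2.GaussianBlur(mask, (5, 5), 0)      # Gaussian filtering
```

Mean and Gaussian filtering give soft, symmetric edge ramps (useful as fusion weights), while median and bilateral filtering smooth the mask with less displacement of its edges.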
8. An electronic device comprising a processor and a memory;
the memory is configured to store a first image and one or more computer programs; when the one or more computer programs stored in the memory are executed by the processor, the electronic device is enabled to implement the method of any one of claims 1 to 7.
9. A computer-readable storage medium, comprising a computer program which, when run on an electronic device, causes the electronic device to perform the method of any of claims 1 to 7.
10. A program product comprising instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1 to 7.
CN201910544448.2A 2019-06-21 2019-06-21 Image processing method and electronic equipment Pending CN112116624A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910544448.2A CN112116624A (en) 2019-06-21 2019-06-21 Image processing method and electronic equipment

Publications (1)

Publication Number Publication Date
CN112116624A (en) 2020-12-22

Family

ID=73795164

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910544448.2A Pending CN112116624A (en) 2019-06-21 2019-06-21 Image processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN112116624A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103413347A (en) * 2013-07-05 2013-11-27 南京邮电大学 Extraction method of monocular image depth map based on foreground and background fusion
CN104992445A (en) * 2015-07-20 2015-10-21 河北大学 Automatic division method for pulmonary parenchyma of CT image
CN107730528A (en) * 2017-10-28 2018-02-23 天津大学 A kind of interactive image segmentation and fusion method based on grabcut algorithms
CN108337515A (en) * 2018-01-19 2018-07-27 浙江大华技术股份有限公司 A kind of method for video coding and device
CN108596940A (en) * 2018-04-12 2018-09-28 北京京东尚科信息技术有限公司 A kind of methods of video segmentation and device
CN109829877A (en) * 2018-09-20 2019-05-31 中南大学 A kind of retinal fundus images cup disc ratio automatic evaluation method

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022160701A1 (en) * 2021-01-29 2022-08-04 北京市商汤科技开发有限公司 Special effect generation method and apparatus, device, and storage medium
CN113096069A (en) * 2021-03-08 2021-07-09 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113096069B (en) * 2021-03-08 2024-07-26 北京达佳互联信息技术有限公司 Image processing method, device, electronic equipment and storage medium
CN112991208A (en) * 2021-03-11 2021-06-18 Oppo广东移动通信有限公司 Image processing method and device, computer readable medium and electronic device
CN112991208B (en) * 2021-03-11 2024-05-07 Oppo广东移动通信有限公司 Image processing method and device, computer readable medium and electronic equipment
CN114092364A (en) * 2021-08-12 2022-02-25 荣耀终端有限公司 Image processing method and related device
CN114092364B (en) * 2021-08-12 2023-10-03 荣耀终端有限公司 Image processing method and related device
WO2024016923A1 (en) * 2022-07-18 2024-01-25 北京字跳网络技术有限公司 Method and apparatus for generating special effect graph, and device and storage medium
CN115908120A (en) * 2023-01-06 2023-04-04 荣耀终端有限公司 Image processing method and electronic device
CN115908120B (en) * 2023-01-06 2023-07-07 荣耀终端有限公司 Image processing method and electronic device

Similar Documents

Publication Publication Date Title
CN110493538B (en) Image processing method, image processing device, storage medium and electronic equipment
CN112116624A (en) Image processing method and electronic equipment
CN114205522B (en) Method for long-focus shooting and electronic equipment
CN113747085B (en) Method and device for shooting video
CN105874776B (en) Image processing apparatus and method
CN110100251B (en) Apparatus, method, and computer-readable storage medium for processing document
EP3125135A1 (en) Picture processing method and device
CN111491102B (en) Detection method and system for photographing scene, mobile terminal and storage medium
CN109040523B (en) Artifact eliminating method and device, storage medium and terminal
CN110430357B (en) Image shooting method and electronic equipment
CN112153272B (en) Image shooting method and electronic equipment
US12003850B2 (en) Method for selecting image based on burst shooting and electronic device
CN110072057B (en) Image processing method and related product
CN112614057A (en) Image blurring processing method and electronic equipment
CN113810604B (en) Document shooting method, electronic device and storage medium
US20210406532A1 (en) Method and apparatus for detecting finger occlusion image, and storage medium
CN113810588B (en) Image synthesis method, terminal and storage medium
US20230224574A1 (en) Photographing method and apparatus
US20240086580A1 (en) Unlocking method and electronic device
KR20210101009A (en) Method for Recording Video using a plurality of Cameras and Device thereof
CN108259767B (en) Image processing method, image processing device, storage medium and electronic equipment
CN112532854B (en) Image processing method and electronic equipment
CN109040604B (en) Shot image processing method and device, storage medium and mobile terminal
CN115567783B (en) Image processing method
US20230014272A1 (en) Image processing method and apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination