CN114170222A - Image processing method, related device, equipment and storage medium - Google Patents

Image processing method, related device, equipment and storage medium

Info

Publication number
CN114170222A
Authority
CN
China
Prior art keywords
image
target
target image
region
shooting
Prior art date
Legal status
Withdrawn
Application number
CN202111633399.3A
Other languages
Chinese (zh)
Inventor
翁超
Current Assignee
Shenzhen TetrasAI Technology Co Ltd
Original Assignee
Shenzhen TetrasAI Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen TetrasAI Technology Co Ltd filed Critical Shenzhen TetrasAI Technology Co Ltd
Priority to CN202111633399.3A
Publication of CN114170222A
Legal status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses an image processing method, a related device, equipment and a storage medium, wherein the method comprises the following steps: confirming entry into a dim light mode in response to detecting that the brightness of the current environment is not greater than a preset brightness threshold; shooting the current environment to obtain a depth image and a target image respectively, wherein the target image comprises at least one of a black-and-white image and a color image; determining a first object region of a target object in the target image based on a first detection region of the target object in the depth image, wherein the first detection region is obtained by performing target object detection on the depth image; and performing enhancement processing on the first object region in the target image. By the method, the imaging quality of the target object can be improved.

Description

Image processing method, related device, equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method, and a related apparatus, device, and storage medium.
Background
With the development of electronic devices, people have increasingly high requirements for the imaging capability of electronic devices. At present, the imaging performance of electronic devices under dark light conditions is a focus of attention; in particular, improving the imaging quality of portraits under dark light conditions is of great significance for improving the competitiveness of products.
In the existing dark light shooting mode, after a color image is obtained, the color image is detected, and when a portrait is detected, targeted processing is performed to improve the imaging quality of the portrait. However, because the light is insufficient under dark light conditions, the portrait often cannot be successfully detected, so the imaging quality of the portrait is poor, which greatly affects the user experience.
Therefore, how to improve the image quality of the portrait under the dark light condition has important significance.
Disclosure of Invention
The application at least provides an image processing method, a related device, equipment and a storage medium.
A first aspect of the present application provides an image processing method, including: confirming entry into a dim light mode in response to detecting that the brightness of the current environment is not greater than a preset brightness threshold; shooting the current environment to obtain a depth image and a target image respectively, wherein the target image comprises at least one of a black-and-white image and a color image; determining a first object region of a target object in the target image based on a first detection region of the target object in the depth image, wherein the first detection region is obtained by performing target object detection on the depth image; and performing enhancement processing on the first object region in the target image.
Therefore, in the dim light mode, the first object region of the target object in the target image is determined, and the first object region in the target image is subjected to enhancement processing, so that the imaging quality of the target object in the target image can be improved.
Wherein the determining a first object region in the target image about the target object based on a first detection region in the depth image about the target object includes: determining, in the target image, a second object region corresponding to the first detection region; and determining the first object region based on the second object region.
Therefore, by determining a second object region corresponding to the first detection region of the depth image in the target image and obtaining a first object region about the target object in the target image by using the second object region, detection of the target object in the target image by using the depth image is achieved.
After the current environment is respectively photographed to obtain a depth image and a target image, and before a second object region corresponding to the first detection region is determined in the target image, the image processing method further includes: performing target object detection on the target image to obtain a detection result of the target image; the determining the first object region based on the second object region includes: in response to the target object being detected in the target image, fusing a second detection region of the target object included in the detection result with the second object region to obtain the first object region; or, in response to the target object not being detected in the target image, taking the second object region as the first object region.
Therefore, in the target image, the first object region is obtained by fusing the second detection region of the target object with the second object region, so that the detection result of the target object in the target image and the detection result of the target object in the depth image can be utilized simultaneously, detection of the target object in different images is realized, and the detection success rate of the target object is improved. In addition, when the target object is not detected in the target image, the second object region may be directly used as the first object region, so as to locate the target object in the target image.
Wherein, in the above target image, determining a second object region corresponding to the first detection region includes: acquiring a current image registration parameter between the depth image and the target image; determining a pixel correspondence between the depth image and the target image based on the current image registration parameters; determining a second object region corresponding to the first detection region in the target image based on the pixel correspondence.
Therefore, by acquiring the current image registration parameters between the depth image and the target image, the pixel correspondence between the depth image and the target image can be determined, and then the second object region corresponding to the first detection region can be determined in the target image according to this correspondence, so that the target object in the target image can be detected with the aid of the depth image.
Wherein the above-mentioned obtaining of the current image registration parameter between the depth image and the target image includes: acquiring preset image registration parameters as current image registration parameters, wherein the preset image registration parameters are determined after calibration during assembly of a first camera component and a second camera component, the first camera component is used for shooting a target image, and the second camera component is used for shooting a depth image; or when the difference between the imaging time corresponding to the depth image and the imaging time corresponding to the target image is detected to be greater than a preset difference threshold value, performing image registration on the depth image and the target image to obtain the current image registration parameter.
Therefore, when the difference between the imaging time corresponding to the depth image and the imaging time corresponding to the target image is detected to be larger than the preset difference threshold value, the actual image registration parameters of the depth image and the target image can be used as the current image registration parameters by performing image registration on the depth image and the target image, and then the current image registration parameters are used for performing image fusion so as to improve the quality of image fusion.
The image processing method of the present application further includes the following steps to determine the preset image registration parameters: acquiring registration images respectively shot by the first camera assembly and the second camera assembly after assembly, and acquiring actual image registration parameters obtained after image registration is carried out on the basis of the registration images; and in response to the preset image registration parameters and the actual image registration parameters not being consistent, determining the actual image registration parameters as the preset image registration parameters.
Therefore, registration images respectively shot by the first camera assembly and the second camera assembly after assembly are acquired, and the actual image registration parameters obtained by performing image registration on the basis of the registration images are acquired; when the preset image registration parameters are inconsistent with the actual image registration parameters, the first camera assembly and the second camera assembly can be recalibrated by determining the actual image registration parameters as the preset image registration parameters, so that the imaging quality can be improved when the preset image registration parameters are subsequently used for image fusion.
The image processing method is applied to the shooting equipment, and further comprises the following steps: after the collision of the shooting equipment is detected, displaying prompt information, wherein the prompt information is used for prompting a user to carry out image shooting operation; and responding to a shooting instruction input by the user, and obtaining a registration image shot by the first camera assembly and the second camera assembly after assembly.
Therefore, whether the shooting equipment collides or not is detected, and under the condition that the collision is detected, the user is prompted to carry out image shooting operation by displaying prompt information, so that the registration images shot by the first camera shooting assembly and the second camera shooting assembly after assembly can be obtained, and then the registration images can be subsequently utilized to carry out registration, so that the first camera shooting assembly and the second camera shooting assembly can be recalibrated.
Wherein, the detecting that the brightness of the current environment is not greater than the preset brightness threshold includes: and detecting that the brightness of the current environment is not greater than a preset brightness threshold value through the light sensor.
Therefore, the brightness information of the current environment is obtained by utilizing the light sensor for detection, so that the brightness of the current environment is detected to be not more than the preset brightness threshold value through the brightness information obtained by the light sensor, and the brightness detection of the current environment is realized.
Wherein, the detecting that the brightness of the current environment is not greater than the preset brightness threshold includes: and detecting that the brightness of the current environment is not greater than a preset brightness threshold value through the brightness information of the preview picture of the first camera shooting assembly and/or the second camera shooting assembly.
Therefore, the brightness information of the current environment is acquired through the brightness information of the preview picture of the first camera assembly and/or the second camera assembly, so that it can be detected that the brightness of the current environment is not greater than the preset brightness threshold, thereby realizing brightness detection of the current environment.
Wherein, the detecting that the brightness of the current environment is not greater than the preset brightness threshold includes: and responding to a night scene shooting instruction input by a user, and determining that the brightness of the current environment is detected to be not greater than a preset brightness threshold value.
Therefore, by responding to the night scene shooting instruction input by the user, it can be correspondingly determined that the brightness of the detected current environment is not greater than the preset brightness threshold.
Wherein, the above-mentioned shooting respectively the current environment to obtain the depth image and the target image includes: shooting the current environment respectively to obtain the depth images of a preset number of frames and the target images of the preset number of frames, and determining the depth images and the target images obtained corresponding to the same shooting moment as an image group; the determining a first object region about a target object in the target image based on a first detection region about the target object in the depth image, and performing enhancement processing on the first object region in the target image includes: for each group of the images, determining the first object region in the target image in the group of the images by using the first detection region of the depth image in the group of the images, and performing enhancement processing on the first object region in the target image in the group of the images; after the enhancing the first object region in the target image, the method further comprises: and carrying out image fusion on the target image subjected to the enhancement processing to obtain a fused image.
Therefore, by performing enhancement processing on the first object region in the target image of each image group, the imaging quality of the target object in the target image of each image group can be improved. In addition, the target images subjected to enhancement processing are subjected to image fusion to obtain a fused image, the target images of different image groups can be subjected to image fusion, and the imaging quality of the fused image is improved by utilizing the image information of the multi-frame target images.
The second aspect of the present application provides an image processing apparatus, which includes a brightness detection module, an acquisition module, a region detection module, and a processing module, wherein the brightness detection module is configured to detect whether the brightness of the current environment is greater than a preset brightness threshold, and to confirm entering the dim light mode when the brightness of the current environment is not greater than the preset brightness threshold; the acquisition module is configured to shoot the current environment to obtain a depth image and a target image respectively, wherein the target image comprises at least one of a black-and-white image and a color image; the region detection module is configured to determine a first object region of a target object in the target image based on a first detection region of the target object in the depth image, wherein the first detection region is obtained by performing target object detection on the depth image; the processing module is configured to perform enhancement processing on the first object region in the target image.
A third aspect of the present application provides an electronic device, the device comprising a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement the image processing method described in the first aspect above.
A fourth aspect of the present application provides a computer-readable storage medium having stored thereon program instructions that, when executed by a processor, implement the image processing method described in the first aspect above.
According to the above scheme, in the dim light mode, the first object region of the target object in the target image is determined, and the first object region in the target image is subjected to enhancement processing, so that the imaging quality of the target object in the target image can be improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application.
FIG. 1 is a first flowchart of an embodiment of an image processing method of the present application;
FIG. 2 is a second flowchart of an embodiment of an image processing method of the present application;
FIG. 3 is a schematic flow chart diagram illustrating another embodiment of an image processing method of the present application;
FIG. 4 is a schematic flow chart diagram of another embodiment of the image processing method of the present application;
FIG. 5 is a schematic flow chart diagram illustrating a further embodiment of an image processing method according to the present application;
FIG. 6 is a block diagram of an embodiment of an image processing apparatus according to the present application;
FIG. 7 is a block diagram of an embodiment of an electronic device of the present application;
FIG. 8 is a block diagram of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
The following describes in detail the embodiments of the present application with reference to the drawings attached hereto.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, interfaces, techniques, etc. in order to provide a thorough understanding of the present application.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship. Further, the term "plurality" herein means two or more than two. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
The execution subject of the image processing method of the present application may be an image processing apparatus, and the image processing apparatus may be any terminal device, server, other electronic device, or the like that can execute the technical solution disclosed in the embodiment of the method of the present application. For example, the terminal device may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle mounted device, a wearable device, and so on. In some possible implementations, the image processing method may be implemented by a processor calling computer readable instructions stored in a memory. In the present application, the execution subject of the image processing method is simply referred to as an electronic apparatus.
Referring to fig. 1, fig. 1 is a first flowchart illustrating an image processing method according to an embodiment of the present application. In the present embodiment, the image processing method includes the steps of:
step S11: and detecting whether the brightness of the current environment is not greater than a preset brightness threshold value.
Step S12: And confirming to enter the dim light mode in response to detecting that the brightness of the current environment is not greater than a preset brightness threshold value.
The current environment may be considered the environment in which the electronic device is located. The electronic device may determine the brightness of the current environment through its own light sensor, image sensor, or the like, or may acquire the brightness of the current environment determined by another electronic device by communicating with that device. It is to be understood that the present application does not limit the method for determining the brightness of the current environment.
After obtaining the brightness of the current environment, the electronic device may detect whether the brightness of the current environment is not greater than a preset brightness threshold. In this embodiment, the preset brightness threshold may be regarded as a judgment threshold for judging whether the current environment is in a dim condition. Specifically, when it is detected that the brightness of the current environment is not greater than the preset brightness threshold, the brightness of the current environment may be considered insufficient, and the electronic device may confirm entering the dim light mode. When the brightness of the current environment is detected to be greater than the preset brightness threshold, the electronic device may confirm entering the normal mode. The specific value of the preset brightness threshold may be set as needed and is not limited in this application.
In one embodiment, the electronic device may automatically detect whether the brightness of the current environment is not greater than the preset brightness threshold once at intervals to more accurately determine the brightness condition of the current environment in which the electronic device is located. In one embodiment, after the electronic device enters the shooting mode, the electronic device may automatically perform the step of detecting whether the brightness of the current environment is not greater than a preset brightness threshold value, so as to more accurately determine the brightness condition of the current environment in which the electronic device is located. In another embodiment, after the electronic device enters the shooting mode, the electronic device may determine whether the brightness of the current environment is not greater than the preset brightness threshold according to the last detection result of the brightness of the current environment.
In one embodiment, whether the brightness of the current environment is not greater than the preset brightness threshold may be detected by the light sensor, that is, the brightness information of the current environment is obtained by using the light sensor, and then whether the brightness of the current environment is not greater than the preset brightness threshold is detected according to the brightness information obtained by the light sensor. In one embodiment, in the case of having a plurality of light sensors, whether the brightness of the current environment is not greater than the preset brightness threshold may be detected according to the brightness information obtained by one of the light sensors, or whether the brightness of the current environment is not greater than the preset brightness threshold may be detected according to the brightness information obtained by the plurality of light sensors. In another embodiment, in the case of having a plurality of light sensors, it may be detected whether the brightness of the current environment is not greater than the preset brightness threshold according to the brightness information obtained by the light sensor closest to the image capturing component of the electronic device. Therefore, the brightness information of the current environment is obtained by utilizing the light sensor for detection, so that the brightness of the current environment is detected to be not more than the preset brightness threshold value through the brightness information obtained by the light sensor, and the brightness detection of the current environment is realized.
In one embodiment, whether the brightness of the current environment is not greater than a preset brightness threshold can be detected through the brightness information of the preview picture of the first camera assembly. The first camera assembly is, for example, a camera assembly of the electronic device itself and is used for capturing the target image; a specific example is the camera module hardware of the device. The first camera assembly can acquire a preview picture of the current environment, and the brightness information of the preview picture can be used to determine the brightness condition of the current environment. The method for determining the brightness of the current environment from the brightness information of the preview picture may be a method commonly used in the art, and is not described herein again. In a specific embodiment, when the first camera assembly includes a black-and-white camera assembly, since the amount of light entering the black-and-white camera assembly is relatively large, the brightness information of the preview picture acquired by the black-and-white camera assembly can more accurately reflect the brightness of the current environment; therefore, whether the brightness of the current environment is not greater than the preset brightness threshold can be detected according to the brightness information of the preview picture of the black-and-white camera assembly, so as to more accurately determine the brightness of the current environment in which the electronic device is located. Therefore, the brightness information of the current environment is acquired through the brightness information of the preview picture of the first camera assembly, so that it can be detected that the brightness of the current environment is not greater than the preset brightness threshold, thereby realizing brightness detection of the current environment.
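As an illustrative sketch only (the patent does not prescribe a specific algorithm), the brightness of a preview frame could be estimated from its mean luma and compared against the preset brightness threshold; the BT.601 weighting and the threshold value of 40 below are assumptions:

```python
import numpy as np

def is_dim_environment(preview_bgr: np.ndarray, brightness_threshold: float = 40.0) -> bool:
    """Estimate scene brightness from a preview frame (BGR, uint8) and compare it
    against a preset brightness threshold.

    The BT.601 luma weights and the threshold of 40 (on a 0-255 scale) are
    illustrative assumptions, not values taken from the patent.
    """
    b, g, r = preview_bgr[..., 0], preview_bgr[..., 1], preview_bgr[..., 2]
    mean_luma = float(np.mean(0.114 * b + 0.587 * g + 0.299 * r))
    # Brightness "not greater than" the threshold -> enter the dim light mode.
    return mean_luma <= brightness_threshold
```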
In one embodiment, whether the brightness of the current environment is not greater than a preset brightness threshold may be determined by detecting whether a night scene shooting instruction input by a user is received. Specifically, after a night scene shooting instruction input by the user is detected, it may be considered that the user currently needs to enter the dim light mode, and the electronic device may determine, in response to the night scene shooting instruction, that the brightness of the current environment is detected to be not greater than the preset brightness threshold. Therefore, by responding to the night scene shooting instruction input by the user, it can be correspondingly determined that the brightness of the current environment is detected to be not greater than the preset brightness threshold.
Step S13: and respectively shooting the current environment to obtain a depth image and a target image.
In the present application, the target image includes at least one of a black-and-white image and a color image. The black-and-white image is, for example, a grayscale image.
In one embodiment, the electronic device may include a first camera assembly and a second camera assembly, wherein the first camera assembly is used for shooting a target image, and the second camera assembly is used for shooting a depth image.
The first camera assembly may include a first color camera assembly and/or a black and white camera assembly. The color camera shooting assembly is used for shooting to obtain a color image, and the black-and-white camera shooting assembly is used for shooting to obtain a black-and-white image.
The second camera assembly is, for example, a depth camera assembly, such as a TOF (Time-of-Flight) camera assembly or a structured light camera assembly. In another embodiment, the second camera assembly may include a second color camera assembly and a third color camera assembly, where both are configured to capture color images and together form a binocular vision system for obtaining a depth image based on the binocular stereoscopic imaging technique.
Step S14: determining a first object region in the target image with respect to a target object based on a first detection region in the depth image with respect to the target object.
In this embodiment, the first detection region is obtained by performing target object detection on the depth image. For example, the first detection region may be determined by detecting the depth image using a target detection algorithm or an image segmentation algorithm commonly used in the art. The target object is, for example, a portrait.
After the first detection region is determined, the first object region of the target object in the target image can be determined based on the pixel point correspondence between the depth image and the target image. The correspondence between the pixel points of the depth image and the target image is determined, for example, by image registration.
Step S15: and performing enhancement processing on the first object region in the target image.
After the first object region in the target image is determined, an image enhancement algorithm commonly used in the image processing field can be used to perform image enhancement processing on the first object region, so that the imaging quality of the first object region is improved. The image enhancement algorithm is, for example, a histogram equalization algorithm, a Gamma transform algorithm, etc., and will not be described herein.
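A minimal sketch of such region-limited enhancement, assuming a Gamma transform as the enhancement algorithm and an OpenCV/NumPy environment (the gamma value of 0.6 is an arbitrary illustration):

```python
import cv2
import numpy as np

def enhance_region(target_image: np.ndarray, region_mask: np.ndarray, gamma: float = 0.6) -> np.ndarray:
    """Apply a simple gamma lift only to the pixels inside the first object region.

    `region_mask` is a uint8 mask (non-zero inside the region). Gamma < 1 brightens
    dark areas; the value 0.6 is an illustrative assumption.
    """
    lut = np.array([((i / 255.0) ** gamma) * 255 for i in range(256)], dtype=np.uint8)
    enhanced = cv2.LUT(target_image, lut)   # enhance the whole frame once
    out = target_image.copy()
    out[region_mask > 0] = enhanced[region_mask > 0]  # keep only the region enhanced
    return out
```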
Therefore, in the dim light mode, the first object region of the target object in the target image is determined, and the first object region in the target image is subjected to enhancement processing, so that the imaging quality of the target object in the target image can be improved.
Referring to fig. 2, fig. 2 is a second flowchart of an embodiment of an image processing method according to the present application. In this embodiment, the step of "determining a first object region in the target image with respect to the target object based on the first detection region in the depth image with respect to the target object" described above specifically includes step S141 and step S142.
Step S141: in the target image, a second object area corresponding to the first detection area is determined.
In one embodiment, step S141 specifically includes steps S1411 through S1413.
Step S1411: obtaining a current image registration parameter between the depth image and the target image.
In one embodiment, the current image registration parameter between the depth image and the target image may be an image registration parameter determined after calibration of the second camera assembly capturing the depth image and the first camera assembly capturing the target image. For example, during assembly, the first camera assembly and the second camera assembly are calibrated to determine image registration parameters as current image registration parameters. In another embodiment, the current image registration parameter between the depth image and the target image may be an image registration parameter after image registration of the depth image and the target image.
Step S1412: determining a pixel correspondence between the depth image and the target image based on the current image registration parameters.
After determining the current image registration parameters of the depth image and the target image, it indicates that the translation, rotation, etc. relationship between the two images has been obtained, so that the pixel correspondence between the depth image and the target image can be further determined by using the current image registration parameters. The method for determining the pixel correspondence between the depth image and the target image may be a method commonly used in the art, and is not described herein again.
Step S1413: determining a second object region corresponding to the first detection region in the target image based on the pixel correspondence.
In one embodiment, the pixel points where the first detection region is located in the depth image may be determined, and then the corresponding pixel points in the target image may be determined based on the pixel correspondence between the depth image and the target image, so as to determine the second object region.
Therefore, by acquiring the current image registration parameters between the depth image and the target image, the pixel correspondence between the depth image and the target image can be determined, and then the second object region corresponding to the first detection region can be determined in the target image according to this correspondence, so that the target object in the target image can be detected with the aid of the depth image.
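A sketch of this mapping step, under the assumption that the current image registration parameters can be expressed as a 3x3 homography between the two images (a calibrated rig might instead use camera intrinsics, extrinsics and per-pixel depth):

```python
import cv2
import numpy as np

def map_detection_region(depth_region_mask: np.ndarray,
                         homography: np.ndarray,
                         target_shape: tuple) -> np.ndarray:
    """Project the first detection region (a mask in depth-image coordinates) into
    the target image using the current image registration parameters, modelled here
    as a 3x3 homography. The warped mask is the second object region.

    Modelling the registration parameters as a homography is an assumption made for
    this sketch; the patent only requires a pixel correspondence between the images.
    """
    h, w = target_shape[:2]
    return cv2.warpPerspective(depth_region_mask, homography, (w, h),
                               flags=cv2.INTER_NEAREST)
```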
Step S142: determining the first object region based on the second object region.
In one embodiment, the second object region may be directly used as the first object region. In another embodiment, the first object region may be obtained based on a detection result of the target object determined by performing object detection on the target object and by combining the second object region.
Therefore, by determining a second object region corresponding to the first detection region of the depth image in the target image and obtaining a first object region about the target object in the target image by using the second object region, detection of the target object in the target image by using the depth image is achieved.
In an embodiment, after the step of "respectively capturing the depth image and the target image of the current environment" is performed, and before the step of "determining the second object region corresponding to the first detection region in the target image" is performed, the image processing method of the present application may further include: and carrying out target object detection on the target image to obtain a detection result of the target image.
The target object detection is performed on the target image; the target image may likewise be detected by using a target detection algorithm or an image segmentation algorithm commonly used in the art to obtain the detection result of the target image. It is understood that the detection result of the target image may be either that the target object is detected or that the target object is not detected; in the case that the target object is detected, the detection result may further include a second detection region in which the target object is located in the target image. Therefore, by performing target object detection on the target image, the target object can be located through the second detection region in which it is detected in the target image.
In this embodiment, the step of "determining the first object region based on the second object region" specifically includes step S21 or step S22 when the target object detection is performed on the target image.
Step S21: and in response to the target object being detected in the target image, fusing a second detection area of the target object included in the detection result with the second object area to obtain a first object area.
When the target object is detected in the target image, the second detection region of the target object and the second object region may be fused to obtain the first object region. In one embodiment, the first object region may be a set of the second detection region and the second object region. In another embodiment, the partial region of the second detection region and the partial region of the second object region may be fused to obtain the first object region. It is to be understood that the method of fusing the second detection region and the second object region is not limited.
Therefore, in the target image, the first object region is obtained by fusing the second detection region of the target object with the second object region, so that the detection result of the target object in the target image and the detection result of the target object in the depth image can be utilized simultaneously, detection of the target object in different images is realized, and the detection success rate of the target object is improved.
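A minimal sketch of one possible fusion strategy, taking the union of the two region masks (the patent leaves the fusion method open, so this choice is only an assumption):

```python
import cv2
import numpy as np

def fuse_regions(second_detection_mask: np.ndarray,
                 second_object_mask: np.ndarray) -> np.ndarray:
    """Fuse the detection result obtained on the target image with the region mapped
    from the depth image. Taking the union of the two uint8 masks is one of the
    strategies mentioned above; intersection or weighted voting would also be
    possible, and the choice here is only illustrative.
    """
    return cv2.bitwise_or(second_detection_mask, second_object_mask)
```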
Step S22: in response to not detecting the target object in the target image, treating the second object region as the first object region.
In the case that the target object is not detected in the target image, the second object region may be directly used as the first object region, so as to locate the target object in the target image.
In one embodiment, the step of "acquiring the current image registration parameters between the depth image and the target image" includes step S31 or step S32.
Step S31: acquiring preset image registration parameters as current image registration parameters, wherein the preset image registration parameters are determined after calibration during assembly of a first camera component and a second camera component, the first camera component is used for shooting a target image, and the second camera component is used for shooting a depth image.
In this embodiment, a preset image registration parameter may be obtained as the current image registration parameter, where the preset image registration parameter is determined after calibration is performed during assembly of the first camera assembly and the second camera assembly, the first camera assembly is used for shooting the target image, and the second camera assembly is used for shooting the depth image. In this embodiment, assembly refers to mounting the first camera assembly and the second camera assembly on the device; upon assembly, the relative positional relationship of the first camera assembly and the second camera assembly is determined. For example, in the process of manufacturing the electronic device executing the method, when the first camera assembly and the second camera assembly are assembled, the preset image registration parameters can be determined by calibrating the two assemblies under the condition that their relative positions are determined and unchanged. To calibrate the first camera assembly and the second camera assembly, image registration can be performed using images respectively captured by the two assemblies, and the resulting image registration parameters are used as the preset image registration parameters. Therefore, the preset image registration parameters are directly acquired as the current image registration parameters, so that no image registration operation needs to be performed on the depth image and the target image, and the imaging time can be shortened.
Step S32: and when detecting that the difference between the imaging time corresponding to the depth image and the imaging time corresponding to the target image is greater than a preset difference threshold value, carrying out image registration on the depth image and the target image to obtain the current image registration parameter.
In this embodiment, when it is detected that the difference between the imaging times corresponding to the depth image and the target image is greater than a preset difference threshold, image registration is performed on the depth image and the target image to obtain the current image registration parameters. In one embodiment, the imaging time of an image may be considered the time at which the electronic device generated the image. The preset difference threshold may be 0 or another specific value, which is not limited in this application. It can be understood that, due to factors such as heating of the electronic device or insufficient processing capability, the difference between the imaging times corresponding to the depth image and the target image may be greater than the preset difference threshold; in that case, the electronic device may be considered to have moved between shooting the depth image and the target image, which may result in poor correspondence between the two images, that is, image fusion performed with the preset image registration parameters may be of poor quality. Therefore, in order to improve the quality of image fusion, image registration may be performed on the depth image and the target image, and the resulting image registration parameters are used as the current image registration parameters. In this way, when the difference between the imaging times corresponding to the depth image and the target image is detected to be greater than the preset difference threshold, the actual image registration parameters of the depth image and the target image are used as the current image registration parameters, and image fusion is then performed with these parameters to improve the quality of image fusion.
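As an illustration of how such on-the-fly registration could be done (the patent does not name an algorithm; ORB features with RANSAC and an OpenCV environment are assumed stand-ins):

```python
import cv2
import numpy as np

def register_images(depth_gray: np.ndarray, target_gray: np.ndarray) -> np.ndarray:
    """Estimate the current image registration parameters (a 3x3 homography) between
    the depth image and the target image.

    ORB feature matching plus RANSAC is a common generic approach and is used here
    only as an assumed stand-in; the patent does not specify the registration method.
    """
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(depth_gray, None)
    kp2, des2 = orb.detectAndCompute(target_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    homography, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return homography
```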
Referring to fig. 3, fig. 3 is a schematic flowchart illustrating an image processing method according to another embodiment of the present application. In this embodiment, the image processing method of the present application may further include steps S41 to S43 to determine preset image registration parameters.
Step S41: and acquiring registration images respectively shot by the first camera shooting assembly and the second camera shooting assembly after assembly, and acquiring actual image registration parameters obtained after image registration is carried out on the basis of the registration images.
In one embodiment, "after assembly" of the first camera assembly and the second camera assembly may be understood as the stage after the electronic device has been manufactured, i.e., during its sale and use.
The registration images respectively shot by the first camera shooting assembly and the second camera shooting assembly after assembly can be a target image and a depth image respectively shot by the first camera shooting assembly and the second camera shooting assembly, and the difference between the imaging time corresponding to the depth image and the imaging time corresponding to the target image is not greater than a preset difference threshold value. In one embodiment, the registration images may be images taken of registration calibration plates, respectively.
After obtaining the registration images respectively photographed by the first camera component and the second camera component after assembly, image registration may be performed on the obtained registration images, so as to obtain actual image registration parameters obtained after image registration is performed on the basis of the registration images.
Step S42: and determining whether the preset image registration parameters are consistent with the actual image registration parameters.
During use of the electronic device, the relative positions of the first camera assembly and the second camera assembly may change. Under the condition that the relative positions of the first camera shooting assembly and the second camera shooting assembly are changed, the preset image registration parameters determined by the first camera shooting assembly and the second camera shooting assembly during assembly cannot accurately reflect the pixel corresponding relation between the target image and the depth image obtained by shooting through the first camera shooting assembly and the second camera shooting assembly, and therefore the image registration parameters between the images shot by the first camera shooting assembly and the second camera shooting assembly need to be determined again.
Based on this, it needs to be determined whether the preset image registration parameters are consistent with the actual image registration parameters. If they are inconsistent, it indicates that the relative position of the first camera assembly and the second camera assembly has changed; if they are consistent, it can be considered that the relative position of the first camera assembly and the second camera assembly has not changed. In a specific embodiment, the preset image registration parameters and the actual image registration parameters may be considered consistent when the difference between them is within a preset parameter difference threshold.
In case it is determined that the preset image registration parameters and the actual image registration parameters are not consistent, step S43 may be performed; in case it is determined that the preset image registration parameter is consistent with the actual image registration parameter, it may be determined that the current preset image registration parameter is not changed.
Step S43: in response to the preset image registration parameter and the actual image registration parameter not being consistent, determining the actual image registration parameter as the preset image registration parameter.
The preset image registration parameter is inconsistent with the actual image registration parameter, that is, the relative position of the first camera assembly and the second camera assembly changes, so that the actual image registration parameter can be determined as the preset image registration parameter.
Therefore, registration images respectively shot by the first camera assembly and the second camera assembly after assembly are acquired, and the actual image registration parameters obtained by performing image registration on the basis of the registration images are acquired; when the preset image registration parameters are inconsistent with the actual image registration parameters, the first camera assembly and the second camera assembly can be recalibrated by determining the actual image registration parameters as the preset image registration parameters, so that the imaging quality can be improved when the preset image registration parameters are subsequently used for image fusion.
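A small sketch of the consistency check and update, assuming the registration parameters are stored as 3x3 matrices and that "consistent" means the parameter difference stays within a preset threshold (the norm and threshold value below are illustrative):

```python
import numpy as np

def maybe_update_preset_homography(preset: np.ndarray,
                                   actual: np.ndarray,
                                   diff_threshold: float = 1e-2) -> np.ndarray:
    """If the registration parameters recovered from the registration images differ
    from the preset (factory-calibrated) ones by more than a preset parameter
    difference threshold, adopt the actual parameters as the new preset.

    The Frobenius-norm comparison and the threshold value are assumptions made for
    this illustration.
    """
    if np.linalg.norm(preset - actual) > diff_threshold:
        return actual  # recalibrate: the actual parameters become the preset
    return preset      # consistent: keep the existing preset parameters
```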
Referring to fig. 4, fig. 4 is a schematic flowchart illustrating an image processing method according to another embodiment of the present application. In this embodiment, the electronic device executing the image processing method of the present application may specifically be a shooting device, and the image processing method of the present application further includes steps S51 to S53.
Step S51: whether the photographing apparatus collides is detected.
The shooting device may detect whether the shooting device collides by using a collision detection algorithm commonly used in the art, for example, the shooting device performs collision detection by using data of an acceleration sensor, which is not described in detail in this application.
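A minimal sketch of such accelerometer-based collision detection; the single-sample magnitude test and the 30 m/s^2 threshold are assumptions, not values from the patent:

```python
import math

def collision_detected(accel_sample_mps2, shock_threshold: float = 30.0) -> bool:
    """Flag a collision when the magnitude of an accelerometer sample greatly exceeds
    gravity. The threshold of 30 m/s^2 and the single-reading test are illustrative
    assumptions; a real device would typically filter a window of samples.
    """
    ax, ay, az = accel_sample_mps2
    return math.sqrt(ax * ax + ay * ay + az * az) > shock_threshold
```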
The fact that the shooting device collides means that the relative positions of the first shooting component and the second shooting component are likely to change, and if the relative positions of the first shooting component and the second shooting component are changed, it is indicated that the preset image registration parameters of the shooting device are likely to be inaccurate, so that the first shooting component and the second shooting component need to be recalibrated.
After the collision is detected, step S52 may be performed; if no collision is detected, step S51 may be re-executed.
Step S52: and displaying prompt information, wherein the prompt information is used for prompting a user to carry out image shooting operation.
The shooting device can display prompt information on a screen after detecting the collision, wherein the prompt information is used for prompting the user to carry out an image shooting operation. The prompt message is, for example: "A collision of the device has been detected; please capture an image to recalibrate the first camera assembly and the second camera assembly."
Step S53: and responding to a shooting instruction input by the user, and obtaining a registration image shot by the first camera assembly and the second camera assembly after assembly.
When the shooting device detects a shooting instruction input by a user, images shot by the first camera shooting assembly and the second camera shooting assembly after assembly can be used as registration images for recalibrating the first camera shooting assembly and the second camera shooting assembly.
Therefore, whether the shooting equipment collides or not is detected, and under the condition that the collision is detected, the user is prompted to carry out image shooting operation by displaying prompt information, so that the registration images shot by the first camera shooting assembly and the second camera shooting assembly after assembly can be obtained, and then the registration images can be subsequently utilized to carry out registration, so that the first camera shooting assembly and the second camera shooting assembly can be recalibrated.
Referring to fig. 5, fig. 5 is a schematic flowchart illustrating an image processing method according to still another embodiment of the present application. In the present embodiment, the image processing method of the present application further includes steps S61 to S65.
Step S61: and detecting whether the brightness of the current environment is not greater than a preset brightness threshold value.
Step S62: And confirming to enter the dim light mode in response to detecting that the brightness of the current environment is not greater than a preset brightness threshold value.
For the detailed description of step S61 and step S62, please refer to step S11 and step S12, which are not described herein again.
Step S63: Shooting the current environment respectively to obtain a preset number of frames of depth images and a preset number of frames of target images, and determining the depth images and target images obtained at the same shooting moment as an image group.
In one embodiment, a depth image and a target image may be considered to correspond to the same shooting moment when the difference between their imaging times is not greater than the preset difference threshold. In another embodiment, the depth images and the target images may be matched to the same shooting moment according to their respective imaging orders: the first frame of the depth images corresponds to the first frame of the target images, the second frame corresponds to the second frame, and so on.
In one embodiment, if the difference between the imaging time of two frames of target images and the imaging time of one frame of depth image is not greater than the preset difference threshold, the target image closest to the imaging time of the depth image may be determined to belong to the same image group as the depth image.
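A sketch of this grouping rule, assuming frames arrive as (timestamp, image) pairs and using an assumed 20 ms difference threshold:

```python
def group_frames(depth_frames, target_frames, max_time_diff_s: float = 0.02):
    """Pair each depth frame with the target frame whose imaging time is closest,
    provided the difference does not exceed the preset difference threshold.

    Frames are given as (timestamp_seconds, image) tuples; the 0.02 s threshold is
    an illustrative assumption.
    """
    groups = []
    for d_ts, d_img in depth_frames:
        best = min(target_frames, key=lambda t: abs(t[0] - d_ts), default=None)
        if best is not None and abs(best[0] - d_ts) <= max_time_diff_s:
            groups.append({"depth": d_img, "target": best[1]})
    return groups
```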
Step S64: For each image group, determining the first object region in the target image of the group by using the first detection region of the depth image of the group, and performing enhancement processing on the first object region in the target image of the group.
In a specific embodiment, in the case that the target image is one of a black-and-white image and a color image, a first object region in the black-and-white image or the color image may be determined, and then the first object region in the black-and-white image or the color image may be subjected to enhancement processing. In another embodiment, in the case that the target image includes a black-and-white image and a color image, a first object region in the black-and-white image and a first object region in the color image may be determined, and then both first object regions may be subjected to enhancement processing.
Step S65: and carrying out image fusion on the target image subjected to the enhancement processing to obtain a fused image.
In one embodiment, when the target image is one of a black-and-white image and a color image, a plurality of the black-and-white images or the color images subjected to the enhancement processing may be subjected to image fusion to obtain a fused image. For example, if there are 5 image groups, 5 frames of black-and-white images or 5 frames of color images may be subjected to image fusion to obtain a single frame of fused image.
In another embodiment, in the case that the target image includes a black-and-white image and a color image, several enhanced black-and-white images and enhanced color images may be subjected to image fusion to obtain a fused image. For example, if there are 8 image groups, the enhanced black-and-white images and color images of the 8 image groups may be subjected to image fusion to obtain a single frame of fused image.
Therefore, by performing enhancement processing on the first object region in the target image of each image group, the imaging quality of the target object in the target image of each image group can be improved. In addition, by performing image fusion on the enhanced target images of the different image groups to obtain a fused image, the image information of multiple frames of target images can be utilized, which improves the imaging quality of the fused image.
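By way of a non-limiting illustration, the fusion of the enhanced target images of several image groups could be sketched as a simple per-pixel average; averaging is only one assumed fusion strategy and the function name is illustrative.

```python
import numpy as np

def fuse_enhanced_targets(enhanced_images) -> np.ndarray:
    """Average a list of enhanced target images of identical shape into one fused frame."""
    stack = np.stack([img.astype(np.float32) for img in enhanced_images], axis=0)
    fused = stack.mean(axis=0)
    return np.clip(fused, 0, 255).astype(np.uint8)
```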
In one embodiment, the exposure time and/or sensitivity of the target image may vary from one image group to another, for example, the exposure time of the target image in the first image group is 1/180 seconds at a sensitivity of 400, and the exposure time of the target image in the second image group is 1/120 seconds at a sensitivity of 600. Therefore, by setting the exposure time and/or the sensitivity corresponding to the target image of each image group to be different, when the number of the image groups is not less than 2, at least 2 target images obtained based on different imaging parameters can be utilized for image fusion, which is beneficial to improving the imaging quality.
In one embodiment, some of the image groups may share the same exposure time and/or sensitivity, and the image groups whose exposure time and/or sensitivity are the same may be determined as belonging to the same image group set, so as to obtain a plurality of image group sets. For one image group set, since the target images it contains are obtained with the same exposure time and/or sensitivity, target images captured with the same imaging parameters can be used for image fusion, which is beneficial to improving the imaging quality.
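By way of a non-limiting illustration, collecting image groups that share the same exposure time and sensitivity into one set could be sketched as follows; the ImageGroup fields and the dictionary-based grouping are assumptions made for the example.

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Any, Dict, List, Tuple

@dataclass
class ImageGroup:
    target_image: Any
    exposure_time: float  # seconds
    iso: int              # sensitivity

def group_by_imaging_params(groups: List[ImageGroup]) -> Dict[Tuple[float, int], List[ImageGroup]]:
    """Collect image groups that share the same exposure time and sensitivity into one set."""
    sets: Dict[Tuple[float, int], List[ImageGroup]] = defaultdict(list)
    for g in groups:
        sets[(g.exposure_time, g.iso)].append(g)
    return dict(sets)
```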
In one embodiment, in the case that the target image includes a black-and-white image and a color image, when the first camera assembly is used to capture the black-and-white image and the color image, the shutter speed in the imaging parameters is set to be not lower than a preset shutter threshold, so that the image does not have to rely on a slow shutter to reach sufficient brightness, which reduces blurring, ghosting, excessive noise and similar problems caused by a slow shutter. In addition, when the first camera assembly is used to capture the black-and-white image and the color image, the sensitivity may be set to be not greater than a preset sensitivity threshold, so that the image does not have to rely on an excessively high sensitivity to reach sufficient brightness, which reduces the excessive noise and low signal-to-noise ratio caused by high sensitivity. Therefore, by keeping the shutter speed not lower than the preset shutter threshold and the sensitivity not higher than the preset sensitivity threshold when the first camera assembly captures the black-and-white image and the color image, the imaging quality of the images can be improved.
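By way of a non-limiting illustration, and reading the above constraints as "do not use a shutter slower than a limit and do not use a sensitivity above a limit", the parameter clamping could be sketched as follows; the concrete limit values are hypothetical, not figures from the embodiment.

```python
MAX_EXPOSURE = 1 / 60  # hypothetical slowest admissible shutter, in seconds
MAX_ISO = 1600         # hypothetical highest admissible sensitivity

def clamp_imaging_params(exposure_time: float, iso: int) -> tuple:
    """Return imaging parameters that respect the shutter and sensitivity limits."""
    return min(exposure_time, MAX_EXPOSURE), min(iso, MAX_ISO)
```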
Referring to fig. 6, fig. 6 is a schematic diagram of a framework of an embodiment of an image processing device according to the present application. The image processing device 60 includes a brightness detection module 61, an obtaining module 62, a region detection module 63 and a processing module 64. The brightness detection module 61 is configured to detect whether the brightness of the current environment is greater than a preset brightness threshold, and to confirm entering the dim light mode when the brightness of the current environment is not greater than the preset brightness threshold; the obtaining module 62 is configured to respectively shoot the current environment to obtain a depth image and a target image, where the target image includes at least one of a black-and-white image and a color image; the region detection module 63 is configured to determine a first object region of a target object in the target image based on a first detection region of the target object in the depth image, where the first detection region is obtained by performing target object detection on the depth image; and the processing module 64 is configured to perform enhancement processing on the first object region in the target image.
The region detection module 63 being configured to determine a first object region about the target object in the target image based on a first detection region about the target object in the depth image includes: determining, in the target image, a second object region corresponding to the first detection region; and determining the first object region based on the second object region.
Wherein, after the obtaining module 62 respectively shoots the current environment to obtain the depth image and the target image, and before the region detection module 63 determines the second object region corresponding to the first detection region in the target image, the region detection module 63 is further configured to perform target object detection on the target image to obtain a detection result of the target image. The region detection module 63 being configured to determine the first object region based on the second object region includes: in response to the target object being detected in the target image, fusing a second detection region of the target object included in the detection result with the second object region to obtain the first object region; or, in response to the target object not being detected in the target image, taking the second object region as the first object region.
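By way of a non-limiting illustration, fusing the second detection region found in the target image with the second object region mapped over from the depth image could be sketched as the union of two axis-aligned boxes; taking the union is only one assumed fusion choice.

```python
def fuse_regions(detection_box: tuple, mapped_box: tuple) -> tuple:
    """Boxes are (x1, y1, x2, y2); the fused first object region covers both inputs."""
    x1 = min(detection_box[0], mapped_box[0])
    y1 = min(detection_box[1], mapped_box[1])
    x2 = max(detection_box[2], mapped_box[2])
    y2 = max(detection_box[3], mapped_box[3])
    return (x1, y1, x2, y2)
```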
The region detection module 63 being configured to determine, in the target image, a second object region corresponding to the first detection region includes: acquiring a current image registration parameter between the depth image and the target image; determining a pixel correspondence between the depth image and the target image based on the current image registration parameter; and determining, in the target image, the second object region corresponding to the first detection region based on the pixel correspondence.
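By way of a non-limiting illustration, mapping the first detection region from the depth image into the target image could be sketched as follows, assuming the current image registration parameters take the form of a 3x3 homography between the two views; a full registration pipeline is beyond this sketch.

```python
import cv2
import numpy as np

def map_detection_region(box: tuple, homography: np.ndarray) -> tuple:
    """box is (x1, y1, x2, y2) in depth-image coordinates; returns the bounding box of the
    warped corners in target-image coordinates, i.e. the second object region."""
    x1, y1, x2, y2 = box
    corners = np.float32([[x1, y1], [x2, y1], [x2, y2], [x1, y2]]).reshape(-1, 1, 2)
    warped = cv2.perspectiveTransform(corners, homography).reshape(-1, 2)
    xs, ys = warped[:, 0], warped[:, 1]
    return (float(xs.min()), float(ys.min()), float(xs.max()), float(ys.max()))
```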
Wherein the region detection module 63 is configured to obtain a current image registration parameter between the depth image and the target image, and includes: acquiring preset image registration parameters as current image registration parameters, wherein the preset image registration parameters are determined after calibration during assembly of a first camera component and a second camera component, the first camera component is used for shooting a target image, and the second camera component is used for shooting a depth image; or when the difference between the imaging time corresponding to the depth image and the imaging time corresponding to the target image is detected to be greater than a preset difference threshold value, performing image registration on the depth image and the target image to obtain the current image registration parameter.
The region detection module 63 is further configured to acquire registration images respectively captured by the first camera shooting assembly and the second camera shooting assembly after assembly, and acquire actual image registration parameters obtained after image registration is performed based on the registration images; in response to the preset image registration parameter and the actual image registration parameter not being consistent, determining the actual image registration parameter as the preset image registration parameter.
The image processing device 60 may specifically be a shooting device, and the obtaining module 62 includes a first camera assembly and a second camera assembly. The region detection module 63 is configured to display prompt information after it is detected that the shooting device has suffered a collision, where the prompt information is used to prompt the user to perform an image shooting operation; and to obtain, in response to a shooting instruction input by the user, registration images respectively captured by the assembled first camera assembly and second camera assembly.
The brightness detection module 61 is configured to detect that the brightness of the current environment is not greater than a preset brightness threshold, and includes: detecting that the brightness of the current environment is not greater than a preset brightness threshold value through a light sensor; or detecting that the brightness of the current environment is not greater than a preset brightness threshold value through the brightness information of a preview picture of the first camera shooting assembly, wherein the first camera shooting assembly is used for shooting a target image; or responding to a night scene shooting instruction input by a user, and determining that the brightness of the current environment is detected to be not greater than a preset brightness threshold value.
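By way of a non-limiting illustration, the preview-based brightness check could be sketched as follows, assuming the preview picture of the first camera assembly is available as a BGR array; the threshold value is a hypothetical assumption rather than a figure from the embodiment.

```python
import cv2
import numpy as np

BRIGHTNESS_THRESHOLD = 40.0  # hypothetical preset brightness threshold (mean 8-bit luminance)

def should_enter_dim_light_mode(preview_bgr: np.ndarray) -> bool:
    """Return True when the mean luminance of the preview picture is not greater than the threshold."""
    gray = cv2.cvtColor(preview_bgr, cv2.COLOR_BGR2GRAY)
    return float(gray.mean()) <= BRIGHTNESS_THRESHOLD
```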
The obtaining module 62 being configured to respectively shoot the current environment to obtain a depth image and a target image includes: respectively shooting the current environment to obtain a preset number of frames of depth images and a preset number of frames of target images, and determining the depth images and the target images obtained corresponding to the same shooting moment as an image group. The processing module 64 is configured to, for each image group, determine the first object region in the target image of the image group by using the first detection region of the depth image of the image group, and perform enhancement processing on the first object region in the target image of the image group. After the processing module 64 performs enhancement processing on the first object region in the target image, a fusion module of the image processing device 60 is configured to perform image fusion on the enhanced target images to obtain a fused image.
Referring to fig. 7, fig. 7 is a schematic diagram of a framework of an embodiment of an electronic device according to the present application. The electronic device 70 includes a memory 71 and a processor 72 coupled to each other, and the processor 72 is configured to execute program instructions stored in the memory 71 to implement the steps in any of the above image processing method embodiments. The electronic device may perform the relevant steps by using a first camera assembly and a second camera assembly included in the electronic device itself, or by using the first camera assembly and the second camera assembly of another device. In a specific implementation scenario, the electronic device 70 may include, but is not limited to, a microcomputer or a server; the electronic device 70 may also be a mobile device such as a notebook computer or a tablet computer, which is not limited herein.
In particular, the processor 72 is configured to control itself and the memory 71 to implement the steps in any of the above image processing method embodiments. The processor 72 may also be referred to as a CPU (Central Processing Unit). The processor 72 may be an integrated circuit chip having signal processing capabilities. The processor 72 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 72 may be jointly implemented by a plurality of integrated circuit chips.
Referring to fig. 8, fig. 8 is a block diagram illustrating an embodiment of a computer-readable storage medium according to the present application. The computer readable storage medium 800 stores program instructions 801 that can be executed by a processor, the program instructions 801 being for implementing the steps in any of the image processing method embodiments described above.
According to the above scheme, in the dim light mode, the first object region of the target object in the target image is determined, and enhancement processing is performed on the first object region in the target image, so that the imaging quality of the target object in the target image can be improved.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
The foregoing description of the various embodiments is intended to highlight various differences between the embodiments, and the same or similar parts may be referred to each other, and for brevity, will not be described again herein.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a module or a unit is merely one type of logical division, and an actual implementation may have another division, for example, a unit or a component may be combined or integrated with another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some interfaces, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.

Claims (12)

1. An image processing method, comprising:
confirming to enter a dim light mode in response to detecting that the brightness of the current environment is not greater than a preset brightness threshold;
shooting the current environment respectively to obtain a depth image and a target image, wherein the target image comprises at least one of a black-and-white image and a color image;
determining a first object region in the target image about a target object based on a first detection region in the depth image about the target object, wherein the first detection region is obtained by performing target object detection on the depth image;
and performing enhancement processing on the first object region in the target image.
2. The method of claim 1, wherein determining a first object region in the target image for a target object based on a first detection region in the depth image for the target object comprises:
determining a second object region corresponding to the first detection region in the target image;
determining the first object region based on the second object region.
3. The method according to claim 2, wherein after the capturing of the depth image and the target image of the current environment respectively and before the determining of the second object region corresponding to the first detection region in the target image, the method further comprises:
carrying out target object detection on the target image to obtain a detection result of the target image;
the determining the first object region based on the second object region comprises:
in response to the target object being detected in the target image, fusing a second detection region of the target object included in the detection result with the second object region to obtain the first object region; or,
in response to the target object not being detected in the target image, taking the second object region as the first object region.
4. The method of claim 2, wherein determining a second object region in the target image corresponding to the first detection region comprises:
acquiring a current image registration parameter between the depth image and the target image;
determining a pixel correspondence between the depth image and the target image based on the current image registration parameters;
determining a second object region corresponding to the first detection region in the target image based on the pixel correspondence.
5. The method of claim 4, wherein the obtaining current image registration parameters between the depth image and the target image comprises:
acquiring preset image registration parameters as current image registration parameters, wherein the preset image registration parameters are determined after calibration during assembly of a first camera component and a second camera component, the first camera component is used for shooting a target image, and the second camera component is used for shooting a depth image;
or when the difference between the imaging time corresponding to the depth image and the imaging time corresponding to the target image is detected to be greater than a preset difference threshold value, performing image registration on the depth image and the target image to obtain the current image registration parameter.
6. The method according to claim 5, further comprising the steps of:
acquiring registration images respectively shot by the first camera shooting assembly and the second camera shooting assembly after assembly, and acquiring actual image registration parameters obtained after image registration is carried out on the basis of the registration images;
in response to the preset image registration parameter and the actual image registration parameter not being consistent, determining the actual image registration parameter as the preset image registration parameter.
7. The method according to claim 6, wherein the image processing method is applied to a photographing apparatus, the method further comprising:
after the collision of the shooting equipment is detected, displaying prompt information, wherein the prompt information is used for prompting a user to carry out image shooting operation;
and responding to a shooting instruction input by the user, and obtaining a registration image shot by the first camera assembly and the second camera assembly after assembly.
8. The method of claim 1, wherein the detecting that the brightness of the current environment is not greater than a preset brightness threshold comprises:
detecting that the brightness of the current environment is not greater than a preset brightness threshold value through a light sensor;
or detecting that the brightness of the current environment is not greater than a preset brightness threshold value through the brightness information of a preview picture of the first camera shooting assembly, wherein the first camera shooting assembly is used for shooting a target image;
or responding to a night scene shooting instruction input by a user, and determining that the brightness of the current environment is detected to be not greater than a preset brightness threshold value.
9. The method of claim 1,
the step of respectively shooting the current environment to obtain a depth image and a target image comprises the following steps: shooting the current environment respectively to obtain the depth images of a preset number of frames and the target images of the preset number of frames, and determining the depth images and the target images obtained corresponding to the same shooting moment as an image group;
the determining a first object region about a target object in the target image based on a first detection region about the target object in the depth image, and performing enhancement processing on the first object region in the target image includes:
for each group of the images, determining the first object region in the target image in the group of the images by using the first detection region of the depth image in the group of the images, and performing enhancement processing on the first object region in the target image in the group of the images;
after the enhancing the first object region in the target image, the method further comprises:
and carrying out image fusion on the target image subjected to the enhancement processing to obtain a fused image.
10. An image processing apparatus characterized by comprising:
the brightness detection module is used for detecting whether the brightness of the current environment is greater than a preset brightness threshold value, and confirming to enter a dim light mode when the brightness of the current environment is not greater than the preset brightness threshold value;
the acquisition module is used for respectively shooting the current environment to obtain a depth image and a target image, wherein the target image comprises at least one of a black-and-white image and a color image;
the region detection module is used for determining a first object region of a target object in the target image based on a first detection region of the target object in the depth image, wherein the first detection region is obtained by carrying out target object detection on the depth image;
and the processing module is used for performing enhancement processing on the first object area in the target image.
11. An electronic device comprising a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement the image processing method of any one of claims 1 to 9.
12. A computer-readable storage medium on which program instructions are stored, which program instructions, when executed by a processor, implement the image processing method of any one of claims 1 to 9.
CN202111633399.3A 2021-12-29 2021-12-29 Image processing method, related device, equipment and storage medium Withdrawn CN114170222A (en)




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20220311)