CN112887614B - Image processing method and device and electronic equipment - Google Patents


Info

Publication number
CN112887614B
Authority
CN
China
Prior art keywords
region
sub
image
area
images
Prior art date
Legal status
Active
Application number
CN202110114220.7A
Other languages
Chinese (zh)
Other versions
CN112887614A (en)
Inventor
李明津
陈卓艺
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202110114220.7A
Publication of CN112887614A
Application granted
Publication of CN112887614B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/64 Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H04N23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N23/951 Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image processing method, an image processing apparatus, and an electronic device, and belongs to the field of image technology. The method comprises the following steps: acquiring N images captured by N cameras; and performing image processing on a light-reflection region in a base image based on the images of reference regions in N-1 reference images, wherein the N images comprise the base image and the N-1 reference images, and N is an integer greater than 1. In the embodiment of the invention, the electronic device can automatically obtain one base image and N-1 reference images through the N cameras and remove the light-reflection region in the base image based on the images of the reference regions in the reference images, without manual retouching by the user, thereby simplifying user operation and improving processing efficiency.

Description

Image processing method and device and electronic equipment
Technical Field
The invention belongs to the field of image technology, and in particular relates to an image processing method, an image processing apparatus, an electronic device, and a storage medium.
Background
Conventionally, when a photograph is taken through a transparent object that reflects a light source, such as glass, a light-reflection region appears in the captured photograph, and the object or person in the photograph is obscured.
In the prior art, to eliminate the light-reflection region in an image, the image is post-processed by manual retouching after shooting. This approach is cumbersome to operate and has low processing efficiency.
Disclosure of Invention
Embodiments of the present invention provide an image processing method, an image processing apparatus, an electronic device, and a storage medium, which can solve the technical problems in the prior art of cumbersome operation and low processing efficiency when eliminating a light-reflection region in an image.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides an image processing method, where the method includes:
acquiring N images captured by N cameras;
performing image processing on a light-reflection region in a base image based on images of reference regions in N-1 reference images;
wherein the N images comprise the base image and the N-1 reference images, and N is an integer greater than 1.
In a second aspect, an embodiment of the present invention provides an image processing apparatus, including:
an acquisition module, configured to acquire N images captured by N cameras;
a processing module, configured to perform image processing on a light-reflection region in a base image based on the images of reference regions in N-1 reference images; wherein the N images comprise the base image and the N-1 reference images, and N is an integer greater than 1.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes at least two cameras, a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the image processing method according to the first aspect.
In a fourth aspect, the embodiments of the present invention provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the image processing method according to the first aspect.
In a fifth aspect, an embodiment of the present invention provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the image processing method according to the first aspect.
In the embodiment of the invention, N images captured by N cameras are acquired, and image processing is performed on a light-reflection region in a base image based on the images of reference regions in N-1 reference images, wherein the N images comprise the base image and the N-1 reference images, and N is an integer greater than 1. The electronic device can thus automatically obtain one base image and N-1 reference images through the N cameras and remove the light-reflection region in the base image based on the images of the reference regions in the reference images, without manual retouching by the user, which simplifies user operation and improves processing efficiency.
Meanwhile, owing to the physical characteristics of specular reflection, the positions of the light-reflection regions differ among the N images captured by the different cameras whose shooting angles overlap. Therefore, in the embodiment of the invention, the base image and the reference images are determined from the N images, and image processing is performed on the light-reflection region based on the images of the reference regions in the reference images, which ensures that the light-reflection region is removed effectively.
Drawings
FIG. 1 is a schematic diagram of light reflection from a rough surface according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of light reflection from a smooth plane according to an embodiment of the present invention;
FIG. 3 is a flow chart of the steps of an image processing method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an arrangement of cameras according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of reflection of a strong light source on a smooth object surface according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a moving manner of a virtual camera according to an embodiment of the present invention;
FIG. 7 is a block diagram of an image processing apparatus according to an embodiment of the present invention;
FIG. 8 is a block diagram of an electronic device according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a hardware structure of an electronic device according to various embodiments of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without inventive step based on the embodiments of the present invention, are within the scope of protection of the present invention.
The terms first, second and the like in the description and in the claims of the present invention are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that embodiments of the invention may be practiced other than those illustrated or described herein, and that the objects identified as "first," "second," etc. are generally a class of objects and do not limit the number of objects, e.g., a first object may be one or more.
An ordinary object surface is usually rather rough. Referring to FIG. 1, because the surface is uneven, the reflected rays do not share the same angle even when the ambient light source is bright; the light is scattered, the amount of light reflected at any single angle is small, and no light-reflection problem occurs. Referring to FIG. 2, on a smooth plane the reflection angles of the rays are essentially the same, and when the ambient light source is bright, the light is reflected in a concentrated manner at the same angle; if the camera of an electronic device photographs the object along this reflection angle, a light-reflection region appears because a large amount of reflected light comes from the surface of the photographed object.
In the prior art, there are two main ways to eliminate the light-reflection phenomenon in mobile phone photography: one is to reduce part of the reflection during shooting through an early-stage shooting technique; the other is to post-process the captured photograph with image processing software to remove all light-reflection regions. The early-stage shooting technique requires the user to manually press the phone close to the glass or other reflective object, which is cumbersome, and it can only reduce, not completely eliminate, the light-reflection region in the preview image during shooting. Post-processing the captured photograph can completely remove its light-reflection region, but the operation is cumbersome, and the light-reflection region in the preview image during shooting still cannot be eliminated.
In order to simplify operation and improve processing efficiency when eliminating such a light-reflection region, the present invention provides an image processing method, an image processing apparatus, an electronic device, and a storage medium.
An image processing method, an image processing apparatus, an electronic device, and a storage medium according to embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Referring to FIG. 3, a flow chart of the steps of an image processing method according to an embodiment of the present invention is shown. The method includes:
Step 301: acquire N images captured by the N cameras.
In the embodiment of the present invention, referring to FIG. 4, the N cameras 400 may be located on the same face of the electronic device, and the distances between the cameras may be equal or unequal. The N cameras may also be located on different faces of the electronic device as long as their shooting angles overlap; for example, when the screen of the electronic device is a flexible screen, the N cameras may include cameras on the front and on the back of the electronic device, and the electronic device may be bent so that the front and rear cameras have overlapping shooting angles. The electronic device may be a smartphone, a tablet computer, a flexible device, a curved-screen device, or the like, which is not limited in the embodiments of the present invention. In addition, each camera may capture several images over multiple exposures or capture only a single image, which is also not limited in the embodiments of the present invention.
It should be noted that, referring to FIG. 5, because the shooting angles of the N cameras differ, when a strong light source strikes a smooth object surface and is reflected, the reflected light is concentrated along one reflection angle and does not reach all of the cameras at the same time; therefore, in general, the same position will not be bright enough in all of the images to form a light-reflection region.
It is worth noting that, when the N images are captured by the N cameras, all N cameras may be turned on by default at the same time, so as to cover the full range of shooting angles and ensure that there are enough reference images; alternatively, according to the actual needs of the user, the electronic device may turn on only two cameras by default to reduce power consumption, and more cameras may then be turned on as needed.
In this step, the N images may be obtained by capturing one image with each of at least two rear cameras of a smartphone. For example, if a smartphone is provided with 2 rear cameras, 2 images are obtained by capturing one image with each of them.
Step 302: perform image processing on the light-reflection region in the base image based on the images of the reference regions in the N-1 reference images, wherein the N images comprise the base image and the N-1 reference images, and N is an integer greater than 1.
In this embodiment of the present invention, the base image may be the image with the smallest light-reflection area among the N images, or one image may be selected at random from the N images as the base image; all of the remaining N-1 images may be used as reference images, or only a part of them may be used, which is not limited in this embodiment. A reference image is an image used for performing image processing, such as pixel replacement, on the light-reflection region in the base image. Furthermore, once the base image is determined, it is not changed; after the reference images are determined, they may or may not be changed, which is not limited in this embodiment.
In this step, one base image may be determined from the at least 2 images captured by the at least 2 rear cameras of the smartphone, and all of the remaining N-1 images are used as reference images. For example, FIG. 4 shows an arrangement of the rear cameras of a smartphone: when the user opens the camera application and enters the camera preview interface, one of the 16 images captured by the rear cameras 400 of the smartphone is used as the base image and displayed on the preview interface, while the remaining 15 images are used as reference images and need not be displayed.
In an embodiment of the present invention, the light-reflection region may include M light-reflection sub-regions, where M is a positive integer. The light-reflection sub-regions may have the same or different sizes, which is not limited in this embodiment. Accordingly, the reference region includes M reference sub-regions; since the reference region corresponds to the light-reflection region, the number and sizes of the reference sub-regions depend on the number and sizes of the light-reflection sub-regions.
In this step, after the base image and the reference images are determined, the light-reflection region in the base image and the reference regions corresponding to it in the reference images may be determined, and image processing such as pixel replacement may then be performed on the light-reflection region based on the reference regions. For example, referring to FIG. 4, if 1 base image and 15 reference images are determined from the 16 images captured by the smartphone rear cameras 400, the light-reflection region in the base image is determined, the reference regions corresponding to the light-reflection region are determined from the 15 reference images, and the light-reflection region is then replaced based on the reference regions.
In the embodiment of the present invention, when the light-reflection region is replaced according to the reference region, the light-reflection sub-regions may differ in size, and the reference region corresponding to a light-reflection sub-region may be the same size as or larger than it; that is, a light-reflection region may be replaced using a larger reference region. Therefore, either an entire reference region or only a part of it may be used to replace the corresponding light-reflection region, which is not limited in this embodiment.
In this step, the reference regions corresponding to the light-reflection region in the base image may be used to replace the light-reflection region, so as to remove it and obtain a base image without a light-reflection region as the target image to be output. For example, for the 16 preview images captured by the rear cameras of a smartphone, 1 base image and 15 reference images are determined; the light-reflection region in the base image and the corresponding reference regions in the 15 reference images are then determined, and the light-reflection region is replaced by the reference regions. The light-reflection region in the base image is thereby removed, the base image without the light-reflection region is obtained as the target image, and the target image can finally be used as the photograph taken by the rear cameras of the smartphone.
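As an illustration only, the overall flow of steps 301 and 302 can be sketched as follows; the helper names (detect_reflective_subregions, align_to_base, candidate_subregions, pick_sharpest, replace_subregion) are hypothetical placeholders for the operations described in the sections below and are not functions named by the patent.
    # Minimal sketch of the overall flow, under the assumptions stated above.
    def remove_reflection(images, base_index=0, block=50):
        base = images[base_index]                      # the base image to be repaired
        references = [im for k, im in enumerate(images) if k != base_index]
        aligned = [align_to_base(ref, base) for ref in references]
        for pos in detect_reflective_subregions(base, block):
            candidates = candidate_subregions(aligned, pos, block)
            best = pick_sharpest(candidates)           # sharpest position-matched patch
            replace_subregion(base, best, pos, block)
        return base                                    # target image without reflection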
The embodiment of the invention thus provides an image processing method that acquires N images captured by N cameras and performs image processing on the light-reflection region in the base image based on the images of the reference regions in the N-1 reference images, wherein the N images comprise the base image and the N-1 reference images, and N is an integer greater than 1. The electronic device can automatically obtain one base image and N-1 reference images through the N cameras and remove the light-reflection region in the base image based on the images of the reference regions in the reference images, without manual retouching by the user, which simplifies user operation and improves processing efficiency.
Meanwhile, owing to the physical characteristics of specular reflection, the positions of the light-reflection regions differ among the N images captured by the different cameras whose shooting angles overlap. Therefore, in the embodiment of the invention, the base image and the reference images are determined from the N images, and image processing is performed on the light-reflection region based on the images of the reference regions in the reference images, which ensures that the light-reflection region is removed effectively.
Optionally, in the embodiment of the present invention, the light-reflection region may be obtained based on a user input.
In the embodiment of the present invention, the light-reflection region may be a region circled by the user or one or more base sub-regions tapped by the user, where a circled region is a region selected by the user on the base image through a preset gesture, and a base sub-region is one of the sub-regions into which the base image is pre-divided.
It should be noted that the base sub-regions into which the base image is pre-divided may have the same or different sizes, which is not limited in this embodiment.
In this step, the user can use a preset gesture to circle each light-reflection sub-region on the camera preview interface, and these sub-regions together form the light-reflection region. For example, when the user opens the camera application of a smartphone and enters the camera preview interface, the interface displays the determined base image; the user taps a reflection-removal button on the preview interface, a preset-gesture dialog box pops up, and the user can then circle each light-reflection sub-region with the selected preset gesture.
In the embodiment of the invention, because the light-reflection region is obtained from user input, no computation is needed and the light-reflection region can be determined quickly, so the light-reflection region in the base image can be removed quickly while the camera is shooting, which improves processing efficiency.
Optionally, in an embodiment of the present invention, the light-reflection region includes M light-reflection sub-regions, where M is a positive integer. Before image processing is performed on the light-reflection region in the base image based on the images of the reference regions in the N-1 reference images, the image processing method may further include the following steps (1) to (3):
Step (1): divide the base image into T base sub-regions.
Wherein T is an integer greater than 1.
In the embodiment of the present invention, the base sub-regions may have the same or different sizes, which is not limited in this embodiment.
In this step, after the base image is determined, it may be divided into T base sub-regions of the same size. For example, a base image determined from the N images captured by the smartphone rear cameras is divided into T base sub-regions of 50 × 50 pixels.
Step (2): for each of the T base sub-regions, determine a first number, namely the number of pixels whose gray value is greater than a first threshold.
In the embodiment of the present invention, the pixel values in a base sub-region may be converted into gray values by the floating-point method, the integer method, the shift method, the averaging method, the green-only method, a Gamma correction algorithm, and the like, which is not limited in this embodiment. The first threshold may be set by the user based on practical tuning experience, or may be a system default value, which is likewise not limited in this embodiment.
In this step, the base image may be divided into T base sub-regions of the same size, the gray values in each base sub-region are determined by the averaging method, and the first number of pixels whose gray value is greater than the first threshold is then counted for each base sub-region. For example, let the first threshold be 200 and T be 100. The base image is divided into 100 base sub-regions of 2 × 2 pixels, and the gray values in each of the 100 base sub-regions are determined by the averaging method. If the gray values in base sub-region J1 are 210, 190, 230, and 150, comparing them with the first threshold gives 210 > 200, 190 < 200, 230 > 200, and 150 < 200, so the first number corresponding to J1 is 2. The first numbers of the other 99 base sub-regions are determined in the same way as for J1 and are not described again here.
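As an illustration of steps (1) and (2) only, the following sketch converts each base sub-region to grayscale with the averaging method and counts the pixels above the first threshold; the block size and threshold values are assumptions taken from the examples above, and the function names are not from the patent.
    import numpy as np

    def to_gray_average(rgb):
        # Averaging method: the gray value of a pixel is the mean of its R, G, B values.
        return rgb.astype(np.float32).mean(axis=2)

    def split_into_blocks(gray, block=50):
        # Cut the image into non-overlapping block x block base sub-regions,
        # keyed by the top-left pixel coordinates of each sub-region.
        h, w = gray.shape[:2]
        return {(i, j): gray[i:i + block, j:j + block]
                for i in range(0, h - block + 1, block)
                for j in range(0, w - block + 1, block)}

    def first_numbers(base_rgb, block=50, first_threshold=200):
        # First number per base sub-region: count of pixels with gray value > threshold.
        gray = to_gray_average(base_rgb)
        return {pos: int((sub > first_threshold).sum())
                for pos, sub in split_into_blocks(gray, block).items()}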
Step (3): determine the M light-reflection sub-regions based on the first number of each base sub-region.
In the embodiment of the present invention, the light-reflection sub-regions are selected from the base sub-regions, so the size of a light-reflection sub-region depends on the size of the base sub-regions; that is, if the base sub-regions differ in size, the light-reflection sub-regions differ in size as well, and vice versa.
In this step, after the first number of each base sub-region is determined, the M light-reflection sub-regions may be determined from the base sub-regions according to a preset gray-value requirement. For example, the base image is divided into 6 base sub-regions J1, J2, J3, J4, J5, and J6 of the same size, and their first numbers are determined to be 70, 90, 85, 60, 80, and 95, respectively. If base sub-region J6 meets the preset gray-value requirement, J6 is determined to be a light-reflection sub-region, and the light-reflection region of the base image is therefore {J6}.
In the embodiment of the invention, by dividing the base image into T base sub-regions, the light-reflection sub-regions can be selected automatically from the base sub-regions so that the light-reflection region of the base image can be removed without any user operation, which improves processing efficiency. Moreover, determining the light-reflection sub-regions from the first number alone reduces the amount of computation and further improves processing efficiency.
Optionally, in an embodiment of the present invention, the operation of determining the M light-reflection sub-regions based on the first number of each base sub-region may specifically be implemented by the following sub-step A:
Sub-step A: for each base sub-region, determine the base sub-region to be a light-reflection sub-region when its first ratio is greater than a second threshold, where the first ratio is the ratio of the first number of the base sub-region to the total number of pixels of the base sub-region.
In this embodiment of the present invention, the second threshold may be set by the user based on practical tuning experience or may be a system default value, which is not limited in this embodiment.
In this step, after the base image is divided into T base sub-regions of the same size and the first number of each base sub-region is determined, the first ratio of the first number to the total number of pixels is computed for each base sub-region and then compared with the second threshold to decide whether the base sub-region is a light-reflection sub-region. For example, let the second threshold be 90% and T be 6. The base image is divided into 6 base sub-regions J1, J2, J3, J4, J5, and J6 of 10 × 10 pixels, so the total number of pixels of each base sub-region is 100. The first numbers of the 6 base sub-regions are determined to be 70, 90, 85, 60, 80, and 95, so their first ratios are 70%, 90%, 85%, 60%, 80%, and 95%. Comparing the first ratios with the second threshold gives 70% < 90%, 90% = 90%, 85% < 90%, 60% < 90%, 80% < 90%, and 95% > 90%, so it can be determined that base sub-region J6 contains reflected light, and J6 is therefore determined to be a light-reflection sub-region.
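A minimal sketch of sub-step A, assuming the first_numbers helper from the previous sketch and the 90% second threshold used in the example; none of the names come from the patent.
    def reflective_by_ratio(first_nums, block=50, second_threshold=0.90):
        # A base sub-region is a light-reflection sub-region when
        # (first number / total pixel count) exceeds the second threshold.
        total = block * block
        return [pos for pos, n in first_nums.items() if n / total > second_threshold]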
In the embodiment of the invention, after the base image is divided into T base sub-regions, determining the light-reflection sub-regions solely from the relation between the first ratio and the second threshold is a simple operation; it reduces the amount of computation needed to find the light-reflection sub-regions and improves processing efficiency.
Optionally, in an embodiment of the present invention, the operation of determining the M light-reflection sub-regions based on the first number of each base sub-region may specifically be implemented by the following sub-steps B and C:
Sub-step B: for each of the T base sub-regions, determine the number of pixels whose gray value is smaller than a third threshold in the adjacent base sub-regions of that base sub-region, to obtain a second number corresponding to the base sub-region.
In the embodiment of the present invention, the gray values in the adjacent base sub-regions may be determined by the floating-point method, the integer method, the shift method, the averaging method, the green-only method, a Gamma correction algorithm, and the like, which is not limited in this embodiment. The third threshold may be set by the user based on practical tuning experience, or may be a system default value, which is likewise not limited in this embodiment.
In this step, after the base image is divided into T base sub-regions of the same size, the adjacent base sub-regions of each base sub-region are determined first, the gray values in each adjacent base sub-region are determined by the averaging method, and the number of pixels whose gray value is smaller than the third threshold is then counted. For example, let the third threshold be 50 and T be 100. The base image is divided into 100 base sub-regions of 2 × 2 pixels, and the adjacent base sub-regions of each of the 100 base sub-regions are determined. Suppose the adjacent base sub-regions of J1 are J2, J11, and J12; the gray values in J2, J11, and J12 are determined by the averaging method. If the gray values in J2 are 110, 90, 30, and 70, comparing them with the third threshold gives 110 > 50, 90 > 50, 30 < 50, and 70 > 50, so the number of pixels in J2 whose gray value is smaller than the third threshold is 1. In the same way, the count for J11 is determined to be 3 and the count for J12 to be 2, so the second number corresponding to J1 is 1 + 3 + 2 = 6. The second numbers of the other 99 base sub-regions are determined in the same way as for J1 and are not described again here.
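The following sketch computes the second number of every base sub-region using the split_into_blocks helper sketched earlier; 8-connected neighbouring blocks are assumed, consistent with the example above, and all names are illustrative.
    def second_numbers(gray, block=50, third_threshold=50):
        # Second number: over all adjacent (8-connected) base sub-regions,
        # count the pixels whose gray value is below the third threshold.
        blocks = split_into_blocks(gray, block)
        second = {}
        for (i, j) in blocks:
            count = 0
            for di in (-block, 0, block):
                for dj in (-block, 0, block):
                    if (di, dj) == (0, 0):
                        continue
                    nb = blocks.get((i + di, j + dj))
                    if nb is not None:
                        count += int((nb < third_threshold).sum())
            second[(i, j)] = count
        return second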
Sub-step C: determine the M light-reflection sub-regions from the base sub-regions based on the first number and the second number of each base sub-region.
In this step, after the first number and the second number of each base sub-region are determined, the M light-reflection sub-regions may be determined from the base sub-regions according to a preset gray-value requirement. For example, the base image is divided into 6 base sub-regions J1, J2, J3, J4, J5, and J6 of 10 × 10 pixels; the first numbers of the 6 base sub-regions are determined to be 70, 90, 85, 60, 80, and 95, and their second numbers are determined to be 300, 480, 260, 285, 490, and 300, respectively. Based on the first numbers and second numbers of the 6 base sub-regions, base sub-region J6 is taken as the light-reflection sub-region, and the light-reflection region of the base image is {J6}.
In the embodiment of the invention, after the base image is divided into T base sub-regions, determining the light-reflection sub-regions from both the first number and the second number increases the amount of computation, but it improves the accuracy of the determined light-reflection sub-regions and thus the reliability with which the image processing method removes the light-reflection region.
Optionally, in the embodiment of the present invention, the operation of determining the M light-reflection sub-regions from the base sub-regions based on the first number and the second number of each base sub-region may specifically be implemented by the following sub-step D:
Sub-step D: for each base sub-region, determine the base sub-region to be a light-reflection sub-region when the first ratio is greater than the second threshold and the second number equals the total number of pixels of its adjacent base sub-regions, where the first ratio is the ratio of the first number of the base sub-region to the total number of pixels of the base sub-region.
In the embodiment of the present invention, the total number of pixels of the adjacent base sub-regions of a base sub-region refers to the sum of the pixel counts of all of its adjacent base sub-regions. For example, the base image is divided into 6 base sub-regions of 10 × 10 pixels, so each base sub-region contains 100 pixels; if the adjacent base sub-regions of J1 are J2, J4, and J5, the total number of pixels of the adjacent base sub-regions of J1 is 100 + 100 + 100 = 300.
In this step, after the base image is divided into T base sub-regions of the same size and the first number and second number of each base sub-region are determined, the first ratio of the first number to the total pixel count of the base sub-region and the total pixel count of its adjacent base sub-regions are computed for each base sub-region; the first ratio is then compared with the second threshold, and the second number is compared with the total pixel count of the adjacent base sub-regions, to decide whether the base sub-region is a light-reflection sub-region.
For example, let the second threshold be 90% and T be 6. The base image is divided into 6 base sub-regions J1, J2, J3, J4, J5, and J6 of 10 × 10 pixels, so each base sub-region contains 100 pixels. The adjacent base sub-regions are: J2, J4, and J5 for J1; J1, J3, J4, J5, and J6 for J2; J2, J5, and J6 for J3; J1, J2, and J5 for J4; J1, J2, J3, J4, and J6 for J5; and J2, J3, and J5 for J6. The first numbers of the 6 base sub-regions are determined to be 70, 90, 85, 60, 80, and 95, their second numbers to be 300, 480, 260, 285, 490, and 300, and the total pixel counts of their adjacent base sub-regions to be 300, 500, 300, 300, 500, and 300, respectively. Comparing the first ratios with the second threshold gives 70% < 90%, 90% = 90%, 85% < 90%, 60% < 90%, 80% < 90%, and 95% > 90%; comparing the second numbers with the total pixel counts of the adjacent base sub-regions gives 300 = 300, 480 < 500, 260 < 300, 285 < 300, 490 < 500, and 300 = 300. The only base sub-region satisfying "the first ratio is greater than the second threshold and the second number equals the total pixel count of the adjacent base sub-regions" is J6, so J6 is determined to be a light-reflection sub-region.
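A sketch of sub-step D under the same assumptions (it reuses split_into_blocks and second_numbers from the earlier sketches); the thresholds mirror the example values and are not prescribed by the patent.
    def reflective_by_ratio_and_neighbours(gray, block=50, first_threshold=200,
                                           second_threshold=0.90, third_threshold=50):
        blocks = split_into_blocks(gray, block)
        second = second_numbers(gray, block, third_threshold)
        out = []
        for (i, j), b in blocks.items():
            first = int((b > first_threshold).sum())          # first number
            neighbour_total = sum(blocks[(i + di, j + dj)].size
                                  for di in (-block, 0, block)
                                  for dj in (-block, 0, block)
                                  if (di, dj) != (0, 0) and (i + di, j + dj) in blocks)
            # Condition of sub-step D: bright inside, uniformly dark all around.
            if first / b.size > second_threshold and second[(i, j)] == neighbour_total:
                out.append((i, j))
        return out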
In the embodiment of the invention, for each base sub-region, determining whether it is a light-reflection sub-region from both the relation between the first ratio and the second threshold and the relation between the second number and the total pixel count of the adjacent base sub-regions is more computationally involved, but it ensures the accuracy of the determined light-reflection sub-regions and improves the reliability of the image processing method.
Optionally, in an embodiment of the present invention, the reference region includes M reference sub-regions, and the image processing method may further include the following steps (4) and (5):
Step (4): for each light-reflection sub-region in the light-reflection region, determine the sub-region matching the position of that light-reflection sub-region in each reference image, to obtain K candidate sub-regions, where K is a positive integer.
In this embodiment of the present invention, step (4) may be performed after step (3) or sub-step A, and may also be performed after sub-step C or sub-step D, so as to determine, in the reference images, the reference region corresponding to the light-reflection region.
It should be noted that if a coordinate system is established for the base image and for each reference image with the pixel at the lower-left corner as the origin, and the coordinates of a light-reflection sub-region in the base image are (X, Y), then the coordinates of the corresponding candidate sub-region in each reference image are also (X, Y), where the coordinates of the central pixel of each light-reflection sub-region and each candidate sub-region are used as the coordinates of the region.
In this step, after the light-reflection region of the base image is determined, for each light-reflection sub-region in the light-reflection region, the region matching the position of that light-reflection sub-region may be determined in each reference image, so as to obtain the candidate sub-regions of that light-reflection sub-region. For example, 1 base image and 3 reference images are determined, and the light-reflection region of the base image is {J2, J6}. For light-reflection sub-region J2, the 3 regions matching its position in the 3 reference images are F12, F22, and F32, so the candidate sub-regions of J2 are F12, F22, and F32. For light-reflection sub-region J6, the 3 sub-regions matching its position in the 3 reference images are F16, F26, and F36, so the candidate sub-regions of J6 are F16, F26, and F36.
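The position-matching in step (4) can be sketched as a simple same-coordinate crop from each (already aligned) reference image; the block size and names are assumptions.
    def candidate_subregions(aligned_references, pos, block=50):
        # Same top-left coordinates as the light-reflection sub-region of the base image.
        i, j = pos
        return [ref[i:i + block, j:j + block] for ref in aligned_references]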
Step (5): according to the sharpness of each candidate sub-region, select the candidate sub-region whose sharpness meets a preset sharpness condition as the reference sub-region.
In the embodiment of the invention, the sharpness of a candidate sub-region may be obtained with a Brenner gradient function, a Laplacian gradient function, a gray-variance product function, a variance function, an energy gradient function, a Vollath function, an entropy function, an EAV point-sharpness function, Reblur second-order blur, NRSS gradient structure similarity, and the like; the preset sharpness condition may be set by the user based on practical tuning experience or may be a system default, which is not limited in this embodiment.
In this step, after the candidate sub-regions of each light-reflection sub-region are determined, the candidate sub-region whose sharpness meets the preset sharpness condition is selected, according to the sharpness of each candidate sub-region, as the reference sub-region corresponding to that light-reflection sub-region. For example, 1 base image and 3 reference images are determined, and the light-reflection region of the base image is {J2, J6}; the candidate sub-regions of J2 are F12, F22, and F32, and the candidate sub-regions of J6 are F16, F26, and F36. The sharpness of F12 is 100, that of F22 is 150, and that of F32 is 200; if F32 is the candidate sub-region of J2 whose sharpness meets the preset sharpness condition, F32 is selected as the reference sub-region corresponding to J2. The sharpness of F16 is 400, that of F26 is 90, and that of F36 is 70; if F16 is the candidate sub-region of J6 whose sharpness meets the preset sharpness condition, F16 is selected as the reference sub-region corresponding to J6.
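Any of the sharpness measures listed above could be used; as an illustration only, the sketch below scores a grayscale candidate sub-region with the variance of its Laplacian (one of the listed options), using OpenCV purely for convenience.
    import cv2
    import numpy as np

    def laplacian_sharpness(block_gray):
        # Higher variance of the Laplacian response indicates a sharper patch.
        return float(cv2.Laplacian(block_gray.astype(np.float64), cv2.CV_64F).var())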
In the embodiment of the invention, the sub-region matching the position of the light-reflection sub-region is selected from each reference image as a candidate sub-region, so the positional and spatial relations of the people or objects in the base image are not changed when the light-reflection sub-region is replaced by the selected candidate sub-region; and selecting, via the preset sharpness condition, the candidate sub-region that meets this condition as the reference sub-region ensures the sharpness of the base image after replacement, thereby optimizing the shooting result.
Optionally, in the embodiment of the present invention, selecting, according to the sharpness of each candidate sub-region, the candidate sub-region whose sharpness meets the preset sharpness condition as the reference sub-region may specifically be implemented by the following sub-steps E and F:
Sub-step E: determine the candidate sub-region with the highest sharpness according to the sharpness of each candidate sub-region.
In the embodiment of the present invention, the candidate sub-region with the highest sharpness may be obtained by sorting the sharpness values of the candidate sub-regions, or by using a mathematical function, which is not limited in this embodiment.
In this step, after the sharpness of each candidate sub-region is determined, the candidate sub-region with the highest sharpness may be obtained by sorting the sharpness values in descending order. For example, the candidate sub-regions of light-reflection sub-region J2 are F12, F22, and F32, with sharpness 100, 150, and 200, respectively; sorting these values in descending order gives 200, 150, 100, so the candidate sub-region with the highest sharpness is F32.
Sub-step F: use the candidate sub-region with the highest sharpness as the reference sub-region.
In this step, after the candidate sub-region with the highest sharpness is determined among all candidate sub-regions of a light-reflection sub-region, it may be used as the reference sub-region corresponding to that light-reflection sub-region. For example, if the candidate sub-region of light-reflection sub-region J2 with the highest sharpness is F32, then F32 is used as the reference sub-region corresponding to J2.
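Sub-steps E and F then amount to an argmax over the sharpness scores; a one-line sketch, assuming the laplacian_sharpness helper above.
    def pick_sharpest(gray_candidates):
        # Sub-steps E-F: the candidate with the highest sharpness becomes the reference sub-region.
        return max(gray_candidates, key=laplacian_sharpness)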
In the embodiment of the invention, using the candidate sub-region with the highest sharpness as the reference sub-region improves the reliability with which the image processing method removes the light-reflection region and ensures the sharpness of the base image after the reference sub-region replaces the light-reflection sub-region, thereby optimizing the shooting result.
Optionally, in the embodiment of the present invention, the operation of selecting, according to the sharpness of each candidate sub-region, the candidate sub-region whose sharpness meets the preset sharpness condition as the reference sub-region may also be implemented by the following sub-steps G and H:
Sub-step G: according to the sharpness of each candidate sub-region, determine the target candidate sub-regions whose sharpness is greater than or equal to a preset sharpness.
In the embodiment of the present invention, the preset sharpness may be set by the user based on practical tuning experience or may be a system default value, which is not limited in this embodiment.
In this step, after the sharpness of each candidate sub-region is determined, the target candidate sub-regions whose sharpness is greater than or equal to the preset sharpness may be determined by comparing the sharpness of each candidate sub-region with the preset sharpness. For example, let the preset sharpness be 200; the candidate sub-regions of light-reflection sub-region J2 are F12, F22, and F32, with sharpness 150, 250, and 200, respectively. Comparing these values with the preset sharpness gives 150 < 200, 250 > 200, and 200 = 200, so the target candidate sub-regions are F22 and F32.
Sub-step H: select one of the target candidate sub-regions as the reference sub-region.
In the embodiment of the present invention, the target candidate sub-region may be selected at random or according to a preset requirement, which is not limited in this embodiment.
In this step, after the target candidate sub-regions are determined, one of them may be selected as the reference sub-region according to a preset requirement. For example, the preset requirement may be that a target candidate sub-region is considered to contain reflected light if its third ratio is greater than the second threshold and its fourth number equals the fourth total pixel count, where the third ratio is the ratio of the number of pixels in the target candidate sub-region whose gray value is greater than a fifth threshold to the total number of pixels of the target candidate sub-region, the fourth number is the number of pixels whose gray value is below a gray-value threshold in the adjacent base sub-regions of the light-reflection sub-region corresponding to the target candidate sub-region, the fourth total pixel count is the sum of the pixel counts of those adjacent base sub-regions, and the second threshold may be 90%. Suppose the target candidate sub-regions are F22 and F32 and they correspond to light-reflection sub-region J2. For F22, the third ratio is 95%, the fourth number is 500, and the fourth total pixel count is 500; since the third ratio 95% is greater than the second threshold 90% and the fourth number 500 equals the fourth total pixel count 500, F22 contains reflected light. For F32, the third ratio is 93%, the fourth number is 450, and the fourth total pixel count is 500; since the third ratio 93% is greater than the second threshold 90% but the fourth number 450 does not equal the fourth total pixel count 500, F32 does not contain reflected light. Accordingly, the target candidate sub-region F32, which has no light-reflection region, is selected as the reference sub-region corresponding to light-reflection sub-region J2.
It is understood that the fifth threshold may be set by the user based on actual adjustment experience, or may be a default value of the system; the fifth threshold may be equal to the first threshold, or may not be equal to the first threshold, which is not limited in the embodiment of the present invention.
In the embodiment of the invention, selecting the target candidate sub-region according to the preset requirement as the reference sub-region can further improve the reliability of the image processing method, ensure the effect of removing the light-reflection region from the base image, and optimize the shooting result.
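A rough sketch of sub-steps G and H under the interpretation above: candidates are first filtered by the preset sharpness, and a candidate that does not itself look like a reflection (here approximated by the bright-pixel ratio test alone, without the neighbouring-sub-region check) is preferred. All names and thresholds are illustrative assumptions, not the patent's specification.
    def pick_by_threshold(gray_candidates, preset_sharpness=200.0,
                          fifth_threshold=200, ratio_limit=0.90):
        # Sub-step G: keep only candidates whose sharpness reaches the preset value.
        targets = [c for c in gray_candidates
                   if laplacian_sharpness(c) >= preset_sharpness]
        # Sub-step H: prefer a target candidate that does not itself contain reflection.
        for c in targets:
            bright_ratio = float((c > fifth_threshold).mean())
            if bright_ratio <= ratio_limit:
                return c
        return targets[0] if targets else None   # fall back if all targets look reflective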
Optionally, in this embodiment of the present invention, before the operation of determining, for each light-reflection sub-region in the light-reflection region, the sub-region in each reference image that matches the position of that light-reflection sub-region to obtain the K candidate sub-regions, the image processing method may further include the following step (6):
Step (6): for each reference image, align the reference image with the base image based on a preset image alignment algorithm.
In the embodiment of the present invention, step (6) may be performed after step (3) or sub-step A and before step (4), or after sub-step C or sub-step D and before step (4), which is not limited in this embodiment. In addition, the preset image alignment algorithm may be an SAT (spatial alignment transform) algorithm, which aligns image frames of the same object captured by different cameras from different angles so that the images output by the different cameras look as if they came from a single camera.
It should be noted that, for the rear cameras of a smartphone, the different camera positions produce images with different shooting angles, so the light-reflection areas of the different images differ; at the same time, because the positional differences among the N cameras are small, the reference images from the different cameras can be aligned to the base image with the SAT algorithm.
In this step, after the base image and the reference images are determined, each reference image may be aligned with the base image using the SAT algorithm. For example, 1 base image and 3 reference images are determined, and the SAT algorithm is applied to align each of the 3 reference images with the base image, so that the image content of the sub-region in each reference image that matches the position of a base sub-region is consistent with the image content of that base sub-region in the base image; across all N images, the image coordinates of the reference region in the aligned reference images then coincide with the image coordinates of the light-reflection region in the base image.
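The SAT algorithm itself is not spelled out here, so the sketch below uses a generic feature-based alignment (ORB matching plus a RANSAC homography) purely as an illustrative stand-in for warping a reference image onto the base image; it expects 8-bit grayscale inputs, and none of it is the patent's implementation.
    import cv2
    import numpy as np

    def align_to_base(reference_gray, base_gray):
        orb = cv2.ORB_create(1000)
        k1, d1 = orb.detectAndCompute(reference_gray, None)
        k2, d2 = orb.detectAndCompute(base_gray, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        matches = sorted(matches, key=lambda m: m.distance)[:200]   # best matches only
        src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        h, w = base_gray.shape[:2]
        # Warp the reference view into the base image's coordinate frame.
        return cv2.warpPerspective(reference_gray, H, (w, h))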
In the embodiment of the invention, aligning the reference images with the base image through the preset image alignment algorithm makes the image content of the position-matched sub-regions consistent with that of the corresponding base sub-regions, so that after a light-reflection sub-region is replaced by the selected reference sub-region, the content of the reference sub-region is continuous with that of the base sub-regions adjacent to the light-reflection sub-region and the shooting result is not affected.
Optionally, in the embodiment of the present invention, the operation of performing image processing on the light-reflection region in the base image based on the images of the reference regions in the N-1 reference images may specifically be implemented by the following sub-step I:
Sub-step I: for each light-reflection sub-region in the light-reflection region, replace the pixel values of the pixels in that light-reflection sub-region with first target pixel values, where the first target pixel values are the pixel values of the pixels in the reference sub-region corresponding to that light-reflection sub-region.
In this embodiment of the present invention, the first target pixel values may be the pixel values of all of the pixels in the reference sub-region or of only a part of them.
In this step, after the light-reflection region and the reference region are determined, the pixel values of the pixels in each light-reflection sub-region of the light-reflection region are replaced with the pixel values of the pixels in the corresponding reference sub-region. For example, the light-reflection region is {J2, J6} and the reference region is {F32, F16}; the reference sub-region corresponding to J2 is F32 and the reference sub-region corresponding to J6 is F16, so all pixel values in J2 are replaced with the pixel values in F32, and all pixel values in J6 are replaced with the pixel values in F16.
For example, if a pixel in a light-reflection sub-region of the base image has coordinates (x, y) and pixel value A(x, y), the value A(x, y) at coordinates (x, y) in the base image may be directly replaced with the pixel value B(x, y) of the reference image at the same coordinates (x, y); proceeding in this way, all pixel values in all light-reflection sub-regions of the base image are replaced with the pixel values of their corresponding reference sub-regions. For a base image with multiple light-reflection sub-regions, the pixel values of each light-reflection sub-region are replaced with those of its corresponding reference sub-region.
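Sub-step I is a direct in-place copy; a minimal sketch, assuming numpy arrays and the top-left block coordinates used in the earlier sketches.
    def replace_subregion(base_img, reference_block, pos, block=50):
        # Overwrite the light-reflection sub-region with the chosen reference sub-region.
        i, j = pos
        base_img[i:i + block, j:j + block] = reference_block
        return base_img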
In the embodiment of the invention, the light-reflection region of the base image can be removed by replacing all pixel values of the light-reflection sub-regions with the pixel values of the corresponding reference sub-regions, which ensures the sharpness of the base image after the light-reflection region is removed and optimizes the shooting result of the camera.
Optionally, in an embodiment of the present invention, the image processing method may further include the following steps:
Step (7), detecting whether a second light reflection region exists in the replaced reference image.
It will be appreciated that the second light reflecting area may comprise a second light reflecting sub-area.
In this step, the second light reflection area of the replaced reference image is determined, which is similar to the determination of the light reflection area of the reference image and is not repeated here.
Step (8), if the second light reflection region does not exist, determining the replaced reference image as the target image.
In this embodiment of the present invention, since the second light reflection region is determined from the replaced reference image, the second light reflection region may cover all or part of the reference sub-regions in the reference region, or may not overlap any reference sub-region in the first replacement region, which is not limited in this embodiment of the present invention.
In this step, after the second light reflection region of the replaced reference image is determined, whether the second light reflection region is empty may be detected. If it is empty, it may be determined that no second light reflection region exists, and the replaced reference image may be determined as the target image. For example, after the first light reflection region in the reference image is replaced by the first replacement region, the replaced reference image is obtained; the replaced reference image is divided into sub-regions, and it is determined that no sub-region contains a light reflection region, so the second light reflection region is empty, that is, the second light reflection region does not exist.
Step (9), if a second light reflection region exists, adjusting the shooting angle of a target camera and acquiring a corrected image with the target camera; and replacing the second light reflection region according to the corrected image; the target camera is a camera used for shooting the reference image.
In this embodiment of the present invention, the second light-reflecting region may be a part of a reference sub-region in the reference region, and after determining a reference image corresponding to the part of the reference sub-region, the target camera may be a camera that outputs the reference image.
In this step, after the second light reflection region of the replaced reference image is determined, whether the second light reflection region is empty is detected. If it is not empty, it is determined that the second light reflection region exists, the shooting angle of the target camera is adjusted, a corrected image is acquired with the target camera, and the second light reflection region is replaced according to the corrected image.
It can be understood that the corrected image obtained by the target camera is similar to the N images acquired by the N cameras, and is not repeated here; the replacing process of the second reflection region according to the corrected image may be determining a designated region according to the corrected image and performing image processing on the second reflection region according to the designated region, where determining the designated region according to the corrected image is similar to determining a reference region in the reference image corresponding to the reflection region, and performing image processing on the second reflection region according to the designated region is similar to performing image processing on the reflection region according to the reference region, and is not described here any more.
In the embodiment of the invention, by detecting whether a second light reflection region exists in the replaced reference image, it can be determined whether the light reflection region of the replaced reference image needs further removal, so that the obtained target image contains no light reflection region, ensuring the effect of removing the light reflection region from the reference image.
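The control flow of steps (7) to (9) can be sketched as follows. The helper callables detect_glare, replace_glare and adjust_and_capture are hypothetical stand-ins for the operations described above, and the bound on the number of rounds is an assumption of the sketch rather than part of the embodiment.

def remove_glare_iteratively(replaced_img, target_camera,
                             detect_glare, replace_glare, adjust_and_capture,
                             max_rounds=3):
    # detect_glare(img)            -> list of second light reflection sub-regions (empty if none)
    # adjust_and_capture(camera)   -> corrected image captured after adjusting the camera angle
    # replace_glare(img, source)   -> img with the second light reflection region replaced from source
    img = replaced_img
    for _ in range(max_rounds):
        if not detect_glare(img):            # second light reflection region is empty: step (8)
            return img                       # the replaced reference image is the target image
        corrected = adjust_and_capture(target_camera)   # step (9)
        img = replace_glare(img, corrected)
    return img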
Optionally, in an embodiment of the present invention, the image processing method further includes the following steps (10) to (11):
step (10), under the condition that no alternative subarea with definition meeting a preset definition condition exists, adjusting the shooting angle of a target camera, and controlling the target camera to shoot a target image; the target camera is a camera used for shooting the reference image.
In the embodiment of the present invention, the preset definition condition may be set by the user based on actual adjustment experience, or may be a default numerical value of the system. For example, the preset definition condition may be that the definition of the candidate sub-region is greater than or equal to a preset definition, or the definition is maximum. Therefore, step (10) may be implemented after step (5), sub-step F, or sub-step H, which is not limited in this embodiment of the present invention.
It should be noted that the reference image in step (10) refers to the reference image in step 302. Thus, the target camera may be one of the cameras that captured the N-1 reference images in step 302.
In this step, after the T candidate sub-regions matched with the position of each light-reflecting sub-region of the light reflection region in the reference image are determined for the N-1 reference images acquired in step 302, if it is determined that no candidate sub-region has a definition meeting the preset definition condition, the shooting angle of the target camera that captured the reference image may be adjusted, and the target camera may be controlled to capture an image again as the target image.
For example, a smartphone is provided with 4 rear cameras, through which 1 reference image and 3 reference images are acquired; the 3 cameras that capture the 3 reference images are taken as target cameras. Suppose the preset definition condition is that the definition of a candidate sub-region is greater than or equal to a preset definition of 300, the candidate sub-regions corresponding to the light-reflecting sub-region J2 are F12, F22 and F32, and the definitions of F12, F22 and F32 are 150, 250 and 200, respectively. Comparing the definition of each candidate sub-region with the preset definition gives 150 < 300, 250 < 300 and 200 < 300, so there is no candidate sub-region whose definition meets the preset definition condition; the shooting angles of the 3 target cameras are then adjusted, and the 3 target cameras are controlled to capture images again as target images.
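A minimal sketch of selecting a reference sub-region by definition is given below, using the Brenner gradient function mentioned later in the text as the sharpness measure. The candidate list format and the threshold are placeholders standing in for the worked numbers above; real Brenner values depend on image content and scale, so the figure 300 is illustrative only.

import numpy as np

def brenner_definition(gray):
    # Brenner gradient: sum of squared differences over a two-pixel horizontal shift.
    g = gray.astype(np.float64)
    d = g[:, 2:] - g[:, :-2]
    return float(np.sum(d * d))

def pick_reference_subregion(candidates, preset_definition=300.0):
    # candidates: list of (name, grayscale patch) pairs for one light-reflecting sub-region.
    for name, patch in candidates:
        if brenner_definition(patch) >= preset_definition:
            return name
    return None   # no candidate meets the condition: fall through to step (10)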
Step (11), performing image replacement processing on the light reflection region according to the target image.
It is understood that the detailed description of step (11) is similar to step 302, and is not repeated here.
In the embodiment of the invention, when there is no candidate sub-region whose definition meets the preset definition condition, the shooting angle of the target camera is adjusted, the target camera is controlled to capture an image again as the target image, and image replacement processing is performed on the light reflection region according to the target image. In this way, after the image replacement processing, the light reflection region no longer exists in the reference image, ensuring the effect of removing the light reflection region from the reference image.
Optionally, in the embodiment of the present invention, the operation of adjusting the shooting angle of the target camera and controlling the target camera to shoot the target image may specifically be implemented by the following substeps J to K:
and step J, adjusting the shooting angle of the target camera, and detecting whether a preview image acquired by the target camera has an alternative sub-region with definition meeting a preset definition condition or not in the adjustment process.
It should be noted that if the brightness values of the same region are too high in all of the N images acquired by the N cameras, it indicates that the user is likely shooting against the light toward a strong light source, so the N cameras at their different positions all capture the same reflective region. For this situation, a movable-camera function can be added, that is, a spherical chassis is added at the bottom of the camera so that the camera can rotate flexibly up, down, left and right like a human eye.
In the embodiment of the invention, a spherical chassis for moving the camera may be arranged at the bottom of the camera. Correspondingly, after the user clicks an automatic reflection-removal button on the camera preview interface, the camera rotates 360 degrees anticlockwise or clockwise to adjust the shooting angle of the target camera; or, after the user clicks the reflection-removal button on the camera preview interface, referring to fig. 6, the user selects one virtual camera in the virtual camera dialog box 600 and moves it up, down, left or right to adjust the shooting angle of the target camera, which is not limited in the embodiment of the present invention. In addition, the virtual camera corresponding to the camera outputting the reference image cannot rotate or slide, which ensures that the determined reference image is not changed during the image processing method.
It should be noted that the operation of adjusting the shooting angle of the target camera in the sub-step J may be performed when it is determined that the second light reflection region exists in the replaced reference image in which the light reflection region is replaced by the reference region, or may be performed when there is no candidate sub-region whose definition meets the preset definition condition.
In this step, a user may click the reflection-removal button on the camera preview interface. When it is determined that no candidate sub-region has a definition meeting the preset definition condition, a virtual camera dialog box pops up on the camera preview interface, and the user may select one virtual camera in the dialog box and move it up, down, left or right to adjust the shooting angle of the target camera corresponding to that virtual camera. During the adjustment, whether the preview image collected by the target camera contains a candidate sub-region whose definition meets the preset definition condition is detected in real time. For example, referring to fig. 6, the preset definition condition is that the definition of a candidate sub-region is greater than or equal to a preset definition of 300. The user clicks the reflection-removal button on the camera preview interface; when it is determined that no candidate sub-region has a definition greater than or equal to the preset definition, a virtual camera dialog box 600 pops up on the camera preview interface, and the user selects one virtual camera 601 in the dialog box 600 and moves it up, down, left or right. During the adjustment, whether the preview image acquired by the target camera corresponding to the virtual camera 601 contains a candidate sub-region whose definition meets the preset definition condition is detected in real time.
It can be understood that detecting whether the preview image acquired by the target camera contains a candidate sub-region whose definition meets the preset definition condition may proceed as follows: first, a preview image acquired by the target camera is obtained; then, the sub-region matched with the position of each light-reflecting sub-region is determined in each preview image, yielding T candidate sub-regions; finally, whether a candidate sub-region whose definition meets the preset definition condition exists is detected according to the definition of each candidate sub-region. Obtaining the preview image acquired by the target camera is similar to obtaining the N images acquired by the N cameras; determining, in each preview image, the sub-region matched with the position of each light-reflecting sub-region to obtain T candidate sub-regions is similar to determining, in each reference image, the sub-region matched with the position of each light-reflecting sub-region to obtain T candidate sub-regions; and detecting, according to the definition of each candidate sub-region, whether a candidate sub-region whose definition meets the preset definition condition exists is similar to selecting, according to the definition of each candidate sub-region, the candidate sub-region whose definition meets the preset definition condition. These are not repeated here.
In the embodiment of the invention, when the image captured by the camera does not meet the preset definition requirement, the user can swipe the virtual camera, or the camera can automatically rotate 360 degrees anticlockwise or clockwise, to obtain the required image. In this way the reflection can be removed under any condition, the effect of eliminating the light reflection region of the reference image is ensured, the clarity of the final picture captured by the camera is ensured, and the shooting effect is optimized.
Step K, stopping adjusting the shooting angle and controlling the target camera to shoot the target image when detecting that the definition of the designated area of the preview image collected by the target camera meets a preset definition condition and a second ratio of the designated area of the preview image is smaller than or equal to a second threshold value; the designated area is an area matched with the position of the light reflecting area; the second ratio is a ratio of a third number to a total number of pixels in the designated area, and the third number is a number of pixels in the designated area having a gray value greater than a fourth threshold.
In the embodiment of the invention, the gray value of the pixel point in the designated area can be determined by methods such as a floating point method, an integer method, a shift method, an average value method, a green-only method, a Gamma correction algorithm and the like; the definition of a designated region of a preview image can be determined by a Brenner gradient function, a Laplacian gradient function, a gray variance product function, a variance function, an energy gradient function, a Vollath function, an entropy function, an EAV point sharpness algorithm function, Reblur second-order blur, NRSS gradient structure similarity, and the like; the fourth threshold may be set by the user based on actual adjustment experience, or may be a default value of the system; the fourth threshold may be equal to the first threshold, or may not be equal to the first threshold, which is not limited in the embodiment of the present invention.
In the embodiment of the present invention, the condition for stopping the adjustment of the shooting angle is that no light reflection region remains after the designated region in the preview image is selected and image processing such as pixel replacement is performed on the light reflection region. As in sub-step D, where the total number of pixels of the adjacent reference sub-regions of each light-reflecting sub-region in the light reflection region is determined to be equal to the second number, it can be determined that the adjustment of the shooting angle may stop only if the second ratio is less than or equal to the second threshold.
In this step, the definition of the designated region of the preview image collected by the target camera is determined through a Brenner gradient function; the gray values of the pixel points in the designated region are determined through an average-value method, the number of pixel points whose gray value is greater than the fourth threshold is counted to obtain the third number, and the ratio of the third number to the total number of pixel points in the designated region is calculated to obtain the second ratio. Then, whether the definition of the designated region meets the preset definition condition and whether the second ratio is less than or equal to the second threshold are detected; if both conditions are met, the adjustment of the shooting angle is stopped and the target camera is controlled to capture the target image.
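A minimal sketch of this stop condition is given below: the definition of the designated region is measured with the Brenner gradient, the gray values are obtained with the average-value method, and the second ratio of over-bright pixels is compared with the second threshold. All numeric thresholds in the sketch are placeholders, not values from the embodiment.

import numpy as np

def may_stop_adjusting(designated_rgb, preset_definition=300.0,
                       fourth_threshold=230, second_threshold=0.05):
    # Gray values via the average-value method (mean of the three colour channels).
    gray = designated_rgb.astype(np.float64).mean(axis=2)
    # Definition of the designated region via the Brenner gradient function.
    d = gray[:, 2:] - gray[:, :-2]
    definition = float(np.sum(d * d))
    # Second ratio: pixels brighter than the fourth threshold over all pixels in the region.
    third_number = int(np.count_nonzero(gray > fourth_threshold))
    second_ratio = third_number / gray.size
    return definition >= preset_definition and second_ratio <= second_threshold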
It can be understood that the gray value of the pixel point in the specified region is determined to be similar to the gray value of the pixel point in the reference sub-region; controlling the target camera to shoot a target image, wherein the shooting is similar to the obtaining of N images collected by N cameras; comparing the second ratio with the second threshold is similar to comparing the first ratio with the second threshold, and is not repeated here.
In the embodiment of the invention, under the condition that the candidate subarea does not meet the preset definition requirement, firstly, whether the definition of the appointed area of the preview image acquired by the target camera meets the preset definition condition or not is detected in real time in the process that a user moves the virtual camera or the camera automatically rotates anticlockwise or clockwise by 360 degrees so as to obtain the preview image meeting the preset definition condition; then, the second ratio of the designated area of the preview image is compared with the second threshold value so as to ensure that the light reflecting area in the reference image can be removed under any condition of the designated area of the preview image, thereby ensuring the effect of removing the light reflecting area of the reference image, ensuring the definition of the final picture obtained by shooting through the camera and optimizing the shooting effect.
Optionally, in this embodiment of the present invention, the image processing method further includes step (12):
Step (12), determining the image with the smallest area of the light reflection region among the N images as the reference image.
In this embodiment of the present invention, the step (12) may be implemented after the step 301 and before the step 302.
In this step, the image having the smallest area of the light reflection region among the N images may be determined by a mathematical function, and the determined image is used as the reference image. For example, referring to fig. 4, if the mathematical function determines that, among the 16 images acquired by the smartphone rear cameras 400, the third image has the smallest area of the light reflection region, the third image is used as the reference image, and the 15 images other than the third image are used as the reference images.
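A minimal sketch of step (12) is given below, approximating the area of the light reflection region of each image by its count of over-bright pixels; the brightness threshold and the helper name are assumptions of the sketch.

import numpy as np

def choose_reference_image(images, bright_threshold=230):
    # Approximate the area of the light reflection region by the number of over-bright pixels.
    def glare_area(img):
        gray = img.astype(np.float64).mean(axis=2)
        return int(np.count_nonzero(gray > bright_threshold))
    areas = [glare_area(img) for img in images]
    ref_index = int(np.argmin(areas))            # image with the smallest glare area
    other_indices = [i for i in range(len(images)) if i != ref_index]
    return ref_index, other_indices              # reference image and the N-1 reference images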
In the embodiment of the invention, using the image with the smallest area of the light reflection region among the N images acquired by the N cameras as the reference image reduces the amount of calculation needed to determine the light reflection region, speeds up the determination of the light reflection region and, in turn, the determination of the reference region corresponding to the light reflection region in the reference images, and thus speeds up the image processing of the light reflection region based on the image of the reference region.
It should be noted that, in the image processing method provided in the embodiment of the present invention, the execution subject may be an image processing apparatus, or a control module in the image processing apparatus for executing the loaded image processing method. In the embodiment of the present invention, an image processing apparatus executes an image processing method as an example, and the image processing apparatus provided in the embodiment of the present invention is described.
Referring to fig. 7, a block diagram of an image processing apparatus according to an embodiment of the present invention is shown, where the apparatus 700 includes:
the obtaining module 701 obtains N images collected by the N cameras.
A processing module 702, configured to perform image processing on a reflection area in a reference image based on images of reference areas in N-1 reference images; wherein the N images comprise the reference image and the N-1 reference images, and N is an integer greater than 1.
Optionally, the light reflecting region includes M light reflecting subregions, where M is a positive integer; the device further comprises a light reflection area determining module, wherein the light reflection area determining module is used for dividing the reference image into T reference subregions; determining a first number of pixel points of which the gray value of each reference sub-region in the T reference sub-regions is greater than a first threshold; determining M reflector sub-regions based on the first number of each reference sub-region; wherein T is an integer greater than 1.
Optionally, the light reflection region determining module is further specifically configured to:
for each reference subarea, determining the reference subarea as a light reflecting subarea under the condition that the first ratio is greater than the second threshold value; wherein the first ratio is a ratio of the first number of each reference sub-region to a total number of pixels of each reference sub-region.
Optionally, the light reflection area determining module is further specifically configured to:
determining the number of pixel points of which the gray value is smaller than a third threshold value in the adjacent reference subarea of each reference subarea in the T reference subareas to obtain a second number of the reference subareas; determining M reflector sub-regions from the reference sub-regions based on the first number and the second number of each reference sub-region.
Optionally, the light reflection region determining module is further specifically configured to:
for each reference sub-region, under the condition that the first ratio is larger than a second threshold value and the second number is equal to the total number of pixels of the adjacent reference sub-region, determining the reference sub-region as a light reflecting sub-region; wherein the first ratio is a ratio of the first number of each reference sub-region to a total number of pixels of each reference sub-region.
Optionally, the reference region comprises M reference sub-regions; the device further comprises a reference region determining module, wherein the reference region determining module is used for determining a sub-region, matched with the position of each light reflecting sub-region, in each reference image to obtain T alternative sub-regions, of each light reflecting sub-region in the light reflecting region; selecting the candidate subarea with the definition meeting a preset definition condition as the reference subarea according to the definition of each candidate subarea; wherein T is a positive integer.
Optionally, the processing module 702 is further specifically configured to:
for each light reflecting sub-region in the light reflecting region, replacing the pixel value of the pixel point in each light reflecting sub-region with a first target pixel value; and the first target pixel value is the pixel value of a pixel point in each reference subarea corresponding to the light reflecting subarea.
Optionally, the device further includes an adjusting module, where the adjusting module is configured to adjust a shooting angle of the target camera and then control the target camera to shoot the target image when there is no candidate sub-region whose definition meets a preset definition condition; according to the target image, performing image processing on the light reflection area; the target camera is a camera used for shooting the reference image.
Optionally, the adjusting module is further specifically configured to:
adjusting the shooting angle of the target camera, and detecting whether a preview image acquired by the target camera has an alternative subarea with definition meeting a preset definition condition in the adjustment process; when the situation that the definition of the designated area of the preview image acquired by the target camera meets a preset definition condition and the second ratio of the designated area of the preview image is smaller than or equal to the second threshold value is detected, stopping adjusting the shooting angle and controlling the target camera to shoot the target image; the designated area is an area matched with the position of the light reflecting area; the second ratio is a ratio of a third number to a total number of pixels in the designated area, and the third number is a number of pixels in the designated area having a gray value greater than a fourth threshold.
Optionally, the apparatus further includes a reference image determining module, configured to determine, as the reference image, an image with a minimum area of a light reflection area in the N images.
In summary, the image processing apparatus provided in the embodiment of the present invention obtains N images acquired by N cameras; based on the images of the reference areas in the N-1 reference images, carrying out image processing on the reflection areas in the reference images; wherein the N images comprise the reference image and the N-1 reference images, and N is an integer greater than 1. In the embodiment of the invention, the electronic equipment can automatically acquire 1 piece of reference image and N-1 pieces of reference image through N cameras without manual removal by a user, and remove the reflective area in the reference image based on the image of the reference area in the reference image, thereby simplifying the user operation, improving the processing efficiency, removing the reflective area of the camera preview image in the shooting process and optimizing the shooting effect.
Meanwhile, owing to the physical characteristics of light reflection, the positions at which reflections appear differ among the N images acquired by different cameras with overlapping fields of view. Therefore, in the embodiment of the invention, the reference image and the N-1 reference images are determined from the N images, and image processing is performed on the light reflection region based on the images of the reference regions in the reference images, which ensures the effect of removing the light reflection region.
The image processing apparatus in the embodiment of the present invention may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiment of the present invention is not particularly limited.
The image processing apparatus in the embodiment of the present invention may be an apparatus having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present invention are not limited in particular.
The image processing apparatus provided in the embodiment of the present invention can implement each process implemented in the method embodiment of fig. 3, and is not described herein again to avoid repetition.
Optionally, as shown in fig. 8, an electronic device 800 is further provided in an embodiment of the present invention, and includes at least two cameras 801, a memory 802, a processor 803, and a program or an instruction stored in the memory 802 and executable on the processor 803, where the program or the instruction, when executed by the processor 803, implements each process of the above-described image processing method embodiment, and can achieve the same technical effect, and details are not repeated here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present invention includes the mobile electronic device and the non-mobile electronic device described above.
Referring to fig. 9, a hardware structure diagram of an electronic device implementing various embodiments of the present application is shown.
The electronic device 900 includes, but is not limited to: a radio frequency unit 901, a network module 902, an image input unit 903, a sensor 904, a display unit 905, a user input unit 906, an interface unit 907, a memory 908, and a processor 909.
Those skilled in the art will appreciate that the electronic device 900 may further include a power source (e.g., a battery) for supplying power to various components, and the power source may be logically connected to the processor 909 through a power management system, so that functions of managing charging, discharging, and power consumption are implemented through the power management system. The electronic device structure shown in fig. 9 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is not repeated here.
The image input unit 903 is configured to acquire N images acquired by the N cameras.
A processor 909 for performing image processing on the glistening region in the base image based on the images of the reference regions in the N-1 reference images; wherein the N images comprise the reference image and the N-1 reference images, and N is an integer greater than 1.
In the embodiment of the invention, the electronic equipment can acquire N images acquired by N cameras; based on the images of the reference areas in the N-1 reference images, carrying out image processing on the reflection areas in the reference images; wherein the N images comprise the reference image and the N-1 reference images, and N is an integer greater than 1. In the embodiment of the invention, the electronic equipment can automatically acquire 1 piece of reference image and N-1 pieces of reference image through N cameras without manual removal by a user, and remove the reflective area in the reference image based on the image of the reference area in the reference image, thereby simplifying the user operation, improving the processing efficiency, removing the reflective area of the camera preview image in the shooting process and optimizing the shooting effect.
Meanwhile, owing to the physical characteristics of light reflection, the positions at which reflections appear differ among the N images acquired by different cameras with overlapping fields of view. Therefore, in the embodiment of the invention, the reference image and the N-1 reference images are determined from the N images, and image processing is performed on the light reflection region based on the images of the reference regions in the reference images, which ensures the effect of removing the light reflection region.
It should be understood that in the embodiment of the present application, the input Unit 903 may include a Graphics Processing Unit (GPU) 9031, and the Graphics processor 9031 processes the at least two first preview images acquired by the at least two cameras. The display unit 905 may include a display panel 9051, and the display panel 9051 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 906 includes a touch panel 9061 and other input devices 9062. A touch panel 9061, also referred to as a touch screen. The touch panel 9061 may include two parts, a touch detection device and a touch controller. Other input devices 9062 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. Memory 908 may be used to store software programs as well as various data, including but not limited to applications and operating systems. The processor 909 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 909.
An embodiment of the present invention further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, each process of the image processing method embodiment described above is implemented and the same technical effect can be achieved; to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present invention further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the image processing method embodiment described above and achieve the same technical effect; to avoid repetition, details are not repeated here.
It should be understood that the chips mentioned in the embodiments of the present invention may also be referred to as a system-on-chip, a system-on-chip or a system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of embodiments of the present invention is not limited to performing functions in the order illustrated or discussed, but may include performing functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (11)

1. An image processing method, characterized in that the method comprises:
acquiring N images acquired by N cameras;
dividing a reference image into T reference sub-regions, wherein T is an integer larger than 1;
determining a first number of pixel points of which the gray value of each reference sub-region in the T reference sub-regions is greater than a first threshold;
determining the number of pixel points of which the gray value is smaller than a third threshold value in the adjacent reference subarea of each reference subarea in the T reference subareas, and adding the pixel points to obtain a second number of the reference subareas;
determining a light reflection area from the reference sub-area based on the first number and the second number of each reference sub-area;
performing image processing on the light reflection area based on the images of the reference area in the N-1 reference images;
wherein the N images comprise the reference image and the N-1 reference images, and N is an integer greater than 1.
2. The method of claim 1, wherein after said determining a first number of pixels for which the grayscale value of each of the T reference sub-regions is greater than a first threshold, the method further comprises:
for each reference subregion, determining the reference subregion as a light reflecting subregion under the condition that the first ratio is greater than a second threshold value;
wherein the first ratio is a ratio of the first number of each reference sub-region to a total number of pixels of each reference sub-region.
3. The method of claim 1, wherein the retroreflective region includes M retroreflective subregions, M being a positive integer; the determining a light reflection area from the reference sub-area based on the first number and the second number of each reference sub-area comprises:
for each reference sub-region, under the condition that the first ratio is larger than a second threshold value and the second number is equal to the total number of pixels of the adjacent reference sub-region, determining the reference sub-region as a light reflecting sub-region; the total number of pixels is the sum of the number of pixels in all adjacent reference sub-regions of the reference sub-region;
wherein the first ratio is a ratio of the first number of each reference sub-region to a total number of pixels of each reference sub-region.
4. The method of any one of claims 1 to 3, wherein the reference region comprises M reference sub-regions; the method further comprises the following steps:
for each light reflection subarea in the light reflection area, determining a subarea matched with the position of each light reflection subarea in each reference image to obtain T alternative subareas;
selecting the candidate subarea with the definition meeting a preset definition condition as the reference subarea according to the definition of each candidate subarea;
wherein T is a positive integer.
5. The method of claim 4, further comprising:
under the condition that no alternative subarea with definition meeting the preset definition condition exists, adjusting the shooting angle of the target camera, and controlling the target camera to shoot a target image;
performing image processing on the light reflection region according to the target image;
the target camera is a camera used for shooting the reference image.
6. The method of claim 5, wherein the adjusting the shooting angle of the target camera and controlling the target camera to shoot the target image comprises:
adjusting the shooting angle of the target camera, and detecting whether a preview image acquired by the target camera has an alternative subarea with definition meeting a preset definition condition in the adjustment process;
when the situation that the definition of the designated area of the preview image acquired by the target camera meets a preset definition condition and a second ratio of the designated area of the preview image is smaller than or equal to a second threshold value is detected, stopping adjusting the shooting angle and controlling the target camera to shoot the target image;
the designated area is an area matched with the position of the light reflecting area; the second ratio is a ratio of a third number to a total number of pixels in the designated area, and the third number is a number of pixels in the designated area having a gray value greater than a fourth threshold.
7. The method according to claim 1, wherein the image processing of the light reflection area based on the image of the reference area in the N-1 reference images comprises:
for each light reflecting sub-region in the light reflecting region, replacing the pixel value of the pixel point in each light reflecting sub-region with a first target pixel value;
and the first target pixel value is the pixel value of a pixel point in each reference subarea corresponding to each light reflecting subarea.
8. The method of claim 1, further comprising:
and determining the image with the minimum area of the light reflecting area in the N images as the reference image.
9. An image processing apparatus, characterized in that the apparatus comprises:
an acquisition module, used for acquiring N images acquired by N cameras;
a light reflection region determining module, used for dividing a reference image into T reference sub-regions, wherein T is an integer larger than 1; determining a first number of pixel points of which the gray value of each reference sub-region in the T reference sub-regions is greater than a first threshold; determining the number of pixel points of which the gray value is smaller than a third threshold in the adjacent reference sub-regions of each reference sub-region in the T reference sub-regions, and adding them to obtain a second number of the reference sub-regions; and determining a light reflection region from the reference sub-regions based on the first number and the second number of each reference sub-region;
the processing module is used for carrying out image processing on the light reflecting area based on the images of the reference areas in the N-1 reference images; wherein the N images comprise the reference image and the N-1 reference images, and N is an integer greater than 1.
10. The apparatus of claim 9, wherein a spherical base is provided at the bottom of the camera head for moving the camera head.
11. An electronic device comprising at least two cameras, a processor, a memory, a program or instructions stored on the memory and executable on the processor, which program or instructions, when executed by the processor, implement the steps of the image processing method according to any one of claims 1 to 8.
CN202110114220.7A 2021-01-27 2021-01-27 Image processing method and device and electronic equipment Active CN112887614B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110114220.7A CN112887614B (en) 2021-01-27 2021-01-27 Image processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110114220.7A CN112887614B (en) 2021-01-27 2021-01-27 Image processing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN112887614A CN112887614A (en) 2021-06-01
CN112887614B true CN112887614B (en) 2022-05-17

Family

ID=76052844

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110114220.7A Active CN112887614B (en) 2021-01-27 2021-01-27 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112887614B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113486862B (en) * 2021-08-04 2024-03-22 河南华辰智控技术有限公司 Financial safety protection system based on biological recognition technology

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011211329A (en) * 2010-03-29 2011-10-20 Fujifilm Corp Imaging apparatus and control method thereof, image processing apparatus and control method thereof, and image processing program
CN109302568A (en) * 2017-07-25 2019-02-01 梅克朗有限两合公司 The indirect image system of vehicle
CN109618098A (en) * 2019-01-04 2019-04-12 Oppo广东移动通信有限公司 A kind of portrait face method of adjustment, device, storage medium and terminal
CN111510623A (en) * 2020-04-02 2020-08-07 维沃移动通信有限公司 Shooting method and electronic equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5210091B2 (en) * 2008-08-29 2013-06-12 キヤノン株式会社 Image processing apparatus, control method therefor, imaging apparatus, and program

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011211329A (en) * 2010-03-29 2011-10-20 Fujifilm Corp Imaging apparatus and control method thereof, image processing apparatus and control method thereof, and image processing program
CN109302568A (en) * 2017-07-25 2019-02-01 梅克朗有限两合公司 The indirect image system of vehicle
CN109618098A (en) * 2019-01-04 2019-04-12 Oppo广东移动通信有限公司 A kind of portrait face method of adjustment, device, storage medium and terminal
CN111510623A (en) * 2020-04-02 2020-08-07 维沃移动通信有限公司 Shooting method and electronic equipment

Also Published As

Publication number Publication date
CN112887614A (en) 2021-06-01

Similar Documents

Publication Publication Date Title
US10805543B2 (en) Display method, system and computer-readable recording medium thereof
US9628709B2 (en) Systems and methods for capturing images using a mobile device
CN111770273B (en) Image shooting method and device, electronic equipment and readable storage medium
CN112437232A (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN110796600A (en) Image super-resolution reconstruction method, image super-resolution reconstruction device and electronic equipment
CN113329172B (en) Shooting method and device and electronic equipment
CN112532891A (en) Photographing method and device
CN112887614B (en) Image processing method and device and electronic equipment
CN114390181A (en) Shooting method and device and electronic equipment
KR20130081439A (en) Apparatus and method for displaying camera view area in a portable terminal
CN112312035B (en) Exposure parameter adjusting method, exposure parameter adjusting device and electronic equipment
CN107395983B (en) Image processing method, mobile terminal and computer readable storage medium
CN113473008B (en) Shooting method and device
CN114143461B (en) Shooting method and device and electronic equipment
EP4294000A1 (en) Display control method and apparatus, and electronic device and medium
CN112653841B (en) Shooting method and device and electronic equipment
CN112383708B (en) Shooting method and device, electronic equipment and readable storage medium
CN112367464A (en) Image output method and device and electronic equipment
CN114241127A (en) Panoramic image generation method and device, electronic equipment and medium
CN113012085A (en) Image processing method and device
CN114143448B (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN115170383A (en) Image blurring method and device, storage medium and terminal equipment
CN114302057B (en) Image parameter determining method, device, electronic equipment and storage medium
CN114339050B (en) Display method and device and electronic equipment
CN113873160B (en) Image processing method, device, electronic equipment and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant