CN115086558A - Focusing method, image pickup apparatus, terminal apparatus, and storage medium - Google Patents


Info

Publication number: CN115086558A
Application number: CN202210669584.6A
Authority: CN (China)
Prior art keywords: image, focusing, pixel point, target, evaluation value
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN115086558B (en)
Inventor: 权威
Current and original assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd; published as CN115086558A; application granted and published as CN115086558B

Landscapes

  • Studio Devices (AREA)

Abstract

The application is applicable to the technical field of imaging and provides a focusing method. In the method, images of a picture to be processed at different image distances are acquired, so that images corresponding one-to-one to the different image distances can be generated when the picture is captured, yielding rich light field information of the picture. An evaluation value is determined for the pixel point at each pixel position in every image, reducing the unit of identification to the single pixel point, quantifying how each pixel position of the picture behaves across the different images, and improving the depth analysis capability over images at different image distances. A focused image is output according to the target image corresponding to the target pixel point in the focusing area; the focused image, obtained at the image distance corresponding to the target image, improves the sharpness of the target pixel point, while the other pixel points produce a natural blurring effect when imaged at that image distance. Blurring distortion caused by insufficient light information is thereby avoided, and the blurring effect is improved while imaging sharpness is improved.

Description

Focusing method, image pickup apparatus, terminal apparatus, and storage medium
Technical Field
The present application relates to the field of imaging technologies, and in particular, to a focusing method, an imaging device, a terminal device, and a storage medium.
Background
With the rapid development of digital imaging and multimedia technologies, users' requirements for the shooting capability of electronic devices such as mobile phones, tablet computers, and wearable devices keep rising. Constrained by their physical size, such devices can only use cameras and light sensors of limited dimensions. Compared with photographic equipment of strong light-sensing capability, such as half-frame or full-frame cameras, the light information captured by an electronic device when shooting is limited, so distortion easily occurs when a blurring (bokeh) effect is generated, and the imaging effect is unnatural.
Disclosure of Invention
In view of this, embodiments of the present application provide a focusing method, an image pickup apparatus, a terminal apparatus, and a storage medium, to solve the problem that existing electronic devices, being constrained in size and therefore in the dimensions of the cameras and light sensors they can use, capture limited light information when shooting and easily produce distortion when generating a blurring effect, resulting in an unnatural imaging effect.
A first aspect of an embodiment of the present application provides a focusing method, including:
acquiring images of a picture to be processed at different image distances;
determining the evaluation value of the pixel point at the same pixel position in each image;
outputting a focused image according to a target image corresponding to a target pixel point in a focusing area;
wherein the maximum evaluation value of the target pixel point is greater than or equal to the maximum evaluation values of the other pixel points in the focusing area, and the evaluation value of the target pixel point is largest in the target image; the image distance of the focused image is the same as that of the target image.
In the focusing method provided by the first aspect of the embodiments of the present application, images of a to-be-processed picture at different image distances are acquired, so that images corresponding one-to-one to the different image distances can be generated when the picture is captured and rich light field information of the picture is obtained; the evaluation value of the pixel point at each pixel position in every image is determined, reducing the unit of identification to the single pixel point, quantifying how each pixel position of the picture behaves across the different images, and improving the depth analysis capability over images at different image distances; and a focused image is output according to the target image corresponding to the target pixel point in the focusing area, so that the focused image, obtained at the image distance corresponding to the target image, improves the sharpness of the target pixel point while the other pixel points produce a natural blurring effect when imaged at that image distance, avoiding blurring distortion caused by insufficient light information and improving the blurring effect while improving imaging sharpness.
A second aspect of the embodiments of the present application provides an image capturing apparatus, including a processor, a memory, a computer program stored in the memory and executable on the processor, and a light field camera, where the processor implements the steps of the focusing method provided by the first aspect of the embodiments of the present application when executing the computer program;
the light field camera is used for capturing a picture to be processed and recording the propagation path of light rays in any direction when the picture to be processed is captured through the light rays in any direction.
A third aspect of the embodiments of the present application provides a terminal apparatus including the image pickup apparatus provided by the second aspect of the embodiments of the present application.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the steps of the focusing method provided by the first aspect of the embodiments of the present application.
It is understood that the beneficial effects of the second to fourth aspects can be seen from the description of the first aspect, and are not described herein again.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a first flowchart of a focusing method according to an embodiment of the present disclosure;
fig. 2 is a scene schematic diagram of an object distance and an image distance when an image is shot by the image pickup apparatus provided in the embodiment of the present application;
fig. 3 is a schematic view of a scene in which the image capturing apparatus provided in the embodiment of the present application collects different light rays of an imaging plane through a microlens array;
FIG. 4 is a first schematic diagram of a to-be-processed picture according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram of a maximum-evaluation-value heat map corresponding to a to-be-processed picture provided by an embodiment of the present application;
FIG. 6 is a second flowchart of a focusing method according to an embodiment of the present disclosure;
fig. 7 is a first schematic view illustrating target object recognition performed on a to-be-processed picture according to an embodiment of the present application;
fig. 8 is a second schematic view illustrating target object recognition performed on a to-be-processed picture according to an embodiment of the present application;
fig. 9 is a second schematic diagram of a to-be-processed picture according to an embodiment of the present application;
fig. 10 is a schematic diagram illustrating saliency recognition of a to-be-processed picture according to an embodiment of the present application;
FIG. 11 is a third flowchart illustrating a focusing method according to an embodiment of the present disclosure;
fig. 12 is a schematic diagram illustrating that a recommended focusing area 210 is displayed on a terminal device 200 according to an embodiment of the present application;
FIG. 13 is a fourth flowchart illustrating a focusing method according to an embodiment of the present application;
FIG. 14 is a fifth flowchart illustrating a focusing method according to an embodiment of the present disclosure;
fig. 15 is a schematic structural diagram of an image pickup apparatus provided in an embodiment of the present application;
fig. 16 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first", "second", "third", and the like are used only to distinguish between descriptions and are not to be understood as indicating or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
In the present application, electronic devices are constrained by their physical size, so the dimensions of the cameras and light sensors they can use are limited. Compared with photographic equipment of strong light-sensing capability, such as half-frame or full-frame cameras, the light information captured by an electronic device when shooting is limited, and distortion easily occurs when a blurring effect is generated, making the imaging effect unnatural.
In order to solve the above technical problem, embodiments of the present application provide a focusing method. By acquiring images of a to-be-processed picture at different image distances, images corresponding one-to-one to the different image distances can be generated when the picture is captured, so that rich light field information of the picture is obtained. By determining the evaluation value of the pixel point at each pixel position in every image, the unit of identification is reduced to the single pixel point, how each pixel position of the picture behaves across the different images is quantified, and the depth analysis capability over images at different image distances is improved. By outputting a focused image according to the target image corresponding to the target pixel point in the focusing area, the focused image, obtained at the image distance corresponding to the target image, improves the sharpness of the target pixel point, while the other pixel points produce a natural blurring effect when imaged at that image distance, avoiding blurring distortion caused by insufficient light information and improving the blurring effect while improving imaging sharpness.
The focusing method provided by the embodiments of the present application can be applied to any image pickup apparatus, or to a terminal apparatus carrying an image pickup apparatus. The image pickup apparatus may be a camera or a video camera, and the camera may specifically be a Digital Single Lens Reflex (DSLR) camera, a mirrorless camera, or the like; the terminal apparatus may be a mobile phone, a tablet computer, a wearable device, an in-vehicle device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), and the like. The embodiments of the present application do not limit the specific types of the image pickup apparatus and the terminal apparatus.
As shown in fig. 1, the focusing method provided in the embodiment of the present application includes the following steps S101 to S103:
step S101, images of the picture to be processed under different image distances are obtained.
In application, the picture to be processed may be a picture captured by an image pickup apparatus. When the apparatus captures the picture, light rays from the picture are collected by a Main Lens and converged onto the Image Sensor of the apparatus, producing what is defined as the image of the picture to be processed mapped onto the image sensor. The distance between the focused object or focusing plane in the picture and the main lens is the Object Distance; the distance between the image of the picture and the main lens is the Image Distance. Generally speaking, the object distance and the image distance have a conjugate relationship: when the apparatus captures a picture, the farther the object distance, the closer the image distance, and the closer the object distance, the farther the image distance. The specific content and frame size of the picture to be processed are determined by the picture actually captured by the image pickup apparatus.
In application, when light rays from the picture to be processed are collected by the main lens, rays with different incident angles are obtained, so rays converging onto different imaging surfaces, each with its own image distance, can be obtained. The distance between the main lens and the image sensor may be defined as the image distance of the actual imaging surface, while any distance from the main lens not equal to the lens-to-sensor distance is defined as the image distance of a virtual imaging surface. Performing image processing according to the image distance of the actual imaging surface yields the actual image of the picture at that image distance, and performing image processing according to the image distance of a virtual imaging surface yields a virtual image of the picture at that image distance.
Specifically, the light converging onto different imaging surfaces can be obtained by changing the distance between the image sensor and the main lens, or a Micro Lens array can be added between the main lens and the image sensor. The micro lens array comprises a plurality of micro lens units, and the light collected by the main lens is transmitted to the micro lens array; each micro lens unit maps the light it collects onto the image sensor to form a micro lens sub-image. In this way, the light converging onto the actual imaging surface and the virtual imaging surfaces can be mapped onto the image sensor through the micro lens array to form a plurality of micro lens sub-images, and the actual image and the virtual images, namely the images of the picture to be processed at different image distances, can then be obtained through image processing.
In one embodiment, step S101 includes:
refocusing the picture to be processed under different image distances to obtain the images of the picture to be processed under different image distances.
In application, after the light rays converging onto the actual imaging surface and the virtual imaging surfaces are mapped onto the image sensor, refocusing (Refocus) processing can be performed on the picture to be processed according to the image distance of each virtual imaging surface and the actual image of the picture, so that the actual image is refocused through a refocusing algorithm at the image distance of each virtual imaging surface, yielding virtual images of the picture at different image distances. The actual image of the to-be-processed picture is the image obtained by focusing on the focused object in the picture and mapping onto the image sensor (the actual imaging plane). The specific working principle of the refocusing algorithm is explained below.
Fig. 2 exemplarily shows the relationship between the object distance and the image distance when the image pickup apparatus takes a picture, where 10 is the picture area of the picture to be processed, 20 is the main lens, 30 is the image sensor, F is the object distance, and f is the image distance.
Fig. 3 exemplarily shows the image pickup apparatus collecting light rays of different imaging planes through a micro lens array, where 40 is the micro lens array, F1 is the 1st object distance, f1 is the 1st image distance corresponding to the 1st object distance, F2 is the 2nd object distance, and f2 is the 2nd image distance corresponding to the 2nd object distance.
In one embodiment, step S101 includes:
acquiring the image distance of the ith virtual imaging surface according to the ith refocusing parameter and the image distance of the actual imaging surface;
acquiring an ith image of a picture to be processed under the image distance of the ith virtual imaging surface according to the light information of the actual imaging surface, the light information of the ith virtual imaging surface, the ith refocusing parameter and the image distance of the actual imaging surface;
wherein i = 1, 2, …, n, and n is an integer greater than or equal to 1.
In application, a plurality of refocusing parameters may be preset, and the image distance of each virtual imaging surface may be obtained according to the corresponding refocusing parameter and the image distance of the actual imaging surface. It should be noted that the specific values of the refocusing parameters may be determined according to the image distances of the virtual imaging surfaces supported by the image pickup apparatus. The calculation formula of the image distance of the virtual imaging surface may be:
ImageDistance_i = ImageDistance × α_i

where ImageDistance_i denotes the image distance of the ith virtual imaging surface, ImageDistance denotes the image distance of the actual imaging surface, and α_i denotes the ith refocusing parameter.
For example, assuming the image distance of the actual imaging plane of the image pickup apparatus is 50 mm and the image distances of the supported virtual imaging planes are 20 mm, 30 mm, 40 mm, 60 mm, and 70 mm, the refocusing parameters may be 2/5, 3/5, 4/5, 6/5, and 7/5, respectively. Alternatively, if the image distances of the supported virtual imaging planes range from 20 mm (inclusive) to 50 mm (exclusive) and from 50 mm (exclusive) to 70 mm (inclusive), then the refocusing parameter ranges from 2/5 (inclusive) to 1 (exclusive) and from 1 (exclusive) to 7/5 (inclusive). The embodiments of the present application do not limit the specific value or value range of the refocusing parameter.
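For illustration only, the relation above can be evaluated directly. The following minimal Python sketch reproduces the 50 mm example; the variable names are illustrative and not part of the patent:

```python
# Minimal sketch of ImageDistance_i = ImageDistance * alpha_i;
# the values mirror the 50 mm example above and are illustrative only.
actual_image_distance = 50.0                # image distance of the actual imaging surface, in mm
refocus_params = [2/5, 3/5, 4/5, 6/5, 7/5]  # alpha_i for each supported virtual imaging surface

virtual_image_distances = [actual_image_distance * a for a in refocus_params]
print(virtual_image_distances)              # e.g. [20.0, 30.0, 40.0, 60.0, 70.0]
```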
In application, the light ray information of the actual imaging surface, the light ray information of the ith virtual imaging surface, the ith refocusing parameter, and the image distance of the actual imaging surface can be input into the refocusing algorithm to obtain the ith image of the picture to be processed at the image distance of the ith virtual imaging surface. The light ray information of the actual imaging surface may include the row and column numbers of the element pixels of the actual imaging surface, and the light ray information of the virtual imaging surface may include the row and column numbers of the macro pixels of the virtual imaging surface. The element pixel row/column numbers indicate which photosensitive units (arranged on the image sensor) covered by the corresponding micro lens unit collect the light during virtual imaging, and the macro pixel row/column numbers indicate the rows/columns of the micro lens array. The specific micro lens unit that collects the light when generating the ith image at the image distance of the ith virtual imaging surface can be determined from the macro pixel row and column numbers, and the specific brightness of the light can be determined from the element pixel row/column numbers.
In application, the formula of the refocusing algorithm may specifically take the standard light-field refocusing form:

E_i(s, t) = (1 / (α_i² · F²)) · Σ_u Σ_v L(u, v, u·(1 - 1/α_i) + s/α_i, v·(1 - 1/α_i) + t/α_i)

where E_i denotes the ith image of the picture to be processed at the image distance of the ith virtual imaging surface, L denotes the light field recorded on the actual imaging surface, F denotes the image distance of the actual imaging surface, u denotes the element pixel row number and v the element pixel column number of the actual imaging surface, and s denotes the macro pixel row number and t the macro pixel column number of the virtual imaging surface.
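For a concrete picture of how such a formula can be realized, the following Python sketch refocuses a stack of sub-aperture images using the common pure shift-and-add approximation (each view is translated in proportion to its angular offset and the views are averaged, ignoring the global rescaling by 1/α). The sub-aperture decomposition, the NumPy/SciPy usage, and the linear interpolation are assumptions for illustration, not the patent's literal procedure:

```python
import numpy as np
from scipy.ndimage import shift

def refocus(sub_apertures, alpha):
    """Pure shift-and-add refocusing sketch.

    sub_apertures: array of shape (U, V, H, W), one sub-aperture image per
                   (u, v) position under the micro lens array.
    alpha:         refocusing parameter (virtual / actual image distance).
    Returns the refocused image E_i of shape (H, W).
    """
    U, V, H, W = sub_apertures.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Translate each view in proportion to its angular offset
            # before averaging; order=1 selects linear interpolation.
            du = (u - cu) * (1.0 - 1.0 / alpha)
            dv = (v - cv) * (1.0 - 1.0 / alpha)
            out += shift(sub_apertures[u, v], (du, dv), order=1)
    return out / (U * V)
```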
Step S102, determining the evaluation value of the pixel point of the same pixel position in each image.
In application, the image of the picture to be processed includes a plurality of pixel points, whose specific number is determined by the resolution of the image (for example, an image with a resolution of 1920 × 1080 has 2073600 pixel points), and the pixel position of a pixel point may be its pixel coordinates in the image. The evaluation value of a pixel point in an image reflects the pixel quality of that pixel point in that image: the higher the evaluation value, the higher the pixel quality of the corresponding pixel point.
In application, after the images of the picture to be processed at different image distances are obtained, the evaluation value of the pixel point at each pixel position in every image can be determined. Specifically, the evaluation values of all pixel points in each image can be calculated separately and then collected per pixel position across the images; alternatively, the evaluation values of the pixel points at the same pixel position can be calculated across all the images at once. The embodiments of the present application do not limit the specific calculation flow of the evaluation values of the pixel points.
In one embodiment, step S102 is followed by:
acquiring the maximum evaluation value of each pixel point, and recording the image in which each pixel point attains its maximum evaluation value.
In application, after the multiple evaluation values of any pixel point across all the images are obtained, the largest of them can be taken as the maximum evaluation value of that pixel point, and the image in which the pixel point attains its maximum evaluation value is recorded, so as to establish a correspondence table between each pixel point and an image. The image in which each pixel point has the highest pixel quality, and hence the image distance at which it has the highest pixel quality, can then be determined quickly by table lookup. Before the correspondence table is established, the images may be numbered, specifically sorted and numbered by image distance. For example, if the picture to be processed has one-to-one corresponding images at image distances of 20 mm, 30 mm, 40 mm, and 50 mm, the image with an image distance of 20 mm may be called the 1st image, the image with an image distance of 30 mm the 2nd image, the image with an image distance of 40 mm the 3rd image, and the image with an image distance of 50 mm the 4th image.
In application, a maximum-evaluation-value heat map can be generated from the maximum evaluation value and pixel position of each pixel point, representing the magnitude of the maximum evaluation value at each pixel position by color, so that the maximum evaluation values of all pixel points can be displayed intuitively. After the maximum-evaluation-value heat map is obtained, smoothing may be performed with a Filter to smooth the color changes of the heat map and bring out peaks; the filter may be a box filter.
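A hedged sketch of this bookkeeping follows: stacking the per-image evaluation maps, taking the per-pixel maximum, recording the image index as the pixel-to-image correspondence table, and box-filtering the resulting heat map. NumPy and OpenCV are assumed, and the array shapes, placeholder data, and kernel size are illustrative:

```python
import numpy as np
import cv2

# Evaluation values per image, shape (n_images, H, W); assumed to have
# been computed beforehand as in step S102 (placeholder data here).
score_stack = np.random.rand(5, 1080, 1920).astype(np.float32)

max_eval = score_stack.max(axis=0)        # maximum evaluation value per pixel
best_image = score_stack.argmax(axis=0)   # correspondence table: pixel -> image number

# Smooth the maximum-evaluation-value heat map with a box filter so that
# peaks emerge, as described above; the 15x15 kernel is illustrative.
heatmap = cv2.boxFilter(max_eval, ddepth=-1, ksize=(15, 15))
```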
Fig. 4 and fig. 5 exemplarily show an image of a picture to be processed at one image distance and the corresponding maximum-evaluation-value heat map. It should be noted that the heat map of fig. 5 is merely exemplary and has been converted to gray scale; the embodiments of the present application do not limit the correspondence between colors and maximum evaluation values.
Step S103, outputting a focused image according to a target image corresponding to a target pixel point in a focusing area;
wherein the maximum evaluation value of the target pixel point is greater than or equal to the maximum evaluation values of the other pixel points in the focusing area, and the evaluation value of the target pixel point is largest in the target image; the image distance of the focused image is the same as that of the target image.
In application, the focusing area may be the complete picture to be processed, or a partial area of the picture. After the focusing area of the picture is focused, the target pixel point in the focusing area can be obtained; the maximum evaluation value of the target pixel point must be greater than or equal to the maximum evaluation values of the other pixel points in the focusing area. When the focusing area contains a plurality of target pixel points with the same maximum evaluation value, the target image corresponding to any one of them can be selected for single-point focusing to output the focused image, or the target images corresponding to any number of them can be selected for multi-point focusing to output the focused image.
In application, the target image is an image corresponding to the target pixel point when the maximum evaluation value is obtained through calculation, after the target pixel point is determined, the target image corresponding to the target pixel point can be determined through the corresponding relation table of each pixel point and the image, and a focusing image can be output according to the target image.
In application, the image distance of the focused image is the same as that of the target image, so the target pixel point in the focused image reaches its sharpest state, while the other pixel points produce a blurring effect according to the image distance of the target image.
In one embodiment, step S103 includes:
focusing the picture to be processed according to the image distance of the target image corresponding to the target pixel point in the focusing area, and outputting the focused image;
or, acquiring the target image recorded for the target pixel point in the focusing area when its maximum evaluation value was obtained, and outputting the target image as the focused image.
In application, two output modes for the focused image may be included. In the first output mode, the image distance of the target image is read, and the picture to be processed is focused according to that image distance to output the focused image. In the second output mode, the target image is read from the memory and output directly as the focused image; it should be noted that, when the second output mode is used, the image corresponding to each pixel point's maximum evaluation value is stored in the memory. When the second output mode is used, if the focusing area contains one target pixel point, the target image corresponding to that target pixel point is output directly as the focused image; if the focusing area contains a plurality of target pixel points, the target images corresponding one-to-one to those target pixel points can be fused through image processing into a multi-point focused image, which is output as the focused image.
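A minimal sketch of the second output mode follows, assuming the focal stack and the pixel-to-image correspondence table from the earlier sketch are already in memory; the per-pixel mean used for multi-point fusion is an assumption, since the patent leaves the fusion method open:

```python
import numpy as np

def output_focused_image(images, best_image, max_eval, focus_mask):
    """Second output mode, sketched: pick the target image(s) recorded for
    the target pixel point(s) inside the focusing area.

    images:      focal stack, shape (n, H, W) or (n, H, W, 3)
    best_image:  per-pixel index of the image attaining the maximum evaluation
    max_eval:    per-pixel maximum evaluation value
    focus_mask:  boolean mask of the focusing area, shape (H, W)
    """
    region = np.where(focus_mask, max_eval, -np.inf)
    targets = np.argwhere(region == region.max())    # target pixel point(s)
    indices = {int(best_image[y, x]) for y, x in targets}
    if len(indices) == 1:                            # single-point focusing
        return images[indices.pop()]
    # Multi-point focusing: fuse the target images (simple mean here).
    return np.mean([images[i] for i in indices], axis=0)
```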
In application, by acquiring images of the to-be-processed picture at different image distances, images corresponding one-to-one to the different image distances, comprising the actual image mapped onto the image sensor and a plurality of virtual images at different image distances, can be generated when the picture is captured, so that rich Light Field information of the picture is obtained and the light-field acquisition capability of a single shot is improved. By determining the evaluation value of the pixel point at each pixel position in every image, the unit of identification is reduced to the single pixel point, how each pixel position of the picture behaves across the different images is quantified, and the depth analysis capability over images at different image distances is improved. By outputting the focused image according to the target image corresponding to the target pixel point in the focusing area, where the maximum evaluation value of the target pixel point is greater than or equal to those of the other pixel points in the focusing area, the focused image obtained at the image distance corresponding to the target image improves the sharpness of the target pixel point, while the other pixel points produce a natural blurring effect when imaged at that image distance, avoiding blurring distortion caused by insufficient light information and improving the blurring effect while improving imaging sharpness.
As shown in fig. 6, in an embodiment, based on the embodiment corresponding to fig. 1, the method includes the following steps S601 to S606:
step S601, acquiring images of the to-be-processed image at different image distances.
In application, the focusing method provided in step S601 is the same as the focusing method provided in step S101, and is not described herein again.
Step S602, performing target object identification on the image, and determining a first evaluation value of each pixel in the image based on a target object identification result.
In application, the image can be processed by a target object recognition algorithm, and the first evaluation value of each pixel point in the image is determined based on the target object recognition result. Specifically, when the image is processed by the target object recognition algorithm, each target object in the image can first be recognized, a Mask is generated around each target object, the pixel points corresponding to the target object are identified within each mask, and the specific position of the target object in the image is determined from the pixel positions of those pixel points. Further, the target object recognition algorithm may also identify the type of the target object, and a correspondence between target object types and first evaluation values may be preset. For example, the types may include person, pet, automobile, mobile phone, computer, and the like; the first evaluation value of a person may be 20 points, that of a pet 10 points, that of an automobile 8 points, and those of a mobile phone and a computer 5 points. The embodiments of the present application do not limit the specific types of target objects or the correspondence between target object types and first evaluation values.
In application, the target object recognition algorithm may be built on one or more different types of network structures such as a Convolutional Neural Network (CNN), a Region-based Convolutional Neural Network (R-CNN), a Fully Convolutional Network (FCN), a Region-based Fully Convolutional Network (R-FCN), a Feature Pyramid Network (FPN), and the like; the embodiments of the present application do not limit the specific network structure of the target object recognition algorithm.
Fig. 7 and 8 are schematic views exemplarily showing object recognition on an image.
Step S603, performing sharpness recognition on the image, and determining a second evaluation value of each pixel in the image based on a sharpness recognition result.
In application, the image can be processed by a sharpness recognition algorithm, and the second evaluation value of each pixel point in the image is determined based on the sharpness recognition result. After the image has been processed by the target object recognition algorithm and the specific positions of the target objects obtained from the target object recognition result, sharpness recognition may be performed on those positions by the sharpness recognition algorithm, outputting the second evaluation value of each pixel point within each target object; alternatively, sharpness recognition may be performed directly on the complete image, outputting the second evaluation values of all pixel points in the image.
In application, the setting and selection of the network structure of the sharpness recognition algorithm are consistent with those of the target object recognition algorithm, and are not repeated here. Specifically, the sharpness recognition algorithm can be built with a Gaussian Pyramid, whose working principle is as follows: first, the image is converted into a gray image and decomposed downward or upward through the Gaussian pyramid to obtain several gray images of reduced or increased resolution; transverse and longitudinal edge detection is then performed on each gray image, specifically with a Sobel operator, to obtain a transverse gradient and a longitudinal gradient respectively; a gradient pyramid is obtained from the transverse and longitudinal gradients of each gray image; and when the Gaussian pyramid was decomposed downward, the gradient pyramid is reconstructed in order of resolution from small to large, while when it was decomposed upward, the gradient pyramid is reconstructed in order of resolution from large to small, until the resolution of the output equals that of the image, yielding the sharpness recognition result of the image.
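A minimal sketch of a pyramid-based sharpness map in this spirit follows, using OpenCV; the number of levels, the uniform per-level weighting, and the reconstruction by simple upsampling and accumulation are assumptions for illustration:

```python
import cv2
import numpy as np

def sharpness_map(image, levels=3):
    """Sketch of pyramid-based sharpness recognition: grayscale conversion,
    Gaussian-pyramid decomposition, Sobel edge detection per level, and
    accumulation back at full resolution. image is a BGR uint8 array."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY).astype(np.float32)
    acc = np.zeros_like(gray)
    level = gray
    for _ in range(levels):
        gx = cv2.Sobel(level, cv2.CV_32F, 1, 0)   # transverse gradient
        gy = cv2.Sobel(level, cv2.CV_32F, 0, 1)   # longitudinal gradient
        grad = cv2.magnitude(gx, gy)
        # Upsample the gradient map to the original resolution and
        # accumulate, approximating the gradient-pyramid reconstruction.
        acc += cv2.resize(grad, (gray.shape[1], gray.shape[0]))
        level = cv2.pyrDown(level)                # next (downward) pyramid level
    return acc / levels
```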
Step S604, performing saliency recognition on the image, and determining a third evaluation value of each pixel in the image based on a saliency recognition result.
In application, the image can be processed by a saliency recognition algorithm, and the third evaluation value of each pixel point in the image is determined based on the saliency recognition result. Specifically, a predetermined saliency recognition rule may be set so that the saliency recognition algorithm outputs the corresponding third evaluation value when a predetermined object is recognized, or the objects attracting high attention in the image may be analyzed based on the default recognition rule of the saliency recognition algorithm.
In application, the setting and selection of the network structure of the saliency recognition algorithm are consistent with those of the sharpness recognition algorithm, and are not repeated here.
Fig. 9 and 10 are schematic diagrams illustrating saliency recognition of an image, wherein the higher the brightness of a pixel position in fig. 10, the higher the corresponding saliency is.
Step S605 determines the evaluation value of the pixel point at the same pixel position in each image according to the first evaluation value, the second evaluation value and the third evaluation value of the pixel point at the same pixel position in each image.
In application, after the first evaluation value, the second evaluation value, and the third evaluation value of the pixel point at the same pixel position in each image are obtained, the evaluation value of the pixel point at that pixel position in each image is determined according to the calculation formula:

Score_{x,y} = δ_1 · Type_{x,y} + δ_2 · Clarity_{x,y} + δ_3 · Significance_{x,y}

where x denotes the pixel abscissa of the pixel point, y denotes the pixel ordinate of the pixel point, Score_{x,y} denotes the evaluation value of the pixel point, Type_{x,y} denotes the first evaluation value of the pixel point, Clarity_{x,y} denotes the second evaluation value of the pixel point, Significance_{x,y} denotes the third evaluation value of the pixel point, δ_1 is the first coefficient, δ_2 is the second coefficient, and δ_3 is the third coefficient.
The first coefficient, the second coefficient, and the third coefficient may be set according to actual evaluation requirements, for example, the first coefficient, the second coefficient, and the third coefficient may be 1, 0.8, and 0.4, respectively.
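As a direct illustration of this weighted combination, a short sketch follows; the map shapes and placeholder values are illustrative, and the coefficients are taken from the example above:

```python
import numpy as np

H, W = 1080, 1920
# Per-pixel evaluation maps for one image, assumed to have been produced
# by the three recognition steps above (placeholder zeros here).
type_score = np.zeros((H, W))          # Type_{x,y}, from target object recognition
clarity_score = np.zeros((H, W))       # Clarity_{x,y}, from sharpness recognition
significance_score = np.zeros((H, W))  # Significance_{x,y}, from saliency recognition

d1, d2, d3 = 1.0, 0.8, 0.4             # example first, second, and third coefficients

# Score_{x,y} = d1*Type_{x,y} + d2*Clarity_{x,y} + d3*Significance_{x,y}
score = d1 * type_score + d2 * clarity_score + d3 * significance_score
```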
In application, when the focusing area contains a plurality of target pixel points with the same maximum evaluation value, the target image corresponding to the target pixel point with the largest first, second, or third evaluation value can be selected for single-point focusing, and the focused image is output.
It should be noted that evaluating the quality of the pixel points according to the type, sharpness, and saliency of the target object is merely exemplary; the evaluation conditions may be increased, decreased, or changed according to actual needs, and may further include, for example, vividness of color or light intensity.
Step S606, outputting a focused image according to the target image corresponding to the target pixel point in the focusing area.
In application, the focusing method provided in step S606 is the same as the focusing method provided in step S103, and is not described herein again.
In application, by evaluating the quality of each pixel point according to the type, sharpness, and saliency of the target object, the quality of each pixel point can be effectively quantified, further improving the depth analysis capability over the picture to be processed.
As shown in fig. 11, in an embodiment, based on the embodiment corresponding to fig. 6, the method includes the following steps S1101 to S1108:
step S1101, acquiring images of a picture to be processed under different image distances;
step S1102, identifying a target object of the image, and determining a first evaluation value of each pixel point in the image based on a target object identification result;
step S1103, performing sharpness recognition on the image, and determining a second evaluation value of each pixel point in the image based on a sharpness recognition result;
step S1104, identifying the saliency of the image, and determining a third evaluation value of each pixel point in the image based on the result of the saliency identification;
step S1105 determines the evaluation value of the pixel point at the same pixel position in each image according to the first evaluation value, the second evaluation value, and the third evaluation value of the pixel point at the same pixel position in each image.
In application, the focusing method provided in steps S1101 to S1105 is the same as the focusing method provided in steps S601 to S605, and is not described herein again.
Step S1106, when the device is in an automatic focusing mode, taking a complete picture to be processed as a focusing area;
step S1107, when the device is in the manual focusing mode, at least one pixel point selected according to the manual focusing instruction is used as a focusing area.
In application, before the focused image is output according to the target image corresponding to the target pixel point in the focusing area, the focusing mode can be judged to determine the selection range of the focusing area. The focusing mode may include an automatic focusing mode and a manual focusing mode: in the automatic focusing mode, the complete picture to be processed may be taken as the focusing area, and in the manual focusing mode, at least one pixel point selected according to the manual focusing instruction may be taken as the focusing area.
In one embodiment, step S1107 includes:
when in the manual focusing mode, acquiring a preset number of recommended pixel points according to the maximum evaluation value of each pixel point, wherein the maximum evaluation values of the recommended pixel points are greater than or equal to those of the other pixel points in the picture to be processed;
generating and displaying a recommended focusing area according to the pixel positions of the recommended pixel points;
receiving a manual focusing instruction, and taking at least one pixel point selected according to the manual focusing instruction as the focusing area.
In application, when in the manual focusing mode, a number of recommended pixel points with the largest maximum evaluation values can be obtained; the preset number of recommended pixel points can be set according to actual needs. A recommended focusing area is generated and displayed according to the pixel positions of the recommended pixel points, so as to offer the user an area with a better focusing effect to choose from. When a manual focusing instruction is received, at least one pixel point selected according to the instruction is taken as the focusing area, which may or may not be the recommended focusing area, improving the flexibility of manual focusing.
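A small sketch of selecting the recommended pixel points follows; the top-k selection via np.argpartition and the preset number k are illustrative assumptions:

```python
import numpy as np

def recommend_focus_points(max_eval, k=5):
    """Sketch: pick the k recommended pixel points whose maximum evaluation
    values are greater than or equal to those of all other pixel points in
    the picture; k is the preset number and is illustrative."""
    flat = max_eval.ravel()
    top = np.argpartition(flat, -k)[-k:]        # indices of the k largest values
    ys, xs = np.unravel_index(top, max_eval.shape)
    return list(zip(ys.tolist(), xs.tolist()))  # pixel positions to display
```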
Fig. 12 exemplarily shows a schematic diagram of displaying the recommended focusing area 210 on the terminal device 200.
Step S1108, outputting a focused image according to the target image corresponding to the target pixel point in the focusing area.
In application, the focusing method provided in step S1108 is the same as the focusing method provided in step S606, and is not described herein again.
In application, the focusing area can be flexibly adjusted according to the selected focusing mode, the complete picture to be processed can be used as the focusing area in the automatic focusing mode, the recommended focusing area with better focusing effect can be provided in the manual focusing mode, the use experience of manual focusing is improved, and the flexibility of focusing is improved.
As shown in fig. 13, in an embodiment, based on the embodiment corresponding to fig. 11, the method includes the following steps S1301 to S1310:
step S1301, obtaining images of the picture to be processed at different image distances;
step S1302, identifying a target object of the image, and determining a first evaluation value of each pixel point in the image based on a target object identification result;
step S1303, performing sharpness recognition on the image, and determining a second evaluation value of each pixel point in the image based on a sharpness recognition result;
step S1304, performing saliency recognition on the image, and determining a third evaluation value of each pixel point in the image based on a saliency recognition result;
step S1305, determining an evaluation value of a pixel point at the same pixel position in each image according to a first evaluation value, a second evaluation value and a third evaluation value of the pixel point at the same pixel position in each image;
step S1306, when in the automatic focusing mode, taking the complete picture to be processed as the focusing area;
step S1307, when in the manual focusing mode, taking at least one pixel point selected according to the manual focusing instruction as the focusing area.
In application, the focusing method provided in steps S1301 to S1307 is the same as the focusing method provided in steps S1101 to S1107 described above, and is not described herein again.
Step S1308, obtaining the image distance of the current display image;
step S1309, determining the image distance of the focused image according to the image distance of the target image corresponding to the target pixel point in the focusing area;
step S1310, switching the image distance of the currently displayed image to the image distance of the focused image according to the preset switching speed or the preset switching time, so as to output the focused image.
In application, the image distance of the currently displayed image can be acquired, and the image distance of the focused image is determined according to the image distance of the target image corresponding to the target pixel point in the focusing area. When the focused image is output, the image distance of the currently displayed image is switched to the image distance of the focused image at a preset switching speed or within a preset switching time. This prevents the image switch from stuttering due to an excessive image-distance change, which could occur if the display jumped directly to the image distance of the focused image, and improves smoothness by transitioning through the interval image distances during the switch.
The preset switching speed or the preset switching time can be set according to actual needs, and the specific size of the preset switching speed or the preset switching time is not limited in any way in the embodiment of the application.
In one embodiment, step S1310 includes:
acquiring the interval image distances between the image distance of the currently displayed image and the image distance of the focused image;
when switching to the qth interval image distance, focusing the picture to be processed according to the qth interval image distance and outputting the qth transition image;
wherein q = 1, 2, …, m, and m is an integer greater than or equal to 1.
In application, the interval image distances between the image distance of the currently displayed image and the image distance of the focused image can be acquired, and when switching reaches the qth interval image distance, the picture to be processed is focused according to the qth interval image distance and the qth transition image is output. The number of interval image distances may be obtained by dividing the preset switching time by a unit time (specifically, 10 milliseconds or 100 milliseconds, etc.) and rounding up, which gives the number of transition images to be output. By outputting the transition images, the images at different image distances during zooming can be displayed, so that the user can observe the image sharpness at different image distances during the zooming process, improving the flexibility and effect of zooming.
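A sketch of generating the interval image distances follows; the timing values and the evenly spaced (linear) transition are assumptions for illustration:

```python
import numpy as np

def interval_image_distances(current, target, switch_time_ms=300, frame_ms=10):
    """Sketch of the smooth image-distance switch: generate the m interval
    image distances between the currently displayed image and the focused
    image, one transition image per frame interval. The 300 ms switching
    time and 10 ms unit time are illustrative, not from the patent."""
    m = max(int(np.ceil(switch_time_ms / frame_ms)) - 1, 0)
    # m evenly spaced interval image distances, excluding both endpoints.
    return np.linspace(current, target, m + 2)[1:-1]
```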
As shown in fig. 14, in an embodiment, based on the embodiment corresponding to fig. 11, the method includes the following steps S1401 to S1410:
step S1401, acquiring images of a picture to be processed under different image distances;
step S1402, carrying out target object identification on the image, and determining a first evaluation value of each pixel point in the image based on a target object identification result;
step S1403, performing sharpness recognition on the image, and determining a second evaluation value of each pixel point in the image based on a sharpness recognition result;
step S1404, identifying the saliency of the image, and determining a third evaluation value of each pixel point in the image based on the saliency identification result;
step S1405, determining an evaluation value of a pixel point at the same pixel position in each image according to a first evaluation value, a second evaluation value and a third evaluation value of the pixel point at the same pixel position in each image;
step S1406, when in the automatic focusing mode, taking the complete picture to be processed as the focusing area;
step S1407, when the mobile terminal is in the manual focusing mode, selecting at least one pixel point as a focusing area according to the manual focusing instruction;
step S1408, outputting a focused image according to the target image corresponding to the target pixel point in the focusing area.
In the application, the focusing method provided in steps S1401 to S1408 is the same as the focusing method provided in steps S1101 to S1108, and will not be described herein again.
Step S1409, judging, at a preset frequency, whether the pixel position of the target pixel point in the focusing area has changed;
step S1410, when the pixel position of the target pixel point in the focusing area has changed, returning to the step of acquiring images of the picture to be processed at different image distances, so as to update the focused image according to the changed target pixel point.
In application, whether the position of the target pixel point has changed can be detected at a preset frequency. Specifically, the maximum evaluation values of all pixel points may be obtained based on steps S1401 to S1405, and it is determined whether the maximum evaluation value of the current target pixel point in the focusing area is still greater than or equal to the maximum evaluation values of the other pixel points in the focusing area. If so, the position of the target pixel point is unchanged, and the focused image continues to be output according to the target image corresponding to the target pixel point; if not, the position of the target pixel point has changed, and the process returns to step S1401 to update the pixel position of the target pixel point, so as to update the focused image according to the new target pixel point. The preset frequency may be once per second, and its magnitude can be set according to actual needs.
In application, by detecting in real time whether the position of the target pixel point has changed and quickly updating the pixel position of the target pixel point when it has, the focused image is output according to the target image corresponding to the updated target pixel point, improving the focusing response speed and the degree of focusing automation.
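A hedged sketch of this periodic check follows; compute_max_eval is a hypothetical callback re-running steps S1401 to S1405, and the 1-second period follows the example in the text:

```python
import time

def track_target(compute_max_eval, focus_mask, target_yx, period_s=1.0):
    """Sketch of the preset-frequency check: re-evaluate whether the current
    target pixel point still holds the maximum evaluation value inside the
    focusing area, and report when the focused image must be updated."""
    while True:
        max_eval = compute_max_eval()                   # fresh (H, W) evaluation map
        if max_eval[target_yx] < max_eval[focus_mask].max():
            return True                                 # target moved: re-acquire images
        time.sleep(period_s)
```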
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
As shown in fig. 15, an image capturing apparatus 100 provided in an embodiment of the present application includes a processor 110, a memory 120, a computer program 121 stored in the memory 120 and executable on the processor 110, and a light field camera 130, where the processor 110 implements the steps of the focusing method provided in the above embodiment when executing the computer program 121;
the light field camera 130 is used to capture a picture to be processed and record the propagation path of light in any direction when capturing the picture to be processed by light in any direction.
In application, the processor 110 may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or any conventional processor.
In some embodiments, the memory 120 may be an internal storage unit of the terminal device, such as a hard disk or memory of the terminal device. In other embodiments, the memory 120 may be an external storage device of the terminal device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the terminal device. Further, the memory 120 may include both an internal storage unit and an external storage device of the terminal device. The memory 120 is used to store an operating system, application programs, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program; it may also be used to temporarily store data that has been output or is to be output.
In application, the light field camera 130 may include elements such as a main lens, a microlens array, and an image sensor; for the functions of these elements, reference may be made to the relevant descriptions in the above method embodiments, which are not repeated here. The embodiments of the present application do not limit the specific structure of the light field camera 130.
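As background to the elements just listed, a microlens-array light field camera records enough directional ray information that images at new image distances can be synthesized after capture. The sketch below shows the classical shift-and-add refocusing commonly used with such data; it is a simplified illustration under stated assumptions (a 4-D grayscale light field L[u, v, s, t] of sub-aperture views, and a refocusing parameter alpha giving the ratio of the virtual image distance to the actual one), not the specific refocusing steps of this application.

import numpy as np

def refocus(L, alpha):
    # L:     light field indexed as L[u, v, s, t]; (u, v) selects a
    #        sub-aperture view, (s, t) are spatial pixel coordinates
    # alpha: virtual image distance divided by actual image distance;
    #        alpha = 1 reproduces the originally captured focal plane
    U, V, S, T = L.shape
    out = np.zeros((S, T), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its aperture offset and
            # (1 - 1/alpha), then average all shifted views.
            du = int(round((u - U / 2) * (1 - 1 / alpha)))
            dv = int(round((v - V / 2) * (1 - 1 / alpha)))
            out += np.roll(L[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

Sweeping alpha over a set of values yields the images at different image distances from which the per-pixel evaluation values described above can then be computed.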
As shown in fig. 16, a terminal device 200 provided in an embodiment of the present application includes the image capturing device 100 provided in the above-described embodiment.
It is to be understood that the configuration illustrated in the embodiments of the present application does not constitute a specific limitation on the image pickup apparatus 100 or the terminal apparatus 200. In other embodiments of the present application, the image pickup apparatus 100 and the terminal apparatus 200 may include more or fewer components than shown, may combine certain components, or may use different components; for example, they may further include input/output devices, network access devices, and the like. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the foregoing focusing method embodiments.
The integrated module, if implemented in the form of a software functional module and sold or used as an independent product, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments may be implemented by a computer program, which may be stored in a computer-readable storage medium and which, when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in source code form, object code form, the form of an executable file, some intermediate form, or the like. The computer-readable storage medium may include at least: any entity or apparatus capable of carrying the computer program code to a photographing terminal device, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk.
In the above embodiments, the description of each embodiment has its own emphasis, and reference may be made to the related description of other embodiments for parts that are not described or recited in any embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative modules and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed terminal device and method may be implemented in other ways. For example, the terminal device embodiments described above are merely illustrative: the division into modules is only one kind of logical functional division, and there may be other ways of division in actual implementation; for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through interfaces, devices, or modules, and may be electrical, mechanical, or in other forms.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (14)

1. A focusing method, comprising:
acquiring images of a picture to be processed at different image distances;
determining the evaluation value of the pixel point at the same pixel position in each image;
outputting a focusing image according to a target image corresponding to a target pixel point in a focusing area;
wherein the maximum evaluation value of the target pixel point is greater than or equal to the maximum evaluation values of the other pixel points in the focusing area, and the evaluation value of the target pixel point is maximum in the target image; the image distance of the focused image is the same as that of the target image.
2. The focusing method of claim 1, wherein the acquiring images of the picture to be processed at different image distances comprises:
refocusing the picture to be processed at different image distances, so as to obtain the images of the picture to be processed at the different image distances.
3. The focusing method of claim 2, wherein the refocusing the picture to be processed at different image distances to obtain the images of the picture to be processed at the different image distances comprises:
acquiring the image distance of an i-th virtual imaging surface according to an i-th refocusing parameter and the image distance of the actual imaging surface;
acquiring an i-th image of the picture to be processed at the image distance of the i-th virtual imaging surface according to the light ray information of the actual imaging surface, the light ray information of the i-th virtual imaging surface, the i-th refocusing parameter, and the image distance of the actual imaging surface;
wherein i = 1, 2, …, n, and n is an integer greater than or equal to 1.
4. The focusing method of claim 1, wherein determining the evaluation value of the pixel point at the same pixel position in each image comprises:
performing target object recognition on the image, and determining a first evaluation value of each pixel point in the image based on a target object recognition result;
performing definition recognition on the image, and determining a second evaluation value of each pixel point in the image based on a definition recognition result;
performing significance recognition on the image, and determining a third evaluation value of each pixel point in the image based on a significance recognition result;
and determining the evaluation value of the pixel point at the same pixel position in each image according to the first evaluation value, the second evaluation value and the third evaluation value of the pixel point at the same pixel position in each image.
5. The focusing method of claim 1, wherein after determining the evaluation value of the pixel point at the same pixel position in each image, the method further comprises:
acquiring the maximum evaluation value of each pixel point, and recording, for each pixel point, the image in which the maximum evaluation value is obtained.
6. The focusing method of claim 1, wherein before outputting the focused image according to the target image corresponding to the target pixel point in the focusing area, the method further comprises:
when in an automatic focusing mode, taking the complete picture to be processed as the focusing area;
and when in a manual focusing mode, taking at least one pixel point selected according to a manual focusing instruction as the focusing area.
7. The focusing method of claim 6, wherein, when in the manual focusing mode, taking at least one pixel point selected according to the manual focusing instruction as the focusing area comprises:
when in the manual focusing mode, acquiring a preset number of recommended pixel points according to the maximum evaluation value of each pixel point, wherein the maximum evaluation value of each recommended pixel point is greater than or equal to the maximum evaluation values of the other pixel points in the picture to be processed;
generating and displaying a recommended focusing area according to the pixel position of the recommended pixel point;
and receiving a manual focusing instruction, and taking at least one pixel point selected according to the manual focusing instruction as a focusing area.
8. The focusing method of claim 1, wherein outputting a focused image according to a target image corresponding to a target pixel point in a focusing area comprises:
focusing the picture to be processed according to the image distance of the target image corresponding to the target pixel point in the focusing area, and outputting the focused image;
or, acquiring the target image recorded for the target pixel point in the focusing area when its maximum evaluation value was obtained, and outputting the target image as the focused image.
9. The focusing method of claim 1, wherein outputting a focused image according to a target image corresponding to a target pixel point in a focusing area comprises:
acquiring the image distance of a currently displayed image;
determining the image distance of the focused image according to the image distance of the target image corresponding to the target pixel point in the focusing area;
and switching the image distance of the currently displayed image to the image distance of the focused image according to a preset switching speed or a preset switching time, so as to output the focused image.
10. The focusing method of claim 9, wherein the switching the image distance of the currently displayed image to the image distance of the focused image according to a preset switching speed or a preset switching time to output the focused image comprises:
acquiring the image distance of the currently displayed image and the image distance of the focused image;
when switching to a q-th interval image distance between the two, focusing the picture to be processed according to the q-th interval image distance, and outputting a q-th transition image;
wherein q = 1, 2, …, m, and m is an integer greater than or equal to 1.
11. The focusing method according to any one of claims 1 to 10, wherein after outputting a focused image according to a target image corresponding to a target pixel point in a focusing area, the method further comprises:
judging, at a preset frequency, whether the pixel position of the target pixel point in the focusing area has changed;
and when the pixel position of the target pixel point in the focusing area has changed, returning to the step of acquiring images of the picture to be processed at different image distances, so as to update the focused image according to the changed target pixel point.
12. An image pickup apparatus comprising a processor, a memory, a computer program stored in the memory and executable on the processor, and a light field camera, the processor implementing the steps of the focusing method according to any one of claims 1 to 11 when executing the computer program;
the light field camera is used to capture the picture to be processed and to record, for light arriving from any direction, the propagation path of the light when the picture to be processed is captured.
13. A terminal device characterized by comprising the image pickup device according to claim 12.
14. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the focusing method according to any one of claims 1 to 11.
CN202210669584.6A (priority and filing date 2022-06-14): Focusing method, image pickup apparatus, terminal apparatus, and storage medium. Status: Active; granted as CN115086558B.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210669584.6A 2022-06-14 2022-06-14 Focusing method, image pickup apparatus, terminal apparatus, and storage medium

Publications (2)

Publication Number Publication Date
CN115086558A (en) 2022-09-20
CN115086558B (en) 2023-12-01

Family ID: 83252316

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101547306A (en) * 2008-03-28 2009-09-30 鸿富锦精密工业(深圳)有限公司 Video camera and focusing method thereof
US20140219576A1 (en) * 2013-02-01 2014-08-07 Canon Kabushiki Kaisha Image processing apparatus and control method thereof
CN104104869A (en) * 2014-06-25 2014-10-15 华为技术有限公司 Photographing method and device and electronic equipment
US20150003740A1 (en) * 2011-06-28 2015-01-01 Sony Corporation Image processing device, method of controlling image processing device, and program for enabling computer to execute same method
US20180184070A1 (en) * 2016-12-27 2018-06-28 Qualcomm Incorporated Method and system for depth estimation based upon object magnification
CN110602397A (en) * 2019-09-16 2019-12-20 RealMe重庆移动通信有限公司 Image processing method, device, terminal and storage medium
CN112351196A (en) * 2020-09-22 2021-02-09 北京迈格威科技有限公司 Image definition determining method, image focusing method and device
CN114554085A (en) * 2022-02-08 2022-05-27 维沃移动通信有限公司 Focusing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant