CN114245023A - Focusing processing method and device, camera device and storage medium - Google Patents


Info

Publication number
CN114245023A
CN114245023A (application CN202210171470.9A)
Authority
CN
China
Prior art keywords
image
focus
determining
target image
focusing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210171470.9A
Other languages
Chinese (zh)
Other versions
CN114245023B (en)
Inventor
李�浩
王文龙
华旭宏
杨国全
俞鸣园
王克彦
曹亚曦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Huachuang Video Signal Technology Co Ltd
Original Assignee
Zhejiang Huachuang Video Signal Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Huachuang Video Signal Technology Co Ltd filed Critical Zhejiang Huachuang Video Signal Technology Co Ltd
Priority to CN202210171470.9A priority Critical patent/CN114245023B/en
Publication of CN114245023A publication Critical patent/CN114245023A/en
Application granted granted Critical
Publication of CN114245023B publication Critical patent/CN114245023B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/67 Focus control based on electronic image sensor signals
    • H04N 23/675 Focus control based on electronic image sensor signals comprising setting of focusing regions

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Automatic Focus Adjustment (AREA)

Abstract

The application relates to a focus processing method and apparatus, an image pickup apparatus, and a storage medium, wherein the method comprises the following steps: acquiring a target image and dividing the target image into a plurality of image blocks; determining the focus sharpness and the region attribute of each of the image blocks, wherein the region attributes comprise at least a dynamic region, determined according to the degree of change of the focus sharpness of the image block across the preceding and following frame images; determining the focus evaluation values of the image blocks according to the focus sharpness and the region attributes; and determining the focus position of the target image according to the focus evaluation values of the image blocks.

Description

Focusing processing method and device, camera device and storage medium
Technical Field
The present application relates to the field of imaging technologies, and in particular, to a focus processing method and apparatus, an imaging apparatus, and a storage medium.
Background
Auto focus (AF) is based on the reflection of light from an object: the image pickup apparatus collects the reflected light with a group of lenses (convex or concave) and passes the resulting optical signal to its image sensor. The image sensor generates a raw image of the object from the optical signal; after a processor has processed the raw image, a focusing device is driven to perform focusing.
The related art proposes an auto-focus technique based on image processing. Its basic principle is to acquire several images in advance, each at a different focus position; a preset focus evaluation function then determines an evaluation value for each image, and an appropriate focus position is finally chosen from those evaluation values. This scheme has to capture multiple images and compute an evaluation value for each of them, so its processing efficiency is low.
Therefore, there is a need in the art for an efficient and accurate focusing method.
Disclosure of Invention
The embodiment of the application provides a focusing processing method, a focusing processing device, electronic equipment and a storage medium, and aims to at least solve the problem that the processing efficiency of the focusing processing method in the related art is low.
In a first aspect, an embodiment of the present application provides a focus processing method, including:
acquiring a target image, and dividing the target image into a plurality of image blocks;
determining the focus sharpness and the region attribute of each of the plurality of image blocks, wherein the region attributes comprise at least a dynamic region, the dynamic region being determined according to the degree of change of the focus sharpness of an image block across the preceding and following frame images;
determining the focus evaluation values of the plurality of image blocks according to the focus sharpness and the region attributes;
and determining the focus position of the target image according to the focus evaluation values of the plurality of image blocks.
Optionally, in an embodiment of the present application, the region attribute of the dynamic region is determined as follows:
acquiring at least one image frame before and/or after the target image;
determining the degree of change in the focus sharpness of image blocks at the same position in the target image and the at least one image frame;
and, when the degree of change is greater than a second preset threshold, determining that the region attribute of the image block at that position in the target image is a dynamic region.
Optionally, in an embodiment of the present application, determining the focus evaluation values of the plurality of image blocks according to the focus sharpness and the region attributes includes:
determining weights of the image blocks according to the region attributes;
and determining the focus evaluation values of the image blocks according to the focus sharpness and the weights.
Optionally, in an embodiment of the present application, determining the focus evaluation values of the plurality of image blocks according to the focus sharpness and the region attributes includes:
determining a shooting scene corresponding to the target image;
and determining the focus evaluation values of the image blocks according to the focus sharpness, the region attributes, and the shooting scene.
Optionally, in an embodiment of the present application, determining the focus evaluation values of the plurality of image blocks according to the focus sharpness, the region attributes, and the shooting scene includes:
determining weight scores corresponding to the different region attributes according to the shooting scene;
determining the weights applied to the focus sharpness according to the weight scores corresponding to the different region attributes;
and determining the focus evaluation values of the image blocks according to the focus sharpness and the weights.
Optionally, in an embodiment of the present application, determining the focus evaluation values of the plurality of image blocks according to the focus sharpness and the region attributes includes:
determining a degree of dispersion of the focus sharpness of the plurality of image blocks;
and, when the degree of dispersion is greater than a first preset threshold, determining the focus evaluation values of the image blocks according to the focus sharpness and the region attributes.
Optionally, in an embodiment of the present application, determining the focus position of the target image according to the focus evaluation values of the plurality of image blocks includes:
determining a focus evaluation value of the target image according to the focus evaluation values of the image blocks;
and determining the focus position of the target image when the focus evaluation value of the target image meets a preset condition.
Optionally, in an embodiment of the present application, the preset condition includes at least one of:
the focus evaluation value of the target image is greater than a preset evaluation value threshold;
the focus evaluation value of the target image is greater than that of the frame preceding the target image.
In a second aspect, an embodiment of the present application further provides a focus processing apparatus, including:
an image acquisition module, configured to acquire a target image and divide the target image into a plurality of image blocks;
a block information acquisition module, configured to determine the focus sharpness and the region attribute of each of the plurality of image blocks, wherein the region attributes comprise at least a dynamic region, the dynamic region being determined according to the degree of change of the focus sharpness of an image block across the preceding and following frame images;
an evaluation value determining module, configured to determine the focus evaluation values of the plurality of image blocks according to the focus sharpness and the region attributes;
and a focus position determining module, configured to determine the focus position of the target image according to the focus evaluation values of the plurality of image blocks.
In a third aspect, an image pickup apparatus includes a lens group, an image sensor, a memory in which a computer program is stored, and a processor configured to execute the computer program to perform the focus processing method.
In a fourth aspect, a non-transitory computer readable storage medium has stored thereon computer program instructions which, when executed by a processor, implement the focus processing method.
In a fifth aspect, a computer program product includes computer-readable code, or a non-transitory computer-readable storage medium carrying computer-readable code, which, when run by a processor of an electronic device, causes the processor to perform the focus processing method.
In a sixth aspect, a chip includes at least one processor configured to execute a computer program or computer instructions stored in a memory to perform the focus processing method.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the application.
According to the focus processing method and apparatus, the image pickup apparatus, and the storage medium provided by the embodiments of the application, the focus position can be determined by processing a target image of the shooting scene. In terms of processing cost, the method determines the focus position from only one target image, so the cost is low and the efficiency is high. Specifically, during focus processing the target image is divided into a plurality of image blocks, the focus sharpness and the region attribute of each block are determined, and the focus evaluation value of each block is determined from the two. Giving every image block a region attribute lets that attribute act as an important factor in whether the block can become the focus position, which improves the accuracy of the determined focus position. In addition, image blocks with the dynamic-region attribute are identified from the degree of change of focus sharpness between the target image and its preceding and following image frames, which gives high identification efficiency and accuracy and meets the need to identify dynamic regions in the shooting scene.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic view of an application scenario provided in an embodiment of the present application;
FIG. 2 is a flowchart of a focus processing method according to an embodiment of the present application;
fig. 3 is a schematic diagram of an application scenario provided in an embodiment of the present application;
fig. 4 is a schematic block diagram of a focus processing apparatus according to an embodiment of the present application;
FIG. 5 is a schematic block diagram of a processing apparatus according to an embodiment of the present application;
fig. 6 is a schematic block diagram of a computer program product according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms used herein shall have the ordinary meaning understood by those of ordinary skill in the art to which this application belongs. References to "a," "an," "the," and similar words are not to be construed as limiting in number and may refer to the singular or the plural. The terms "including," "comprising," "having," and any variations thereof, as used in this application, are intended to cover non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (units) is not limited to the listed steps or units but may include other steps or units not expressly listed or inherent to it. References to "connected," "coupled," and the like are not limited to physical or mechanical connections and may include electrical connections, whether direct or indirect. "A plurality" herein means two or more. "And/or" describes an association between objects and covers three cases: for example, "A and/or B" may mean A alone, A and B together, or B alone. The terms "first," "second," "third," and the like merely distinguish similar objects and do not denote a particular ordering.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present application. It will be understood by those skilled in the art that the present application may be practiced without some of these specific details. In some instances, devices, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present application.
In order to clearly illustrate the technical solutions of the embodiments of the present application, an application environment of the embodiments of the present application is described below with reference to fig. 1.
Referring to fig. 1, fig. 1 is a schematic view of an application scenario provided in an embodiment of the present application; it illustrates the focus processing method applied to a conference scene. Specifically, the conference scene may be focused by the camera 101, which may be any electronic device with an image capturing function, which is not limited herein. A conference scene may contain target objects such as participants, conference tables, and notebooks. The embodiment of the present application provides a focus processing apparatus 103, which may take various forms, such as an electronic device, a non-volatile computer-readable storage medium, a computer program product, or a chip. As an electronic device, the focus processing apparatus can exchange data with the camera 101 and process the target images captured by it. As a non-volatile computer-readable storage medium, a computer program product, or a chip, the focus processing apparatus may be built into the camera 101 so that the camera itself performs focus processing. Of course, the focus processing apparatus may also be deployed in another terminal (such as a smartphone), a server, or the cloud: the target image acquired by the camera 101 is transmitted there over the network, and after the focus processing is completed, the focus position is transmitted back to the camera 101.
It should be noted that the focusing method and the focusing device provided in the embodiments of the present application can perform focusing processing on not only a conference scene, but also a portrait scene, a landscape scene, a food scene, a motion scene, and other different scenes, and the application scene is not limited in any way.
The focusing method is described in detail below with reference to the drawings. Fig. 2 is a schematic flowchart of an embodiment of the focus processing method provided in the present application. Although the present application presents method steps as shown in the following embodiments or figures, the method may include more or fewer steps based on conventional or non-inventive effort. Where no necessary causal relationship logically exists between steps, their order of execution is not limited to that provided by the embodiments of the present application; in actual focus processing, the steps may be executed sequentially or in parallel (for example, on a parallel processor or in a multi-threaded environment) in the order shown in the embodiments or figures.
Specifically, as shown in fig. 2, an embodiment of the focusing processing method provided in the present application may include:
s201: a target image is acquired and divided into a plurality of image blocks.
In the embodiment of the present application, the target image may be obtained under various conditions: for example, during video recording, or before the shutter is pressed while photographing. The target image may come from any shooting scene that needs focusing, which is not limited herein. After the target image is acquired, it may be divided into a plurality of image blocks, for example into M × N blocks. Since the focus position is a small image area within the target image, dividing the target image into blocks and then determining the focus position from among those blocks allows the focus position to be found quickly.
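To make the block-division step concrete, the following is a minimal Python sketch (the patent specifies no code; the function name, the use of NumPy, and the drop-the-remainder policy for dimensions that do not divide evenly are all assumptions):

```python
import numpy as np

def split_into_blocks(image: np.ndarray, m: int, n: int) -> list:
    """Divide an image into an m x n grid of equally sized blocks.

    Rows/columns that do not fit evenly are dropped for simplicity;
    a production implementation might pad the image instead.
    """
    h, w = image.shape[:2]
    bh, bw = h // m, w // n
    return [
        image[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
        for i in range(m)
        for j in range(n)
    ]

# A 64 x 64 image divided into a 4 x 4 grid yields 16 blocks of 16 x 16.
blocks = split_into_blocks(np.zeros((64, 64)), 4, 4)
```

Each block can then be scored and classified independently, which is what the following steps do.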
S203: determining the focus sharpness and the region attribute of each of the image blocks, wherein the region attributes comprise at least a dynamic region, the dynamic region being determined according to the degree of change of the focus sharpness of the image block across the preceding and following frame images.
In this embodiment of the present application, the focus sharpness of each of the plurality of image blocks may first be determined. If an image block is to serve as the focus position, it needs to meet a sharpness requirement, and the focus sharpness measures how sharp the corresponding image block is. In one embodiment of the present application, the focus sharpness of an image block may be represented by a two-dimensional Gaussian function. In one example, the two-dimensional Gaussian function for an image block at (x, y) can be expressed as:

G(x, y) = \exp\left( -\left( \frac{(x - x_0)^2}{2\sigma_x^2} + \frac{(y - y_0)^2}{2\sigma_y^2} \right) \right) \qquad (1)

where (x_0, y_0) is the center point of the image block, and \sigma_x and \sigma_y are the standard deviations of the image block in the horizontal and vertical directions.
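Purely as an illustration of equation (1), the Gaussian can be evaluated as below (the function name and the use of Python are assumptions; the patent only gives the formula):

```python
import math

def gaussian_weight(x: float, y: float, x0: float, y0: float,
                    sigma_x: float, sigma_y: float) -> float:
    """Two-dimensional Gaussian of equation (1): equals 1 at the block
    center (x0, y0) and decays with distance at rates set by the
    horizontal and vertical standard deviations."""
    return math.exp(-(((x - x0) ** 2) / (2.0 * sigma_x ** 2)
                      + ((y - y0) ** 2) / (2.0 * sigma_y ** 2)))
```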
Of course, in other embodiments, the focus sharpness of an image block may also be determined using functions such as the Brenner gradient function, the Laplacian gradient function, the gray variance (SMD), the gray variance product (SMD2), the variance function, the energy gradient function, the Vollath function, or the entropy function, which is not limited herein.
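Two of the listed measures, the Brenner gradient and the Laplacian gradient, are easy to sketch in NumPy (the function names and this plain-array formulation are assumptions; libraries such as OpenCV offer equivalent operators):

```python
import numpy as np

def brenner_sharpness(block: np.ndarray) -> float:
    """Brenner gradient: sum of squared differences between pixels two
    columns apart. Larger values indicate a sharper block."""
    g = block.astype(np.float64)
    diff = g[:, 2:] - g[:, :-2]
    return float(np.sum(diff ** 2))

def laplacian_sharpness(block: np.ndarray) -> float:
    """Sum of squared responses of the 4-neighbour Laplacian kernel."""
    g = block.astype(np.float64)
    lap = (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:]
           - 4.0 * g[1:-1, 1:-1])
    return float(np.sum(lap ** 2))
```

A perfectly uniform block scores zero under both measures, while a block containing a strong edge scores high; either value can serve as the per-block focus sharpness fed into the evaluation step.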
In this embodiment, the region attribute may include information of a target category, an image category, and the like to which the image block belongs, where the target category includes a category of a target object in the image block, such as a category of a face, a scene, food, a pet, and the like, and the image category may include a category of the image block in an image dimension, which may specifically be determined according to an image parameter, such as a region-of-interest category, a static category, a dynamic category, and the like. The block attribute and the determination method thereof may be preset. In one embodiment of the present application, the region attribute may include at least a dynamic region, and the dynamic region may be determined according to a degree of change in focus sharpness of the image block in previous and subsequent frame images. In the case where the area attribute of the image patch includes a dynamic area, it may be determined that the image patch includes a dynamic target object, such as a walking pedestrian or the like. In one embodiment of the present application, the region attribute of the dynamic region is determined as follows:
s301: acquiring at least one image frame before and/or after the target image;
s303: determining a degree of change in focus sharpness of image blocks at a same location in the target image and the at least one image frame;
s305: and under the condition that the change degree is greater than a second preset threshold value, determining that the area attribute of the image block at the position in the target image is a dynamic area.
In an embodiment of the application, at least one image frame before and/or after the target image is first acquired: at least one frame captured before the target image and/or at least one frame captured after it. Each such frame may be divided into image blocks in the same way as the target image, with blocks of the same size. The focus sharpness of the blocks in the at least one image frame is then determined. By comparing the degree of change in the focus sharpness of blocks at the same position, it can be determined whether the region attribute of each block includes a dynamic region. With only one additional image frame, the degree of change may be the difference in focus sharpness between the blocks at the same position; with multiple frames, it may be any measure of the dispersion of multiple values, such as the standard deviation or variance of the focus sharpness of blocks at the same position across the target image and the frames, which is not limited herein. When the degree of change of the focus sharpness of the block at a given position is large, for example greater than the second preset threshold, the region attribute of the block at that position may be determined to include a dynamic region. Of course, in other embodiments, dynamic targets in the target image may instead be detected with a dynamic-target detection method based on the YOLO model, thereby identifying the image blocks with the dynamic-region attribute.
Through this embodiment, the image blocks with the dynamic-region attribute in the target image can be determined quickly and accurately from the divided blocks and the already-computed focus sharpness.
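Steps S301 to S305 can be sketched as follows. The input is one per-block sharpness grid per frame; the output is a boolean mask marking blocks whose region attribute includes the dynamic region (the function name, and the dispatch between absolute difference for two frames and standard deviation for more, follow the text above but are otherwise assumptions):

```python
import numpy as np

def dynamic_block_mask(sharpness_frames, threshold: float) -> np.ndarray:
    """Mark image blocks whose focus sharpness varies strongly across frames.

    sharpness_frames: sequence of 2-D arrays, one per frame, each holding
    the per-block focus sharpness of the same M x N grid.
    """
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in sharpness_frames])
    if stack.shape[0] == 2:
        # Two frames: degree of change is the absolute sharpness difference.
        change = np.abs(stack[1] - stack[0])
    else:
        # More frames: use a dispersion measure such as the standard deviation.
        change = stack.std(axis=0)
    return change > threshold  # True where the block counts as dynamic
```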
In other embodiments, the region attributes may further include a region of interest and a face region. The region of interest may be an image area selected by the user as a candidate focus position; for example, the user may select it in the viewfinder frame with a rectangular, circular, elliptical, or other-shaped frame, which is not limited herein. Specifically, when determining the region of interest, the user's selection operation in the viewfinder may be received; in response, at least one image block covered by the selected region is determined, and the region attribute of that block or blocks is set to include the region of interest. When the region attribute of an image block includes a face region, the block may be determined to contain a face image. Specifically, when determining face regions, face detection may be performed on the target image; if the target image is found to contain a face image, at least one image block covered by the face image is determined, and its region attribute is set to include the face region.
It should be noted that the region attributes are not limited to the above region of interest, face region, and dynamic region; they may also include, for example, a scenery region or a pet-face region, and region attributes matching a given application scenario may be set accordingly, which is not limited herein. In addition, the same image block may carry a plurality of different region attributes. As shown in fig. 3, image block 301 has the two region attributes of region of interest and face region, image block 302 has the two region attributes of face region and dynamic region, and image block 303 has the region attribute of dynamic region.
S205: determining the focus evaluation values of the image blocks according to the focus sharpness and the region attributes.
In the embodiment of the present application, the focus evaluation value of each image block is associated with its focus sharpness and region attribute. In an embodiment of the present application, a weight is determined for each image block according to its region attribute. Specifically, determining the focus evaluation values of the plurality of image blocks according to the focus sharpness and the region attributes may include:
S401: determining weights of the image blocks according to the region attributes;
S403: determining the focus evaluation values of the image blocks according to the focus sharpness and the weights.
In the embodiment of the present application, the weight of each image block is set according to its region attributes, so that the region attributes influence whether the image block can become the focus position. For example, in one embodiment of the present application, the region of interest and the face region may be positively correlated with the weight while the dynamic region is negatively correlated with it; that is, the probability that image blocks corresponding to the region of interest and the face region become the focus position is increased, and the probability for image blocks in dynamic regions is reduced. The focus processing method of the embodiments of the application is particularly suitable for shooting a conference scene, where the face images of the participants should be the focus position; if the user selects a region of interest, that region may also be the focus position, while a dynamic region in the conference should not be the focus position because of its dynamically changing nature. After the focus sharpness and the weight of each image block are determined, the focus evaluation value of the block can be determined from them. In one example, the focus evaluation value E_i of the i-th image block can be expressed as:

E_i = w_i \cdot s_i \qquad (2)

where w_i denotes the weight of the i-th image block and s_i denotes its focus sharpness.
In the embodiment of the application, in the process of determining the weights, weight scores corresponding to different region attributes may be set. For example, the weight score corresponding to the region of interest is set to a, the weight score corresponding to the face region is set to b, and the weight score corresponding to the dynamic region is set to c, where a, b, and c are positive values. Since the region of interest and the face region are positively correlated with the weight while the dynamic region is negatively correlated with the weight, the weight w_i of the i-th image block can be expressed as:

w_i = a * x_roi + b * x_face - c * x_dyn    (3)

where x_roi, x_face, and x_dyn take the value 1 when the i-th image block has the corresponding region attribute and 0 otherwise.
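The two formulas above can be combined into a short sketch. This is an illustrative implementation under the assumptions stated in the text (a, b, c are positive scores; the attribute flags and the example score values are hypothetical):

```python
# Sketch of equations (2) and (3): a block weight built from region
# attributes, then a focus evaluation value as weight times sharpness.
# The default scores a, b, c are illustrative assumptions, not values
# prescribed by the method.

def block_weight(has_roi: bool, has_face: bool, has_dynamic: bool,
                 a: float = 7.0, b: float = 8.0, c: float = 2.0) -> float:
    """Equation (3): ROI and face attributes add to the weight,
    a dynamic attribute subtracts from it."""
    return a * has_roi + b * has_face - c * has_dynamic

def focus_evaluation(weight: float, sharpness: float) -> float:
    """Equation (2): evaluation value of one image block."""
    return weight * sharpness

w = block_weight(has_roi=True, has_face=True, has_dynamic=False)
print(w)                          # 15.0
print(focus_evaluation(w, 0.5))   # 7.5
```

A block that is both a region of interest and a face region thus scores highest for a given sharpness, while a dynamic block is penalized.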
of course, in other embodiments, the weight scores of different region attributes may also be dynamically adjusted, for example, for two image blocks each having a face region, the weight score of the face region at the target image position is higher than the weight scores of the face regions at other positions.
For example, in a portrait shooting mode, it is more appropriate to take an image block corresponding to a face region as the focus position than blocks with other region attributes, while in an outdoor dynamic mode it is more appropriate to take an image block corresponding to a dynamic region as the focus position. Based on this, in an embodiment of the present application, determining the focus evaluation values of the plurality of image blocks according to the focus sharpness and the region attributes respectively includes:
s501: determining a shooting scene corresponding to the target image;
s503: and respectively determining the focus evaluation values of the image blocks according to the focus definition, the region attribute and the shooting scene.
In the embodiment of the present application, the shooting scene may reflect the main purpose for which a user shoots an image or video, and typically includes, for example, a portrait scene, a landscape scene, a food scene, a sports scene, a conference scene, and the like. In some embodiments, the shooting scene may be determined according to a shooting mode selected by the user; many shooting devices preset a plurality of shooting modes, such as a portrait mode, a gourmet mode, a sport mode, and the like, from which the user may select a desired shooting mode. In other embodiments, the target image may be analyzed to obtain the shooting scene. Specifically, target detection may be performed on the target image to detect parameters such as the type, number, and size of the target objects, and the shooting scene corresponding to the target image may then be determined according to these parameters. In one example, when target detection is performed on the target image, the target image contains a face image, and the ratio of the face image to the whole target image is greater than a preset ratio threshold, it may be determined that the shooting scene corresponding to the target image is a portrait scene.
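The face-ratio example above can be sketched as follows. This is a hypothetical illustration: the function name, the box format, and the 0.1 ratio threshold are all assumptions, and a real implementation would obtain the face boxes from a detector:

```python
# Hypothetical sketch of the scene-inference step: given detected face
# boxes, classify the shot as a portrait scene when the total face area
# exceeds a preset ratio of the whole target image.

def infer_scene(face_boxes, image_w, image_h, ratio_threshold=0.1):
    """face_boxes: list of (x, y, w, h) rectangles from a face detector.
    Returns 'portrait' when the summed face area is a large enough
    fraction of the image, 'general' otherwise."""
    face_area = sum(w * h for (_, _, w, h) in face_boxes)
    if face_area / (image_w * image_h) > ratio_threshold:
        return "portrait"
    return "general"

print(infer_scene([(100, 50, 600, 500)], 1920, 1080))  # portrait
print(infer_scene([], 1920, 1080))                     # general
```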
After the shooting scene corresponding to the target image is determined, the focus evaluation values of the image blocks can be respectively determined according to the focus sharpness, the region attributes, and the shooting scene. Specifically, determining the focus evaluation values of the image blocks according to the focus sharpness, the region attributes, and the shooting scene may include:
s601: respectively determining weight scores corresponding to different region attributes according to the shooting scene;
s603: determining the weights of the image blocks according to the weight scores corresponding to the different regional attributes;
s605: and respectively determining the focus evaluation values of the image blocks according to the focus definition and the weights.
In the embodiment of the application, after the shooting scene of the target image is determined, the weight scores corresponding to different region attributes can be respectively determined according to the shooting scene. In some specific examples, for a portrait scene, the weight score of the face region may be set to be the highest, for example 8, the weight score of the region of interest may be set to 7, and that of the dynamic region to -2. As another example, for a landscape scene, the weight score of the landscape region may be set to be the highest, for example 8, the weight score of the region of interest may be set to 6, that of the face region to 2, and that of the dynamic region to -2. After the weight scores corresponding to the respective region attributes are determined, the weights of the plurality of image blocks may be determined respectively. For example, after the region attributes contained in an image block are determined, the weight scores corresponding to those region attributes may be summed to obtain the weight of the image block. Finally, the focus evaluation value of each image block may be determined according to formula (2) above.
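Steps S601 to S605 can be sketched with a per-scene score table. The table below mirrors the example scores in the text; the structure and function names are illustrative assumptions:

```python
# Sketch of steps S601-S605: look up per-attribute weight scores for the
# current shooting scene, sum the scores of the attributes present in a
# block, then score the block as weight times sharpness (equation (2)).

SCENE_SCORES = {
    "portrait":  {"face": 8, "roi": 7, "dynamic": -2},
    "landscape": {"landscape": 8, "roi": 6, "face": 2, "dynamic": -2},
}

def block_score(scene, attributes, sharpness):
    """attributes: list of region-attribute names found in the block."""
    scores = SCENE_SCORES[scene]
    weight = sum(scores.get(attr, 0) for attr in attributes)
    return weight * sharpness

# A block containing both a face and a region of interest in a portrait
# scene: weight 8 + 7 = 15, evaluation value 15 * 0.5 = 7.5.
print(block_score("portrait", ["face", "roi"], 0.5))  # 7.5
```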
In the embodiment of the present application, the focus evaluation values of the plurality of image partitions may be determined again when the sharpness of the target image meets a certain requirement. Based on this, in an embodiment of the present application, the determining the focus evaluation values of the plurality of image areas according to the focus sharpness and the area attributes respectively may include:
s701: determining a degree of dispersion of focus sharpness for the plurality of image patches;
s703: and under the condition that the discrete degree is determined to be larger than a first preset threshold, respectively determining the focus evaluation values of the image blocks according to the focus definition and the region attributes.
In this embodiment of the application, an average value of the focus sharpness of the plurality of image blocks may first be computed, and when the average value is greater than a preset threshold, the degree of dispersion of the focus sharpness of the plurality of image blocks may be determined. An average value greater than the preset threshold indicates that the average sharpness of the target image meets a certain requirement. However, even an image with a certain average sharpness may exhibit a virtual-focus phenomenon, in which the entire target image is not clear. The degree of dispersion of the focus sharpness can therefore be used to screen out images free of the virtual-focus phenomenon: the larger the dispersion of the focus sharpness of the plurality of image blocks, the larger the differences between the focus sharpness of different image blocks, and it can then be determined that no virtual-focus phenomenon has occurred. In an embodiment of the present application, the degree of dispersion may include the standard deviation, which can be represented by the following expression:
s = sqrt( (1/n) * Σ_{i=1}^{n} (d_i - d_avg)^2 )    (4)

where s represents the standard deviation of the focus sharpness of the plurality of image blocks, d_avg represents the average focus sharpness of the plurality of image blocks, d_i represents the focus sharpness of the i-th image block, and n is the total number of image blocks.
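The virtual-focus screen of equation (4) can be sketched as follows. The dispersion threshold is an illustrative assumption (the text only requires it to exceed "a first preset threshold"):

```python
# Sketch of equation (4) and the virtual-focus screen: compute the
# population standard deviation of the per-block sharpness, and treat the
# image as free of the virtual-focus phenomenon only when the dispersion
# exceeds a threshold.
import math

def sharpness_std(sharpness):
    """Equation (4): population standard deviation of block sharpness."""
    n = len(sharpness)
    mean = sum(sharpness) / n
    return math.sqrt(sum((d - mean) ** 2 for d in sharpness) / n)

def not_virtual_focus(sharpness, threshold=0.05):
    """True when blocks differ enough in sharpness (no virtual focus)."""
    return sharpness_std(sharpness) > threshold

print(round(sharpness_std([0.2, 0.8, 0.2, 0.8]), 2))  # 0.3
print(not_virtual_focus([0.2, 0.8, 0.2, 0.8]))        # True
print(not_virtual_focus([0.5, 0.5, 0.5]))             # False
```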
S207: and determining the focusing position of the target image according to the focusing evaluation values of the image blocks.
In the embodiment of the present application, after the focus evaluation values of the plurality of image patches are determined, respectively, the focus position of the target image may be determined. In one embodiment of the present application, an image block having the highest focus evaluation value may be used as the focus position of the target image. In one possible embodiment, if there are a plurality of image blocks with the highest focus evaluation value and the positions of the image blocks are scattered, the user may be allowed to specify the corresponding focus position. Specifically, a plurality of image blocks having the highest focus evaluation value may be highlighted within the finder frame of the image pickup device, so that the user can freely select the focus position. Of course, in other embodiments, the focus position may also be determined based on the target detection result. For example, for a plurality of image blocks with the highest focus evaluation value, if one of the image blocks is detected to correspond to a face image, the image block can be used as a focus position. Of course, the focusing position of the target image can also be determined according to the position of the image area in the target image. For example, an image block located at the center of the target image is more likely to be the focus position of the target image than an image block located at the edge.
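One of the strategies above, taking the highest-valued block and breaking ties by distance to the image center, can be sketched as follows. The row-major grid layout and function name are assumptions for illustration:

```python
# Sketch of step S207: pick the block with the highest evaluation value;
# among ties, prefer the block closest to the image center, one of the
# tie-breaking strategies described in the text.

def choose_focus_block(values, grid_w, grid_h):
    """values: one evaluation value per block, row-major order.
    Returns the index of the chosen focus block."""
    best = max(values)
    candidates = [i for i, v in enumerate(values) if v == best]
    cx, cy = (grid_w - 1) / 2, (grid_h - 1) / 2

    def center_dist(i):
        x, y = i % grid_w, i // grid_w
        return (x - cx) ** 2 + (y - cy) ** 2

    return min(candidates, key=center_dist)

# 3x3 grid with two tied blocks: corner (index 0) and center (index 4);
# the central block wins the tie-break.
print(choose_focus_block([9, 1, 2, 3, 9, 4, 5, 6, 7], 3, 3))  # 4
```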
In the embodiment of the present application, the focus position of the target image may be determined when the focus evaluation value of the target image satisfies a preset condition. The determining the focus position of the target image according to the focus evaluation values of the plurality of image blocks comprises:
s801: determining a focus evaluation value of the target image according to the focus evaluation values of the image blocks;
s803: and determining the focusing position of the target image under the condition that the focusing evaluation value of the target image meets a preset condition.
In the embodiment of the present application, the focus evaluation value of the target image may include the sum, the average, or the like of the focus evaluation values of the plurality of image blocks. The preset condition includes at least one of the following: the focus evaluation value of the target image is greater than a preset evaluation value threshold; the focus evaluation value of the target image is greater than the focus evaluation value of the frame preceding the target image. In one embodiment, when the focus evaluation value of the target image is sufficiently large, a focus position is selected on the target image; otherwise, no focus position is selected. In another embodiment of the present application, the target image is only worth focusing on when its focus evaluation value keeps improving; if its focus evaluation value is lower than that of the previous frame or of historical images, no focus position may be selected on the target image.
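Steps S801 and S803 can be sketched as a simple gate. The sum is used as the image-level evaluation value, and the threshold value is an assumption for illustration:

```python
# Sketch of steps S801-S803: the image-level evaluation value is the sum
# of the block values; a focus position is only chosen when that value
# beats a preset threshold or the previous frame's value (the two preset
# conditions described in the text).

def should_focus(block_values, prev_value=None, threshold=10.0):
    """Return True when a focus position should be selected."""
    total = sum(block_values)
    if total > threshold:
        return True
    return prev_value is not None and total > prev_value

print(should_focus([3.0, 4.0, 5.0]))             # True  (12 > 10)
print(should_focus([1.0, 2.0], prev_value=2.5))  # True  (3 > 2.5)
print(should_focus([1.0, 2.0]))                  # False
```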
According to the focusing processing method provided by each embodiment of the application, the focusing position in the target image can be determined by focusing the target image of the shooting scene, and in terms of processing cost, the focusing processing method provided by the embodiment of the application can determine the focusing position by processing only one target image, so that the processing cost is low, and the efficiency is high. Specifically, in the actual process of focusing processing, the target image is divided into a plurality of image blocks, the focusing definition and the area attribute of each image block are determined, and the focusing evaluation value of each image block is determined according to the focusing definition and the area attribute. The area attribute is given to each image block, and the area attribute can be used as an important factor to influence whether the image block can be in the focusing position, so that the accuracy of determining the focusing position is improved. In addition, the image blocks with the dynamic region attributes in the target image are determined based on the target image and the change degree of the focusing definition of the image blocks in the previous and next image frames of the target image, so that the method has high identification efficiency and identification accuracy, and meets the requirement for identifying the dynamic regions in the shooting scene.
Embodiments of the present application also provide a focus processing apparatus, as shown in fig. 4, the focus processing module 400 may include:
an image obtaining module 401, configured to obtain a target image and divide the target image into a plurality of image blocks;
a block information obtaining module 403, configured to determine the focusing definitions and the region attributes of the multiple image blocks respectively, where the region attributes at least include a dynamic region, where the dynamic region is determined according to a change degree of the focusing definitions of the image blocks in previous and subsequent frame images;
an evaluation value determining module 405, configured to determine focus evaluation values of the plurality of image blocks according to the focus definitions and the region attributes, respectively;
a focus position determining module 407, configured to determine a focus position of the target image according to the focus evaluation values of the plurality of image partitions.
Embodiments of the present application further provide an image capturing apparatus, comprising a lens group, an image sensor, a memory, and a processor, wherein the memory stores a computer program, and the processor is configured to execute the computer program to perform the method of any of the above embodiments. The lens group may include a plurality of lenses (convex lenses or concave lenses) for collecting light signals reflected by a target object in a photographing scene and transferring the collected light signals to the image sensor. The image sensor generates an original image of the target object from the optical signal.
An embodiment of the present application further provides a processing device 700, where the processing device 700 may be a physical device or a physical device cluster, or may be a virtualized cloud device, such as at least one cloud computing device in a cloud computing cluster. For ease of understanding, the present application describes the structure of the processing device 700 using a single, separate physical device as an example.
As shown in fig. 5, the processing device 700 includes: a processor and a memory for storing processor-executable instructions, wherein the processor is configured to implement the above-described method when executing the instructions. The processing device 700 includes a memory 701, a processor 703, a bus 705, and a communication interface 707. The memory 701, the processor 703, and the communication interface 707 communicate over the bus 705. The bus 705 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in fig. 5, but this does not mean that there is only one bus or one type of bus. The communication interface 707 is used for communication with the outside.
The processor 703 may be a Central Processing Unit (CPU). The memory 701 may include a volatile memory (volatile memory), such as a Random Access Memory (RAM). The memory 701 may also include a non-volatile memory (non-volatile memory), such as a read-only memory (ROM), a flash memory, an HDD, or an SSD.
The memory 701 stores executable code, and the processor 703 executes the executable code to perform the focus processing method described above.
Embodiments of the present application provide a non-transitory computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
Embodiments of the present application provide a computer program product comprising computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, which when run in a processor of an electronic device, the processor in the electronic device performs the above method.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an erasable Programmable Read-Only Memory (EPROM or flash Memory), a Static Random Access Memory (SRAM), a portable Compact Disc Read-Only Memory (CD-ROM), a Digital Versatile Disc (DVD), a Memory stick, a floppy disk, a mechanical coding device, a punch card or an in-groove protrusion structure, for example, having instructions stored thereon, and any suitable combination of the foregoing.
The computer readable program instructions or code described herein may be downloaded to the respective computing/processing device from a computer readable storage medium, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present application may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C + + or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of Network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, the electronic circuitry can execute computer-readable program instructions to implement aspects of the present application by utilizing state information of the computer-readable program instructions to personalize custom electronic circuitry, such as Programmable Logic circuits, Field-Programmable Gate arrays (FPGAs), or Programmable Logic Arrays (PLAs).
In some embodiments, the disclosed methods may be implemented as computer program instructions encoded on a computer-readable storage medium in a machine-readable format or encoded on other non-transitory media or articles of manufacture. Fig. 6 schematically illustrates a conceptual partial view of an example computer program product comprising a computer program for executing a computer process on a computing device, arranged in accordance with at least some embodiments presented herein. In one embodiment, the example computer program product 800 is provided using a signal bearing medium 801. The signal bearing medium 801 may include one or more program instructions 802 that, when executed by one or more processors, may provide the functions or portions of the functions described above with respect to fig. 1. Further, program instructions 802 in FIG. 6 also describe example instructions.
In some examples, signal bearing medium 801 may include a computer readable medium 803, such as, but not limited to, a hard disk drive, a Compact Disc (CD), a Digital Video Disc (DVD), a digital tape, a Memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), and so forth. In some implementations, the signal bearing medium 801 may include a computer recordable medium 804 such as, but not limited to, a memory, a read/write (R/W) CD, a R/W DVD, and so forth. In some implementations, the signal bearing medium 801 may include a communication medium 805 such as, but not limited to, a digital and/or analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link, etc.). Thus, for example, the signal bearing medium 801 may be conveyed by a wireless form of communication medium 805 (e.g., a wireless communication medium that complies with the IEEE 802.11 standard or other transport protocol). The one or more program instructions 802 may be, for example, computer-executable instructions or logic-implementing instructions. In some examples, a computing device, such as the computing device described with respect to fig. 2, may be configured to provide various operations, functions, or actions in response to program instructions 802 conveyed to the computing device by one or more of computer-readable media 803, computer-recordable media 804, and/or communication media 805. It should be understood that the arrangements described herein are for illustrative purposes only. Thus, those skilled in the art will appreciate that other arrangements and other elements (e.g., machines, interfaces, functions, orders, and groupings of functions, etc.) can be used instead, and that some elements may be omitted altogether depending upon the desired results. 
In addition, many of the described elements are functional entities that may be implemented as discrete or distributed components or in conjunction with other components, in any suitable combination and location.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
It is also noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by hardware (e.g., a Circuit or an ASIC) for performing the corresponding function or action, or by combinations of hardware and software, such as firmware.
While the invention has been described in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a review of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the word "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
Having described embodiments of the present application, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (13)

1. A focus processing method, characterized by comprising:
acquiring a target image, and dividing the target image into a plurality of image blocks;
respectively determining the focusing definition and the area attribute of the image blocks, wherein the area attribute at least comprises a dynamic area, and the dynamic area is determined according to the change degree of the focusing definition of the image blocks in front and back frame images;
respectively determining the focus evaluation values of the image blocks according to the focus definition and the region attributes;
and determining the focusing position of the target image according to the focusing evaluation values of the image blocks.
2. The method of claim 1, wherein the region attribute of the dynamic region is determined as follows:
acquiring at least one image frame before and/or after the target image;
determining a degree of change in focus sharpness of image blocks at a same location in the target image and the at least one image frame;
and under the condition that the change degree is greater than a second preset threshold value, determining that the area attribute of the image block at the position in the target image is a dynamic area.
3. The method according to claim 1, wherein the determining the focus evaluation values of the plurality of image partitions according to the focus definitions and the region attributes respectively comprises:
determining weights of the image blocks according to the region attributes;
and respectively determining the focus evaluation values of the image blocks according to the focus definition and the weights.
4. The method according to claim 1, wherein the determining the focus evaluation values of the plurality of image partitions according to the focus definitions and the region attributes respectively comprises:
determining a shooting scene corresponding to the target image;
and respectively determining the focus evaluation values of the image blocks according to the focus definition, the region attribute and the shooting scene.
5. The method according to claim 4, wherein determining the focus evaluation values of the plurality of image partitions respectively according to the focus sharpness, the area attribute, and the shooting scene comprises:
respectively determining weight scores corresponding to different region attributes according to the shooting scene;
determining the weights of the image blocks according to the weight scores corresponding to the different region attributes;
and respectively determining the focus evaluation values of the image blocks according to the focus definition and the weights.
6. The method according to claim 1, wherein the determining the focus evaluation values of the plurality of image partitions according to the focus definitions and the region attributes respectively comprises:
determining a degree of dispersion of focus sharpness for the plurality of image patches;
and under the condition that the discrete degree is determined to be larger than a first preset threshold, respectively determining the focus evaluation values of the image blocks according to the focus definition and the region attributes.
7. The method of claim 1, wherein determining the focus position of the target image according to the focus evaluation values of the plurality of image partitions comprises:
determining a focus evaluation value of the target image according to the focus evaluation values of the image blocks;
and determining the focusing position of the target image under the condition that the focusing evaluation value of the target image meets a preset condition.
8. The method of claim 7, wherein the preset condition comprises at least one of:
the focus evaluation value of the target image is larger than a preset evaluation value threshold;
the focus evaluation value of the target image is larger than that of the previous frame of image on the target image.
9. A focus processing apparatus characterized by comprising:
the image acquisition module is used for acquiring a target image and dividing the target image into a plurality of image blocks;
the block information acquisition module is used for respectively determining the focusing definition and the area attribute of the plurality of image blocks, wherein the area attribute at least comprises a dynamic area, and the dynamic area is determined according to the change degree of the focusing definition of the image blocks in front and back frame images;
an evaluation value determining module, configured to determine focus evaluation values of the plurality of image blocks according to the focus sharpness and the region attribute, respectively;
and the focusing position determining module is used for determining the focusing position of the target image according to the focusing evaluation values of the image blocks.
10. An image pick-up device comprising a lens group, an image sensor, a memory and a processor, wherein the memory has stored therein a computer program, the processor being arranged to execute the computer program to perform the method of any one of claims 1-6.
11. A non-transitory computer readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method of any of claims 1-8.
12. A computer program product comprising computer readable code or a non-transitory computer readable storage medium carrying computer readable code which, when run in a processor of an electronic device, the processor in the electronic device performs the method of any of claims 1-8.
13. A chip comprising at least one processor for executing a computer program or computer instructions stored in a memory for performing the method of any of the preceding claims 1-8.
CN202210171470.9A 2022-02-24 2022-02-24 Focusing processing method and device, camera device and storage medium Active CN114245023B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210171470.9A CN114245023B (en) 2022-02-24 2022-02-24 Focusing processing method and device, camera device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210171470.9A CN114245023B (en) 2022-02-24 2022-02-24 Focusing processing method and device, camera device and storage medium

Publications (2)

Publication Number Publication Date
CN114245023A true CN114245023A (en) 2022-03-25
CN114245023B CN114245023B (en) 2022-06-03

Family

ID=80748035

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210171470.9A Active CN114245023B (en) 2022-02-24 2022-02-24 Focusing processing method and device, camera device and storage medium

Country Status (1)

Country Link
CN (1) CN114245023B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7221805B1 (en) * 2001-12-21 2007-05-22 Cognex Technology And Investment Corporation Method for generating a focused image of an object
US20090136148A1 (en) * 2007-11-26 2009-05-28 Samsung Electronics Co., Ltd. Digital auto-focusing apparatus and method
CN101840055A (en) * 2010-05-28 2010-09-22 Zhejiang University of Technology Video auto-focusing system based on embedded media processor
US20140085507A1 (en) * 2012-09-21 2014-03-27 Bruce Harold Pillman Controlling the sharpness of a digital image
CN107240092A (en) * 2017-05-05 2017-10-10 Zhejiang Dahua Technology Co., Ltd. Image blur detection method and device
WO2018059158A1 (en) * 2016-09-29 2018-04-05 Huawei Technologies Co., Ltd. Auto-focusing method and apparatus
WO2020019295A1 (en) * 2018-07-27 2020-01-30 SZ DJI Technology Co., Ltd. Image acquisition method, imaging apparatus, and photographing system
CN112601027A (en) * 2020-10-27 2021-04-02 Zhejiang Huachuang Video Signal Technology Co., Ltd. Automatic focusing method and device
CN113382155A (en) * 2020-03-10 2021-09-10 Zhejiang Uniview Technologies Co., Ltd. Automatic focusing method, device, equipment and storage medium
WO2021207945A1 (en) * 2020-04-14 2021-10-21 SZ DJI Technology Co., Ltd. Focusing control method, apparatus, and device, movable platform, and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
P. MATHIYALAGAN et al.: "Image Fusion Using Convolutional Neural Network with Bilateral Filtering", 2018 9th International Conference on Computing, Communication and Networking Technologies (ICCCNT) *
TIAN, WENLI: "Target Visual Focusing Algorithm Based on an Image Definition Evaluation Function and Automatic Window Focusing", Microcomputer Applications *

Also Published As

Publication number Publication date
CN114245023B (en) 2022-06-03

Similar Documents

Publication Publication Date Title
CN109035304B (en) Target tracking method, medium, computing device and apparatus
WO2019134504A1 (en) Method and device for blurring image background, storage medium, and electronic apparatus
JP2018533805A (en) Face position tracking method, device and electronic device
US9058655B2 (en) Region of interest based image registration
CN104333748A (en) Method, device and terminal for obtaining image main object
EP4226322A1 (en) Segmentation for image effects
CN106488215B (en) Image processing method and apparatus
JP2008165792A (en) Image processing method and device
CN113837079B (en) Automatic focusing method, device, computer equipment and storage medium of microscope
CN113255685B (en) Image processing method and device, computer equipment and storage medium
CN112602319B (en) Focusing device, method and related equipment
WO2008147724A2 (en) Methods, systems and apparatuses for motion detection using auto-focus statistics
CN113313626A (en) Image processing method, image processing device, electronic equipment and storage medium
CN114650361B (en) Shooting mode determining method, shooting mode determining device, electronic equipment and storage medium
CN113312949B (en) Video data processing method, video data processing device and electronic equipment
CN114245023B (en) Focusing processing method and device, camera device and storage medium
CN108764206B (en) Target image identification method and system and computer equipment
CN110689565A (en) Depth map determination method and device and electronic equipment
CN105467741A (en) Panoramic shooting method and terminal
CN111161211B (en) Image detection method and device
CN111767757B (en) Identity information determining method and device
US20170235728A1 (en) Information processing apparatus, method, program and storage medium
CN112822410B (en) Focusing method, focusing device, electronic device and storage medium
WO2022227916A1 (en) Image processing method, image processor, electronic device, and storage medium
JP6705486B2 (en) Object detection apparatus, system, method and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant