CN117221736A - Automatic regulating AI camera system for low-illumination clear collection - Google Patents

Automatic regulating AI camera system for low-illumination clear collection Download PDF

Info

Publication number
CN117221736A
CN117221736A CN202311486223.9A CN202311486223A CN117221736A CN 117221736 A CN117221736 A CN 117221736A CN 202311486223 A CN202311486223 A CN 202311486223A CN 117221736 A CN117221736 A CN 117221736A
Authority
CN
China
Prior art keywords
image
estimation
value
camera
real
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311486223.9A
Other languages
Chinese (zh)
Other versions
CN117221736B (en
Inventor
王威
廖峪
杨万兴
吴宗凯
王迎春
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Zhonggui Track Equipment Co ltd
Original Assignee
Chengdu Zhonggui Track Equipment Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Zhonggui Track Equipment Co ltd filed Critical Chengdu Zhonggui Track Equipment Co ltd
Priority to CN202311486223.9A priority Critical patent/CN117221736B/en
Publication of CN117221736A publication Critical patent/CN117221736A/en
Application granted granted Critical
Publication of CN117221736B publication Critical patent/CN117221736B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of image processing, in particular to an automatic adjustment AI camera system for low-illumination clear collection. The system comprises: a template image acquisition unit configured to acquire, as a template image group, as many images Zhang Liangqing degrees higher than a set value; a real-time image acquisition unit configured to acquire a real-time image of a target scene; an image restoration unit configured to perform image restoration based on the real-time image, obtain a restored image, and calculate a gray value of the restored image; an image quality evaluation unit configured to use the information entropy as a quality value of the restored image while calculating a brightness of the restored image; and an adaptive adjustment unit configured to adaptively adjust the aperture size and the focusing distance of the camera. The invention realizes the capability of capturing high-quality images under various illumination conditions by intelligently and adaptively adjusting the aperture size and the focusing distance, thereby enhancing the image quality and improving the operation efficiency.

Description

Automatic regulating AI camera system for low-illumination clear collection
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to an automatic adjustment AI camera system for low-illumination clear collection.
Background
Image acquisition in low-light environments has been a challenge in the field of imaging. In many application scenarios, such as night time monitoring, underground exploration, deep sea observation, etc., the light conditions may be extremely limited. In these cases, conventional image acquisition techniques may not provide satisfactory results and the image may appear blurred, distorted, or lost in detail.
Existing solutions typically rely on hardware improvements, such as the use of more sensitive image sensors, or the addition of light filling devices. On the other hand, there are also some software algorithms that try to improve the image quality in low light by post-processing. However, these methods have limitations. Hardware improvements may increase the complexity and cost of the system, while software post-processing may not fully recover all details of the image.
In addition, focusing and aperture adjustment in low light environments is also a complex problem. The automatic adjustment of the aperture and focus is typically dependent on the contrast and sharpness of the image, but under low light conditions these features may be difficult to accurately detect. Manual adjustment may be limited by operator experience and judgment, lacking consistency and repeatability.
In general, the problem of image acquisition in low-light environments is far from being satisfactorily solved. Existing hardware and software schemes are usually independent of each other and lack overall optimization. The automatic adjustment technique may fail under complex and varying light conditions. An integrated, adaptive solution remains an unmet need.
Disclosure of Invention
The invention mainly aims to provide an automatic adjusting AI camera system for collecting low-illumination brightness, which can realize the capability of capturing high-quality images under various illumination conditions by intelligently and adaptively adjusting the aperture size and the focusing distance, thereby enhancing the image quality and improving the operation efficiency.
In order to solve the problems, the technical scheme of the invention is realized as follows:
an automatically adjusted AI camera system for low-light clear acquisition, the system comprising: a template image acquisition unit configured to acquire, as a template image group, a plurality of images Zhang Liangqing degrees higher than a set value, and record a diaphragm size and a focusing distance at the time of acquiring each image; the brightness definition is defined as a weighted average of brightness and definition of an image; a real-time image acquisition unit configured to acquire a real-time image of a target scene; an image restoration unit configured to perform image restoration based on the real-time image, obtain a restored image, and calculate a gray value of the restored image; an image quality evaluation unit configured to calculate information entropy of the restored image based on the gray value of the restored image, take the information entropy as the quality value of the restored image, and calculate brightness of the restored image at the same time; the self-adaptive adjusting unit is configured to perform difference calculation on the calculated brightness of the restored image and the brightness of each image in the template image group, so as to obtain a difference set consisting of a plurality of differences, and calculate standard deviations of all differences in the difference set; based on the calculated standard deviation and the quality value of the restored image, the aperture size and the focusing distance of the camera are adaptively adjusted so that the quality value of the real-time image of the target scene acquired by the camera at the subsequent time exceeds a set first threshold value, and the brightness exceeds a set second threshold value.
Further, the method for calculating the definition of the image comprises the following steps: converting the image into a gray scale image; calculating a gradient value of each pixel in the gray level image by using a Sobel operator or a Prewitt filter to generate a gradient image, wherein each pixel represents the gradient size of a corresponding position in the original image; calculating gradient histogram features of the gradient image as the definition of the original image; the brightness of the image is the gray value of the gray image, and the higher the gray value is, the higher the original image brightness is.
Further, the image restoration unit performs image restoration based on the real-time image, and the method for obtaining the restored image includes:
step 1: the method for carrying out the initialization estimation on the real-time image specifically comprises the following steps: selecting a uniform initial point spread function as an initial fuzzy estimation of the real-time image, and taking the real-time image as an initial clear estimation of the image;
step 2: the image sharpness estimation method specifically comprises the following steps: fixing initial fuzzy estimation, and carrying out image definition estimation to obtain an intermediate result of the image definition estimation;
step 3: the image blurring estimation method specifically comprises the following steps: fixing initial clear estimation, and carrying out image fuzzy estimation to obtain an image fuzzy estimation intermediate result;
step 4: and performing image aberration value operation on the image sharpness estimation result and the image blurring estimation result to obtain a final recovered image.
Further, the method for fixing the initial blur estimation in the step 2 and performing image sharpness estimation to obtain an intermediate result of the image sharpness estimation includes: the following loss functions were minimized using the gradient descent method for image sharpness estimation:
wherein,is an initial clear estimate, +.>Is an initial blur estimate, +.>Is regularized item, +.>Is a regularization parameter;clearly estimating an intermediate result for the obtained image; />Is a convolution operation.
Further, the method for fixing the initial clear estimation in the step 3 and performing the image blur estimation to obtain the intermediate result of the image blur estimation includes: the following objective functions are minimized by the expectation maximization method for image blur estimation:
wherein,is the square of the L2 norm, +.>And->Is about->And->Regularized item of->Is a regularization parameter; />Intermediate results are estimated for the resulting image blur.
Further, the image quality evaluation unit calculates the information entropy of the restored image based on the gray value of the restored image, and the method includes: the information entropy of the restored image is calculated using the following formula:
wherein,is the information entropy of the image, < >>Is a pixel value +.>Probability of->Is a range of pixel values.
Further, the adaptive adjustment unit includes: a standard deviation calculating section, a diaphragm size adjusting section, and a focusing distance adjusting section; the standard deviation calculating part is configured to perform difference calculation on the calculated brightness of the restored image and the brightness of each image in the template image group, so as to obtain a difference set consisting of a plurality of differences, and calculate standard deviations of all differences in the difference set; the aperture size adjustment section adaptively adjusts the aperture size of the camera using a preset aperture adjustment model based on the calculated standard deviation and the quality value of the restored image; the focus distance adjusting section is configured to adaptively adjust a focus distance of the camera using a preset focus adjustment model based on the calculated standard deviation and a quality value of the restored image. So that the quality value of the real-time image of the target scene acquired by the camera at the subsequent time exceeds a set first threshold value, and the brightness exceeds a set second threshold value
Further, the aperture adjustment model is expressed using the following formula:
wherein,for the calculated adjusted aperture size, the aperture size adjustment is based in part on +.>The aperture size of the optical camera is adjusted; />Is a first threshold, ++>Is a second threshold, ++>The aperture size when acquiring a real-time image for a camera; />The value range is 10-20 for the first adjustment coefficient; />The value range of the second adjustment coefficient is 5-9.
Further, the focus adjustment model is expressed using the following formula:
wherein,for the calculated adjusted focus distance, the focus distance adjustment part is based on +.>Adjusting a focusing distance of the optical camera; />Is a first threshold, ++>Is a second threshold, ++>Focusing distance when acquiring real-time images for a camera; />The value range of the third adjustment coefficient is 0.6-0.95; />And the value range of the fourth adjustment coefficient is 1.5-3.
The automatic adjusting AI camera system for collecting the low-illumination clear light has the following beneficial effects:
enhancing image quality: the system can automatically adjust and optimize the image quality through advanced definition and fuzzy estimation algorithm, and can ensure that clear and accurate images can be captured in low illumination environment. This is particularly important in many critical applications, such as night monitoring, medical imaging, etc.
Intelligent self-adaptive adjustment: the system not only can automatically adjust the aperture size according to the brightness of the real-time image, but also can adaptively adjust the focusing distance. This integrated solution provides an all-round automatic control system that reduces the need for manual intervention while increasing response speed and accuracy.
The hardware requirements are reduced: traditional low-light image acquisition may rely on expensive hardware solutions. The invention can realize high-quality image acquisition on common hardware by advanced algorithm and self-adaptive control, thereby reducing the cost of the whole system.
Flexible and versatile: the invention is not only suitable for specific low-light application scenes, but also can work under various different light conditions. The universality enables the device to be widely applied to different fields and environments, and the advantages of the device can be exerted from industrial detection to outdoor exploration.
The operation efficiency is improved: manually adjusting the aperture and focusing can be a time consuming and skill intensive process. By automating these tasks, the operator can focus more on other more critical tasks, thereby improving overall operating efficiency and efficiency.
Drawings
Fig. 1 is a flowchart of a method for automatically adjusting an AI camera system for low-illuminance clear collection according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, a technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
Example 1: referring to fig. 1, an automatically adjusted AI camera system for low-light bright clear acquisition, the system comprising: a template image acquisition unit configured to acquire, as a template image group, a plurality of images Zhang Liangqing degrees higher than a set value, and record a diaphragm size and a focusing distance at the time of acquiring each image; the brightness definition is defined as a weighted average of brightness and definition of an image; a real-time image acquisition unit configured to acquire a real-time image of a target scene; an image restoration unit configured to perform image restoration based on the real-time image, obtain a restored image, and calculate a gray value of the restored image; an image quality evaluation unit configured to calculate information entropy of the restored image based on the gray value of the restored image, take the information entropy as the quality value of the restored image, and calculate brightness of the restored image at the same time; the self-adaptive adjusting unit is configured to perform difference calculation on the calculated brightness of the restored image and the brightness of each image in the template image group, so as to obtain a difference set consisting of a plurality of differences, and calculate standard deviations of all differences in the difference set; based on the calculated standard deviation and the quality value of the restored image, the aperture size and the focusing distance of the camera are adaptively adjusted so that the quality value of the real-time image of the target scene acquired by the camera at the subsequent time exceeds a set first threshold value, and the brightness exceeds a set second threshold value.
Specifically, the system further utilizes the template image group to compare and adjust the real-time images by combining the brightness and the definition into a brightness index of a weighted average value, thereby realizing the effective optimization of the image quality under the condition of low illumination. The self-adaptive adjusting unit is also an innovation point, and the aperture size and the focusing distance of the camera are dynamically adjusted by calculating standard deviation and quality values, so that the camera can obtain high-quality images under different environmental lights. This design may provide better adaptability and performance in automated scenarios, such as unmanned, robotic vision, etc.
The whole system continuously adjusts the physical parameters of the camera to adapt to different illumination environments through dynamic comparison with a group of template images. This not only can improve the image quality in low illumination environments, but also can provide stable performance in different application scenarios through a flexible adaptive mechanism.
The system described in this patent utilizes advanced image processing and analysis techniques to achieve fine-grained control of image quality as compared to conventional camera systems. By integrating a plurality of indexes such as information entropy, gray values, standard deviation and the like, the device can more accurately understand and respond to different illumination conditions, thereby realizing higher image quality and adaptability.
Example 2: the method for calculating the definition of the image comprises the following steps: converting the image into a gray scale image; calculating a gradient value of each pixel in the gray level image by using a Sobel operator or a Prewitt filter to generate a gradient image, wherein each pixel represents the gradient size of a corresponding position in the original image; calculating gradient histogram features of the gradient image as the definition of the original image; the brightness of the image is the gray value of the gray image, and the higher the gray value is, the higher the original image brightness is.
Specifically, in sharpness calculations, gray scale images allow the algorithm to focus on the brightness variation of the image by eliminating color information. This helps to detect edges and textures, which is the basis for evaluating image sharpness. Gradient values were calculated using the Sobel operator or Prewitt waver: both filters are spatial convolution filters for detecting edges in the image.
Sobel operator: the method is mainly used for edge detection and calculating the gradient strength and direction of each pixel point. The Sobel operator can obtain gradients in the horizontal and vertical directions after convolving with the image.
Wherein,is the original image, < >>Representing a convolution operation. The gradient strength and direction can be calculated by the following formula:
prewitt filter: similar to the Sobel operator, but using a different convolution kernel. The Prewitt is less sensitive to noise than Sobel.
The gradient image represents the gradient magnitude of each pixel position in the original image, and represents the edge information of the image. The sharpness index of the image can be obtained by analyzing the gradient histogram features of the gradient image. The histogram may reflect the distribution of the image gradients, providing a quantitative measure of the sharpness of the image. The brightness calculation gray value is the brightness of the image: the gray value of each pixel in a gray image directly reflects the brightness of the image. The higher the gray value, the higher the image brightness.
By combining gradient and gray scale analysis, the method provides a reliable way to quantify the sharpness and brightness of an image. The computation of sharpness takes the complexity of edges and textures into account, reflecting the richness of the details of the image. The calculation of the brightness captures the overall lighting conditions of the image. The two factors are combined into a weighted average to form the brightness index, and an innovative quality measurement and adjustment mechanism is provided for automatically adjusting the AI camera system.
Example 3: the image restoration unit performs image restoration based on the real-time image, and the method for obtaining the restored image comprises the following steps:
step 1: the method for carrying out the initialization estimation on the real-time image specifically comprises the following steps: selecting a uniform initial point spread function as an initial fuzzy estimation of the real-time image, and taking the real-time image as an initial clear estimation of the image; the point spread function (PointSpreadFunction, PSF) describes how a point object is spread and unfolded in an imaging system. Using a uniform PSF means that each point in the image is initially assumed to be uniformly diffused. This assumption provides an initial estimate of blur. Initially, there is no additional information about the sharpness of the real-time image, so it is reasonable to use the real-time image itself as the initial estimate.
Step 2: the image sharpness estimation method specifically comprises the following steps: fixing initial fuzzy estimation, and carrying out image definition estimation to obtain an intermediate result of the image definition estimation;
step 3: the image blurring estimation method specifically comprises the following steps: fixing initial clear estimation, and carrying out image fuzzy estimation to obtain an image fuzzy estimation intermediate result;
step 4: and performing image aberration value operation on the image sharpness estimation result and the image blurring estimation result to obtain a final recovered image. This approach accommodates different types of blurring and noise more flexibly by alternately estimating sharpness and blurring. This allows the system to achieve good recovery in a range of complex and dynamic environments.
Example 4: in the step 2, the initial fuzzy estimation is fixed, the image definition estimation is carried out, and the method for obtaining the intermediate result of the image definition estimation comprises the following steps: the following loss functions were minimized using the gradient descent method for image sharpness estimation:
wherein,is an initial clear estimate, +.>Is an initial blur estimate, +.>Is regularized item, +.>Is a regularization parameter;clearly estimating an intermediate result for the obtained image; />Is a convolution operation.
Specifically, the loss functionIs composed of the following components:
a first part:: the purpose of this part is to estimate +.>And its and initial blur estimation +.>The result of the convolution calculates the sharpness of the image. />Representing clear image +.>And (2) fuzzy core->Is a convolution of (a) and (b). This convolution operation simulates the process of image blurring. By minimizing +.>And->The difference between the two can find an image estimation with higher definition, so that the image estimation is as close as possible to the initial clear estimation after being convolved with the fuzzy core.
A second part:: this part is a regularization term whose purpose is to prevent overfitting, ensuring that the estimated sharp image has reasonable smoothness and natural appearance. />Is a regularization function including, for example, the sum of squares of image gradients, etc. />Is a regularization parameter for controlling the impact weight of the regularization term on the loss function. Greater->The value will enhance the smoothing effect, resulting in an excessively smooth estimation result; less->The values then lead to overfitting. Minimizing the loss function using gradient descent method can find the sharpness estimate +.>. Gradient descent is an iterative optimization algorithm by progressively updating +.>Approaching the minimum of the loss function. This step achieves sharpness estimation of the image by minimizing a loss function that comprehensively considers image reconstruction errors and smoothness. Using the gradient descent method, the system can find a set of appropriate parameters so that the reconstructed image can approximate the initial sharp estimate and have good smoothing characteristics. The method has a certain innovation in the fields of computer vision and image processing, and the image reconstruction and smoothness are simultaneously considered through the loss function, so that the problem of over-fitting is avoided, and the image recovery is more natural and accurate.
Example 5: in the step 3, the initial clear estimation is fixed, the image blurring estimation is carried out, and the method for obtaining the intermediate result of the image blurring estimation comprises the following steps: the following objective functions are minimized by the expectation maximization method for image blur estimation:
wherein,is the square of the L2 norm, +.>And->Is about->And->Regularized item of->Is a regularization parameter; />Intermediate results are estimated for the resulting image blur.
Specifically, objective functionIs composed of the following components:
a first part:: the purpose of this part is to find a blur kernel +.>So that it is +.>Is as close as possible to the original image>。/>Representing blur kernel->And clear image->Simulating the blurring process of the image. />Is the square of the L2 norm and is used to measure the error between the convolved image and the original image. By minimizing this error, a fuzzy kernel estimate can be obtained that can reasonably account for the observations.
A second part:: this part is a regularization term comprising +.>And fuzzy core->Is a regularization of (2). The purpose of regularization term is to ensure that the blur kernel found +.>And clear image->Has good properties such as smoothness or sparsity. This can prevent the over-fitting problem during the optimization process. Expectation Maximization (EM) method is an iterative optimization technique that is commonly used for estimation of probabilistic model parameters in statistics. The EM method is used to iteratively optimize an objective function, gradually finding the minimum of the objective function by alternately fixing one set of variables and optimizing the other set of variables. The method of step 3 performs fuzzy estimation by combining the reconstruction error and regularization term. In combination with the previous sharpness estimation step,this step provides a bi-directional optimized path for image recovery: on the one hand, clear image estimation is optimized, and on the other hand, fuzzy kernel estimation is optimized. By the alternate optimization method, the system can restore the original clear image more accurately, and meanwhile, the estimation of the blurring effect is considered. The main difference between the step and the prior art is that the method adopts a method for jointly optimizing the fuzzy core and the clear image, and realizes more accurate image recovery by considering the mutual dependency relationship between the fuzzy core and the clear image. The regularization term is introduced to further ensure rationality and robustness of knowledge, and the innovation and practicability of the method are enhanced.
Example 6: the image quality evaluation unit calculates information entropy of the restored image based on the gray value of the restored image, and the method comprises the following steps: the information entropy of the restored image is calculated using the following formula:
wherein,is the information entropy of the image, < >>Is a pixel value +.>Probability of->Is a range of pixel values.
In particular, the method comprises the steps of,: information entropy, and the richness of image information is measured. The greater the entropy of information, the higher the complexity of the image. />: summing the sign, traversing all pixel values, wherein +.>Is a range of pixel values. For example, for an 8-bit gray scale image, +.>I.e. the pixel value ranges from 0 to 255./>: this section does not give an explicit explanation in the description, but appears as a weighting factor in relation to the information entropy calculation. Wherein->Is a parameter used to adjust the calculation of the information entropy. />: the pixel value is +.>Is, the gray value is +.>The proportion of pixels in the whole image. />: logarithm of 2 as base reflects pixel value +.>Is used for the information amount of the (a). The calculation of the information entropy reflects the complexity and the information richness of the image. In image processing, information entropy is often used to evaluate image quality. For example, high entropy images typically contain many different gray levels and complex textures, while low entropy images appear flatter and more blurred. This method evaluates image quality by calculating information entropy, which is common in many image processing techniques. However, the specific formula will vary from application to application and from need to need. The weighting coefficients here ∈ ->Provides the method with a certain kindThe specific adjustment capability enables it to more accurately reflect a specific type of image quality.
Example 7: the adaptive adjustment unit includes: a standard deviation calculating section, a diaphragm size adjusting section, and a focusing distance adjusting section; the standard deviation calculating part is configured to perform difference calculation on the calculated brightness of the restored image and the brightness of each image in the template image group, so as to obtain a difference set consisting of a plurality of differences, and calculate standard deviations of all differences in the difference set; the aperture size adjustment section adaptively adjusts the aperture size of the camera using a preset aperture adjustment model based on the calculated standard deviation and the quality value of the restored image; the focus distance adjusting section is configured to adaptively adjust a focus distance of the camera using a preset focus adjustment model based on the calculated standard deviation and a quality value of the restored image. So that the quality value of the real-time image of the target scene acquired by the camera at the subsequent time exceeds a set first threshold value, and the brightness exceeds a set second threshold value.
Example 8: the aperture adjustment model is expressed using the following formula:
wherein,for the calculated adjusted aperture size, the aperture size adjustment is based in part on +.>The aperture size of the optical camera is adjusted; />Is a first threshold, ++>Is a second threshold, ++>Acquisition for cameraAperture size at real-time image; />The value range is 10-20 for the first adjustment coefficient; />The value range of the second adjustment coefficient is 5-9.
In particular, the aperture adjustment model uses a combination of information entropyTwo thresholds->And->Current aperture size->And two adjustment coefficients +>And->Is a complex formula of (a). In the following i will analyze the components of this model, explaining how they work together to adjust the aperture size. Influence of information entropy: information entropy->Typically for measuring the complexity of the image. An image with higher entropy typically contains more detail and contrast. And a first threshold valueThe logarithm of the difference of (2) represents the deviation from the ideal case, which is multiplied by the current aperture size +.>And a first adjustment factor->To calculate the adjustment amount. Effects of other parameters: similarly, the second part of the expression uses a second threshold +.>And another variable->(representing other image quality metrics) to calculate a second adjustment. This section also takes into account the current aperture size +.>And a second adjustment factor->. Adjustment coefficient: parameter->And->Is an adjustment coefficient that controls the sensitivity of the aperture adjustment. These values may be adjusted according to the needs of a particular application or scenario. For example, if the camera needs to be very sensitive to light changes, then +.>And->Is a higher value of (2). Threshold value: />And->Are preset thresholds that can be set according to the required image quality or the requirements of a particular scene. The difference between these thresholds and the actual image quality metric (e.g., entropy) represents the degree of necessity of adjustment. This aperture adjustment model integrates the information entropy and the current light of the imageThe aperture is dynamically adjusted by the size of the aperture and some preset parameters. Such a design allows it to flexibly respond to different image capturing conditions and requirements. By using both log and absolute value operations, the model can maintain stable operation in various situations while providing sufficient sensitivity to capture subtle changes in image quality. This composite model allows for precise control and flexible adjustment, making it suitable for use in a variety of complex and dynamic image capturing scenarios, such as low light environments, fast changing light conditions, and the like.
Example 9: the focus adjustment model is expressed using the following formula:
wherein,for the calculated adjusted focus distance, the focus distance adjustment part is based on +.>Adjusting a focusing distance of the optical camera; />Is a first threshold, ++>Is a second threshold, ++>Focusing distance when acquiring real-time images for a camera; />The value range of the third adjustment coefficient is 0.6-0.95; />And the value range of the fourth adjustment coefficient is 1.5-3.
In particular, the focusing distance: the adjusted focusing distance is calculated by the combination of the following two parts.The method comprises the steps of carrying out a first treatment on the surface of the This part uses the information entropy of the image +.>Subtracting a preset first threshold +.>And enhances the effect by exponential operations. Then, this value is divided by the current focus distance +.>And multiplying by the adjustment factor->. This part may reflect the effect of image complexity or detail level on focus distance. A second part: />The method comprises the steps of carrying out a first treatment on the surface of the This part is combined with other image quality indicators +.>Is related to, and is associated with, a second threshold +.>And (5) adding. This value is divided by the current focus distance +.>Then multiplying the current aperture size +.>And adjustment coefficient->. This part may reflect the effect of other image quality parameters on the focus distance. Threshold and adjustment coefficient: />And->Is a threshold value for comparison with the image quality metric to determine if an adjustment is required. Adjustment coefficient->And->Allowing you to fine tune the focus adjustment effect according to the actual needs. Current focus distance: the focal distance when the camera acquires the real-time image. Used as a normalization factor to ensure that the adjustment is consistent with the dimensions of the current focus setting.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (9)

1. Automatically adjusted AI camera system for low light clear collection, the system comprising: a template image acquisition unit configured to acquire, as a template image group, a plurality of images Zhang Liangqing degrees higher than a set value, and record a diaphragm size and a focusing distance at the time of acquiring each image; the brightness definition is defined as a weighted average of brightness and definition of an image; a real-time image acquisition unit configured to acquire a real-time image of a target scene; an image restoration unit configured to perform image restoration based on the real-time image, obtain a restored image, and calculate a gray value of the restored image; an image quality evaluation unit configured to calculate information entropy of the restored image based on the gray value of the restored image, take the information entropy as the quality value of the restored image, and calculate brightness of the restored image at the same time; the self-adaptive adjusting unit is configured to perform difference calculation on the calculated brightness of the restored image and the brightness of each image in the template image group, so as to obtain a difference set consisting of a plurality of differences, and calculate standard deviations of all differences in the difference set; based on the calculated standard deviation and the quality value of the restored image, the aperture size and the focusing distance of the camera are adaptively adjusted so that the quality value of the real-time image of the target scene acquired by the camera at the subsequent time exceeds a set first threshold value, and the brightness exceeds a set second threshold value.
2. The self-adjusting AI camera system of claim 1 wherein the method of computing sharpness of the image comprises: converting the image into a gray scale image; calculating a gradient value of each pixel in the gray level image by using a Sobel operator or a Prewitt filter to generate a gradient image, wherein each pixel represents the gradient size of a corresponding position in the original image; calculating gradient histogram features of the gradient image as the definition of the original image; the brightness of the image is the gray value of the gray image, and the higher the gray value is, the higher the original image brightness is.
3. The self-adjusting AI camera system of claim 2 wherein the image restoration unit performs image restoration based on a real-time image, the method of obtaining a restored image comprising:
step 1: the method for carrying out the initialization estimation on the real-time image specifically comprises the following steps: selecting a uniform initial point spread function as an initial fuzzy estimation of the real-time image, and taking the real-time image as an initial clear estimation of the image;
step 2: the image sharpness estimation method specifically comprises the following steps: fixing initial fuzzy estimation, and carrying out image definition estimation to obtain an intermediate result of the image definition estimation;
step 3: the image blurring estimation method specifically comprises the following steps: fixing initial clear estimation, and carrying out image fuzzy estimation to obtain an image fuzzy estimation intermediate result;
step 4: and performing image aberration value operation on the image sharpness estimation result and the image blurring estimation result to obtain a final recovered image.
4. The system of claim 3, wherein the method for fixing the initial blur estimation in step 2, performing the image sharpness estimation, and obtaining the intermediate result of the image sharpness estimation comprises: the following loss functions were minimized using the gradient descent method for image sharpness estimation:
wherein,is an initial clear estimate, +.>Is an initial blur estimate, +.>Is regularized item, +.>Is a regularization parameter; />Clearly estimating an intermediate result for the obtained image; />Is a convolution operation.
5. The system of automatically adjusting AI camera for low-light clear collection according to claim 4, wherein the method of fixing the initial clear estimate in step 3, performing the image blur estimation, and obtaining the intermediate result of the image blur estimation comprises: the following objective functions are minimized by the expectation maximization method for image blur estimation:
wherein,is the square of the L2 norm, +.>And->Is about->And->Regularized item of->Is a regularization parameter; />Intermediate results are estimated for the resulting image blur.
6. The self-adjusting AI camera system of claim 5, wherein the image quality evaluation unit calculates an entropy of information of the restored image based on a gray value of the restored image, the method comprising: the information entropy of the restored image is calculated using the following formula:
wherein,is the information entropy of the image, < >>Is a pixel value +.>Probability of->Is a range of pixel values.
7. The self-adjusting AI camera system of claim 6 wherein the adaptive adjustment unit includes: a standard deviation calculating section, a diaphragm size adjusting section, and a focusing distance adjusting section; the standard deviation calculating part is configured to perform difference calculation on the calculated brightness of the restored image and the brightness of each image in the template image group, so as to obtain a difference set consisting of a plurality of differences, and calculate standard deviations of all differences in the difference set; the aperture size adjustment section adaptively adjusts the aperture size of the camera using a preset aperture adjustment model based on the calculated standard deviation and the quality value of the restored image; the focusing distance adjusting part is configured to adaptively adjust the focusing distance of the camera by using a preset focusing adjustment model based on the calculated standard deviation and the quality value of the restored image, so that the quality value of the real-time image of the target scene acquired by the camera at the subsequent time exceeds a set first threshold value, and the brightness exceeds a set second threshold value.
8. The low-light clear acquisition self-adjusting AI camera system of claim 7 wherein the aperture adjustment model is expressed using the formula:
wherein,for the calculated adjusted aperture size, the aperture size adjustment is based in part on +.>The aperture size of the optical camera is adjusted; />Is a first threshold, ++>Is a second threshold, ++>The aperture size when acquiring a real-time image for a camera; />The value range is 10-20 for the first adjustment coefficient; />The value range is 5-9 for the second adjustment coefficient; />Is the gradient strength.
9. The low-light, clear-to-clear acquisition auto-adjust AI camera system of claim 8 wherein the focus adjustment model is expressed using the formula:
wherein,for the calculated adjusted focusing distance, the focusing distance adjusting part basesIn->Adjusting a focusing distance of the optical camera; />Is a first threshold, ++>Is a second threshold, ++>Focusing distance when acquiring real-time images for a camera; />The value range of the third adjustment coefficient is 0.6-0.95; />And the value range of the fourth adjustment coefficient is 1.5-3.
CN202311486223.9A 2023-11-09 2023-11-09 Automatic regulating AI camera system for low-illumination clear collection Active CN117221736B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311486223.9A CN117221736B (en) 2023-11-09 2023-11-09 Automatic regulating AI camera system for low-illumination clear collection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311486223.9A CN117221736B (en) 2023-11-09 2023-11-09 Automatic regulating AI camera system for low-illumination clear collection

Publications (2)

Publication Number Publication Date
CN117221736A true CN117221736A (en) 2023-12-12
CN117221736B CN117221736B (en) 2024-01-26

Family

ID=89046703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311486223.9A Active CN117221736B (en) 2023-11-09 2023-11-09 Automatic regulating AI camera system for low-illumination clear collection

Country Status (1)

Country Link
CN (1) CN117221736B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5387930A (en) * 1990-05-25 1995-02-07 European Visions Systems Centre Limited Electronic image acquistion system with image optimization by intensity entropy analysis and feedback control
JPH09298681A (en) * 1996-04-26 1997-11-18 Kokusai Electric Co Ltd Image input device
CN104284095A (en) * 2014-10-28 2015-01-14 福建福光数码科技有限公司 Quick and automatic focusing method and system for long-focus visible-light industrial lens
CN107767353A (en) * 2017-12-04 2018-03-06 河南工业大学 A kind of adapting to image defogging method based on definition evaluation
CN108259759A (en) * 2018-03-20 2018-07-06 北京小米移动软件有限公司 focusing method, device and storage medium
CN108932700A (en) * 2018-05-17 2018-12-04 常州工学院 Self-adaption gradient gain underwater picture Enhancement Method based on target imaging model
CN110312083A (en) * 2019-08-28 2019-10-08 今瞳半导体技术(上海)有限公司 Cruise regulating device and method, hardware accelerator certainly for automatic exposure
US20210176388A1 (en) * 2018-06-26 2021-06-10 Gopro, Inc. Entropy maximization based auto-exposure
CN113206949A (en) * 2021-04-01 2021-08-03 广州大学 Semi-direct monocular vision SLAM method based on entropy weighted image gradient
CN114845042A (en) * 2022-03-14 2022-08-02 南京大学 Camera automatic focusing method based on image information entropy
CN116112795A (en) * 2023-04-13 2023-05-12 北京城建智控科技股份有限公司 Adaptive focusing control method, camera and storage medium
CN116805353A (en) * 2023-08-21 2023-09-26 成都中轨轨道设备有限公司 Cross-industry universal intelligent machine vision perception method

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5387930A (en) * 1990-05-25 1995-02-07 European Visions Systems Centre Limited Electronic image acquistion system with image optimization by intensity entropy analysis and feedback control
JPH09298681A (en) * 1996-04-26 1997-11-18 Kokusai Electric Co Ltd Image input device
CN104284095A (en) * 2014-10-28 2015-01-14 福建福光数码科技有限公司 Quick and automatic focusing method and system for long-focus visible-light industrial lens
CN107767353A (en) * 2017-12-04 2018-03-06 河南工业大学 A kind of adapting to image defogging method based on definition evaluation
CN108259759A (en) * 2018-03-20 2018-07-06 北京小米移动软件有限公司 focusing method, device and storage medium
CN108932700A (en) * 2018-05-17 2018-12-04 常州工学院 Self-adaption gradient gain underwater picture Enhancement Method based on target imaging model
US20210176388A1 (en) * 2018-06-26 2021-06-10 Gopro, Inc. Entropy maximization based auto-exposure
CN110312083A (en) * 2019-08-28 2019-10-08 今瞳半导体技术(上海)有限公司 Cruise regulating device and method, hardware accelerator certainly for automatic exposure
CN113206949A (en) * 2021-04-01 2021-08-03 广州大学 Semi-direct monocular vision SLAM method based on entropy weighted image gradient
CN114845042A (en) * 2022-03-14 2022-08-02 南京大学 Camera automatic focusing method based on image information entropy
CN116112795A (en) * 2023-04-13 2023-05-12 北京城建智控科技股份有限公司 Adaptive focusing control method, camera and storage medium
CN116805353A (en) * 2023-08-21 2023-09-26 成都中轨轨道设备有限公司 Cross-industry universal intelligent machine vision perception method

Also Published As

Publication number Publication date
CN117221736B (en) 2024-01-26

Similar Documents

Publication Publication Date Title
CN111968044B (en) Low-illumination image enhancement method based on Retinex and deep learning
CN108259774B (en) Image synthesis method, system and equipment
CN111986120A (en) Low-illumination image enhancement optimization method based on frame accumulation and multi-scale Retinex
US11303793B2 (en) System and method for high-resolution, high-speed, and noise-robust imaging
JP7256902B2 (en) Video noise removal method, apparatus and computer readable storage medium
US20100150401A1 (en) Target tracker
CN111062293B (en) Unmanned aerial vehicle forest flame identification method based on deep learning
CN109086675B (en) Face recognition and attack detection method and device based on light field imaging technology
CN113129236B (en) Single low-light image enhancement method and system based on Retinex and convolutional neural network
CN111064904A (en) Dark light image enhancement method
CN105100632A (en) Adjusting method and apparatus for automatic exposure of imaging device, and imaging device
JPWO2017047494A1 (en) Image processing device
KR20130123525A (en) Image processing apparatus for image haze removal and method using that
Guthier et al. Flicker reduction in tone mapped high dynamic range video
CN116167932A (en) Image quality optimization method, device, equipment and storage medium
Wen et al. Autonomous robot navigation using Retinex algorithm for multiscale image adaptability in low-light environment
Bijelic et al. Recovering the Unseen: Benchmarking the Generalization of Enhancement Methods to Real World Data in Heavy Fog.
CN113362253B (en) Image shading correction method, system and device
CN117221736B (en) Automatic regulating AI camera system for low-illumination clear collection
CN117916765A (en) System and method for non-linear image intensity transformation for denoising and low precision image processing
CN112651945A (en) Multi-feature-based multi-exposure image perception quality evaluation method
Zhou et al. Improving lens flare removal with general-purpose pipeline and multiple light sources recovery
WO2023215371A1 (en) System and method for perceptually optimized image denoising and restoration
CN111652821A (en) Low-light-level video image noise reduction processing method, device and equipment based on gradient information
US20230194847A1 (en) Microscopy System and Method for Modifying Microscope Images in the Feature Space of a Generative Network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant