CN115314627A - Image processing method, system and camera

Info

Publication number: CN115314627A
Application number: CN202110501398.7A
Authority: CN (China)
Prior art keywords: image, weight, weight map, processing, frames
Legal status: Granted; Active (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN115314627B
Inventors: 谢建磊, 范蒙, 於敏杰
Assignee (original and current): Hangzhou Hikvision Digital Technology Co Ltd
Application filed by Hangzhou Hikvision Digital Technology Co Ltd; priority to CN202110501398.7A

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application provides an image processing method, a system, and a camera. The image processing method comprises the following steps: acquiring the output image of a single image sensor in a first working mode, the output image comprising N consecutive frames of a first image, and performing first processing on the N consecutive frames of the first image to obtain a second image, wherein the first processing comprises weighting processing, the signal-to-noise ratio of the second image is greater than that of the first image, and N is a positive integer greater than 1; and performing second processing on the first image and the second image to obtain a third image, wherein the second processing comprises at least synthesis processing. The image processing method can improve the quality of the output image.

Description

Image processing method, system and camera
Technical Field
The present application relates to the field of data processing technologies, and in particular, to an image processing method and system, and a camera.
Background
With the development of image sensor technology and image processing technology, mainstream image processing systems can obtain high-quality images for scenes with sufficient illumination. In a low-illumination environment, however, the obtained image tends to suffer from low brightness and heavy noise. Mainstream image processing systems increase the amount of captured light by lengthening the exposure time, which raises image brightness and reduces image noise; however, a longer exposure time aggravates the smear of moving regions in the image and lowers the frame rate of the video stream, which can cause video stuttering.
Improving image quality in a low-illumination environment has therefore become an urgent technical problem.
Disclosure of Invention
In view of this, the present application provides an image processing method, system and camera.
Specifically, the method is realized through the following technical scheme:
according to a first aspect of embodiments of the present application, there is provided an image processing method, including:
acquiring an output image of a single image sensor in a first working mode, wherein the output image comprises N continuous first images, and performing first processing on the N continuous first images to obtain a second image; the first process comprises a weighting process; the signal-to-noise ratio of the second image is greater than that of the first image, wherein N is a positive integer greater than 1;
and performing second processing on the first image and the second image to obtain a third image, wherein the second processing at least comprises synthesis processing.
According to a second aspect of embodiments of the present application, there is provided an image processing system including: an image sensor, an image generation unit, and a processing unit; wherein:
the image sensor is used for outputting a first image in a first working mode;
the image generating unit is used for carrying out first processing on N continuous frames of first images to obtain second images; the first process comprises a weighting process; the signal-to-noise ratio of the second image is greater than that of the first image, wherein N is a positive integer greater than 1;
and the processing unit is used for processing the first image and the second image to obtain a third image, and the processing at least comprises synthesis processing.
According to a third aspect of embodiments of the present application, there is provided a camera including: the camera comprises a lens, an image sensor, a processor and a memory; wherein:
the lens is used for processing incident light into a light signal incident to the image sensor;
the image sensor is used for outputting an image in a first working mode, and the output image comprises a first image;
the processor is used for carrying out first processing on N continuous frames of first images to obtain second images; the first process comprises a weighting process; the signal-to-noise ratio of the second image is greater than that of the first image, wherein N is a positive integer greater than 1;
the memory is used for storing the first image and the second image;
the processor is further configured to perform second processing on the first image and the second image to obtain a third image, where the second processing at least includes synthesis processing.
According to the image processing method, a second image with a higher signal-to-noise ratio is obtained by performing first processing on N consecutive frames of first images output by a single image sensor, and the first image and the second image are processed to obtain a third image, so that the quality of the output image is improved; in addition, since the second image is obtained by processing the first image, the second image is no longer limited by the video frame rate, and a second image with a higher signal-to-noise ratio than the first image can be acquired even at the lowest frame rate.
Drawings
FIG. 1 is a schematic flow chart of an image processing method according to an exemplary embodiment of the present application;
FIG. 2A is a block diagram of an image processing system according to an exemplary embodiment of the present application;
FIGS. 2B to 2D are schematic structural diagrams of image processing systems with different weight map determination manners according to exemplary embodiments of the present application;
FIGS. 3A to 3B are schematic diagrams of generating a second image by weighting at least two frames of the first image according to an exemplary embodiment of the present application;
FIGS. 4A to 4H are schematic diagrams of the weight calculation unit performing filtering processing according to pixel value differences to obtain a weight map according to an exemplary embodiment of the present application;
FIG. 5 is a schematic diagram of the weight calculation unit using a motion detection model to obtain a weight map according to an exemplary embodiment of the present application;
FIGS. 6A to 6C are schematic diagrams of the weight calculation unit using a target detection model to obtain a weight map according to an exemplary embodiment of the present application;
FIG. 7 is a block diagram of an image processing system according to an exemplary embodiment of the present application;
FIG. 8 is a block diagram of an image processing apparatus according to an exemplary embodiment of the present application;
FIG. 9 is a schematic diagram of a hardware configuration of an electronic device according to an exemplary embodiment of the present application;
FIG. 10 is a schematic diagram of a camera according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatuses and methods consistent with some aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
In order to make the technical solutions provided in the embodiments of the present application better understood and make the above objects, features and advantages of the embodiments of the present application more comprehensible, the technical solutions in the embodiments of the present application are described in further detail below with reference to the accompanying drawings.
Referring to fig. 1, a schematic flow chart of an image processing method provided in an embodiment of the present application is shown, and as shown in fig. 1, the image processing method may include the following steps:
s100, acquiring an output image of a single image sensor in a first working mode, wherein the output image comprises N continuous first images, and performing first processing on the N continuous first images to obtain a second image; the first process includes a weighting process; the signal-to-noise ratio of the second image is greater than the signal-to-noise ratio of the first image, wherein N is a positive integer greater than 1.
In the embodiment of the present application, consider a scheme that acquires long-frame and short-frame images: when the exposure time of the short-frame image reaches 1/(2 × f), a long-frame image with an exposure time longer than that of the short-frame image cannot be acquired because of the limitation of the video frame rate, and therefore a long-frame image with a signal-to-noise ratio higher than that of the short-frame image cannot be acquired either.
Illustratively, f is the video frame rate (i.e., the frame rate of the composite image).
In order to break through the frame rate limitation, improve the quality of the image output based on a single image sensor, and reduce the requirements placed on the image sensor for outputting a high-quality image, a process (referred to as first processing herein) may be performed on the N consecutive frames of the first image output by the single image sensor in the first working mode, to obtain a second image having a signal-to-noise ratio greater than that of the first image. The second image is no longer limited by the video frame rate, and a second image with a higher signal-to-noise ratio than the first image can be obtained even at the lowest frame rate.
Illustratively, the first processing may include weighting processing, that is, the second image may be obtained by weighting processing N consecutive first images.
For example, the signal-to-noise ratio of the second image being greater than the signal-to-noise ratio of the first image may include the signal-to-noise ratio of non-target regions of the second image being greater than the signal-to-noise ratio of non-target regions of the first image.
For example, the target area may include, but is not limited to, an object motion area in the image, a key target area such as a pedestrian, a vehicle, an animal, or a signal light in the image.
Step S110, performing second processing on the first image and the second image to obtain a third image, where the second processing at least includes synthesis processing.
In the embodiment of the present application, the first image and the second image may be processed (referred to as a second process herein), for example, the first image and the second image are subjected to a synthesis process to obtain a third image, so as to improve the quality of the image output by the image sensor.
In the method flow shown in fig. 1, a second image with a higher signal-to-noise ratio is obtained by performing first processing on N consecutive frames of first images output by a single image sensor, and the first image and the second image are processed to obtain a third image, so that the quality of the output image is improved; in addition, because the second image is obtained by processing the first image, the second image is no longer limited by the video frame rate, and a second image with a higher signal-to-noise ratio can be acquired even at the lowest frame rate.
And moreover, a second image with higher signal-to-noise ratio is obtained by processing the first image output by the image sensor, and two images with different signal-to-noise ratios are generated by a single image sensor, so that the structure of the imaging system is simplified.
In some embodiments, the weighting processing is performed on M frames of the first image to obtain the second image, where M is a positive integer greater than 1 and less than or equal to N.
For example, the second image may be obtained by weighting M frames of the N consecutive first images.
For example, the number of frames of the second image obtained by weighting the M frames of the first image may be less than or equal to M frames.
For example, in N consecutive frames of first images, every two frames of first images are weighted to obtain a frame of second image.
For another example, the first frame of the first image in the N consecutive frames may be used as the first frame of the second image, the first two frames of the first image may be weighted to obtain the second frame of the second image, the first three frames of the first image may be weighted to obtain the third frame of the second image, and so on.
In an example, the weighting processing may be performed on the M frames of the first image according to M frames of a first weight map to obtain the second image.
For example, to implement the weighting process for the first image, a weight map (referred to as a first weight map herein) for weighting the first image to obtain the second image may be obtained.
For example, the second image may be obtained by performing weighting processing on the M frames of the first image according to the M frames of the first weight map.
In one example, the configuration weight of each pixel position in the M frames of the first weight map is set in advance.
For example, in order to improve the efficiency of the weighting processing for the first image, the configuration weight of each pixel position in the M frames of the first weight map may be set in advance.
When N consecutive frames of the first image are obtained, weighting processing may be performed on M frames of the first image among the N consecutive frames according to the preset first weight maps, so as to obtain the second image.
As an example, when the configuration weights of the pixel positions in the M frames of the first weight map are preset, for any one frame of the first weight map among the M frames, the configuration weights of all pixel positions within that frame are equal in size.
For example, making the configuration weights within each frame of the first weight map equal in this way improves the efficiency of the weighting processing on the first image and reduces the difficulty of setting the first weight map.
For example, the configuration weights in different first weight maps may be the same or different.
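As a minimal illustrative sketch (assuming NumPy arrays and uniform per-frame configuration weights; the function and parameter names are not from the application), the weighting in this preset case reduces to a weighted average over the M frames of the first image:

```python
# Illustrative sketch only: weighting M consecutive frames of the first image
# with preset first weight maps whose configuration weights are uniform within
# each frame, so each weight map reduces to a single scalar per frame.
import numpy as np

def weighted_second_image(first_images, frame_weights):
    """first_images: list of M H x W arrays; frame_weights: M preset scalars."""
    weights = np.asarray(frame_weights, dtype=np.float32)
    weights = weights / weights.sum()                   # normalize the preset weights
    stack = np.stack(first_images).astype(np.float32)   # M x H x W
    second = np.tensordot(weights, stack, axes=1)       # weighted average over frames
    return second
```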
In another example, the M frames of the first weight map are obtained according to the pixel value relationship between the M frames of the first image.
For example, in order to make the first weight map more reasonable and to improve the quality of the image obtained by weighting according to the first weight map, the M frames of the first weight map may be determined according to the pixel value relationship between the M frames of the first image.
As an example, obtaining the first weight map by calculating pixel value differences between the M frames of the first image includes obtaining the first weight map based on each pixel value difference between the M frames of the first image.
For example, the first weight map may be obtained by calculating pixel value differences between M frames of the first images, that is, obtaining the first weight map according to each pixel value difference between the M frames of the first images.
As another example, obtaining the first weight map by calculating differences in pixel values between the M frames of the first image includes obtaining the first weight map based on differences in mean values of a plurality of pixel values in a designated area between the M frames of the first image.
For example, in order to improve the pertinence of the weighting process and improve the effect of the weighting process on the image quality, the first weight map may be obtained according to a mean difference of a plurality of pixel values in a specified area between the M frames of first images.
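A hedged sketch of this variant is given below (assumptions: NumPy and SciPy are available, the "designated area" is a k × k neighbourhood, the current frame serves as the reference, and the mapping from mean difference to weight is one plausible thresholding choice rather than the one claimed by the application):

```python
# Sketch: derive per-frame first weight maps from the mean difference of pixel
# values within a k x k designated area, relative to the current (reference) frame.
import numpy as np
from scipy.ndimage import uniform_filter

def first_weight_maps(first_images, k=5, thresh=10.0):
    ref_mean = uniform_filter(first_images[-1].astype(np.float32), size=k)
    weight_maps = []
    for img in first_images:
        mean = uniform_filter(img.astype(np.float32), size=k)
        diff = np.abs(mean - ref_mean)       # mean difference in the designated area
        # large local difference (motion) -> small weight; small difference -> large weight
        weight_maps.append(np.clip(1.0 - diff / thresh, 0.0, 1.0))
    return weight_maps
```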
In some embodiments, the image processing method provided in the embodiments of the present application may further include:
when the image sensor is switched from the first working mode to the second working mode, an output image of the image sensor in the second working mode is obtained, and the output image is a fourth image.
For example, in order to improve the flexibility of the operation of the image sensor, a plurality of different operation modes can be set for the image sensor, and the image sensor can be controlled to switch among the plurality of operation modes according to the actual scene.
For example, when the operation mode of the image sensor includes a first operation mode and a second operation mode, the image sensor may be controlled to generate and output a fourth image when the image sensor is switched from the first operation mode to the second operation mode.
For example, when the image sensor operates in the second operation mode, the synthesis process for the fourth images of two or more consecutive frames output by the image sensor may not be required.
As an example, the switching between the first operation mode and the second operation mode is performed based on the intensity of the ambient light.
For example, under different ambient light intensities the image sensor needs to generate images with different exposure parameters to guarantee image quality. When the ambient light is strong, the image generated by the image sensor can generally meet the quality requirement; when the ambient light is weak, the image quality may be poor, and processing in the manner described in the above embodiments is needed to improve it.
For example, the first operation mode and the second operation mode may be switched based on the intensity of the ambient light.
In one example, the image sensor may be controlled to operate in a first operating mode when ambient light is weak; and when the ambient light intensity is high, controlling the image sensor to work in a second working mode.
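A minimal sketch of such mode selection is given below; the light-level measurement, threshold value, and mode identifiers are illustrative assumptions only:

```python
# Sketch: choose the sensor working mode from a measured ambient light level.
LOW_LIGHT_THRESHOLD_LUX = 30.0  # assumed threshold; a real system would calibrate this

def select_working_mode(ambient_lux):
    # weak ambient light   -> first working mode (multi-frame first/second image processing)
    # strong ambient light -> second working mode (fourth image output directly)
    return "first_mode" if ambient_lux < LOW_LIGHT_THRESHOLD_LUX else "second_mode"
```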
In some embodiments, processing the first image and the second image to obtain the third image in step S110 may include:
synthesizing the first image and the second image based on a second weight map to obtain the third image, where the second weight map is determined according to a first image sequence or a second image sequence; the first image sequence comprises at least one frame of the first image, and the second image sequence comprises at least one frame of the second image.
For example, to improve the image quality improvement effect of image synthesis, a first image sequence may be determined according to a first image output by the image sensor in the first operation mode, and a weight map (referred to as a second weight map herein) for performing synthesis processing on the first image and the second image may be determined according to the first image sequence.
Alternatively, a second image sequence may be determined from the processed second image, and a weight map (referred to herein as a second weight map) for performing a synthesizing process on the first image and the second image may be determined from the second image sequence.
Illustratively, the first image sequence includes at least one frame of the first image; the second image sequence comprises at least one frame of the second image.
For example, the first image and the second image may be synthesized according to the determined second weight map to obtain a third image.
In one example, the determining the second weight map according to the first image sequence may include:
the first image sequence comprises a current frame first image and a historical frame first image, and a second weight map is determined according to the difference of pixel values of the current frame first image and the historical frame first image.
For example, the history frame first image may refer to a first image whose output time is prior to the output time of the current frame first image.
For example, the history frame first image may include one or more frames of first images adjacent to the current frame first image.
Illustratively, because the long-and-short-frame fusion scheme calculates the weight from the difference between the long and short frames and from the brightness of the long and short frames, it cannot resolve oscillation phenomena, such as flicker (strobing), caused by the different exposure times.
In view of the above problem, in the embodiment of the present application, when at least two frames of first images exist in the first image sequence, the second weight map may be determined according to the difference between the pixel values of the current frame first image and the historical frame first image. Because the exposure times of the current frame first image and the historical frame first image are the same, oscillation phenomena such as flicker caused by different exposure times are avoided, and the accuracy and detection rate of object motion detection in a low-illumination environment are improved.
For example, the current frame first image and the historical frame first image may be chosen as close to each other in time as possible, to improve the quality of the composite image.
In one example, the current frame first image and the historical frame first image are adjacent frames in the first image sequence.
In one example, determining the second weight map from the second sequence of images may include:
the second image sequence comprises a current frame second image and a historical frame second image, and a second weight map is determined according to the pixel value difference of the current frame second image and the historical frame second image.
For example, the history frame second image may refer to a second image whose output time is prior to the output time of the current frame second image.
For example, the historical frame second image may include one or more frames of second images adjacent to the current frame second image.
For example, because the long-and-short-frame fusion scheme calculates the weight from the difference between the long and short frames and from their brightness, it cannot resolve oscillation phenomena, such as flicker, caused by the different exposure times.
In view of the above problem, in the embodiment of the present application, when at least two frames of second images exist in the second image sequence, the second weight map may be determined according to the difference between the pixel values of the current frame second image and the historical frame second image, thereby avoiding oscillation phenomena such as flicker caused by different exposure times and improving the accuracy and detection rate of object motion detection in a low-illumination environment.
For example, the current frame second image and the historical frame second image may be chosen as close to each other in time as possible, to improve the quality of the composite image.
In one example, the current frame second image and the historical frame second image are adjacent frames in the second image sequence.
It should be noted that, in the embodiment of the present application, in addition to determining the second weight map according to the pixel value difference between the current frame first image and the historical frame first image, or according to the pixel value difference between the current frame second image and the historical frame second image, the second weight map may also be determined according to both of these pixel value differences together.
For example, a corresponding second weight map (assumed as weight map 1) may be determined according to a difference in pixel values between the current frame first image and the historical frame first image, a corresponding second weight map (assumed as weight map 2) may be determined according to a difference in pixel values between the current frame second image and the historical frame second image, and a final second weight map may be determined according to the weight maps 1 and 2.
For example, the weight maps 1 and 2 may be fused, e.g., weighted, to determine the final second weight map.
In addition, the current frame first image and the current frame second image may be weighted to obtain a current frame weighted image, and the historical frame first image and the historical frame second image may be weighted to obtain a historical frame weighted image; the second weight map is then determined according to the pixel value difference between the current frame weighted image and the historical frame weighted image.
In one example, determining the second weight map from pixel value differences may include:
and filtering the pixel value difference to obtain a second weight map.
Illustratively, the second weight map may be obtained by performing a filtering process on the pixel value difference.
For example, taking the example of determining the second weight map according to the pixel value difference between the current frame first image and the historical frame first image included in the first image sequence, the current frame first image and the historical frame first image may be subtracted by the subtraction unit to obtain a residual image, then the residual image is subjected to mean filtering processing within a specified neighborhood size, and the residual mean is converted into the second weight map by thresholding means according to the residual mean of each pixel point.
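The pipeline described above can be sketched as follows (NumPy and SciPy assumed; the neighbourhood size and the thresholding mapping are illustrative choices, not values from the application):

```python
# Sketch: subtraction -> neighbourhood mean filtering -> thresholding into a weight map.
import numpy as np
from scipy.ndimage import uniform_filter

def second_weight_map(cur_frame, hist_frame, k=5, t_low=4.0, t_high=16.0):
    residual = np.abs(cur_frame.astype(np.float32) - hist_frame.astype(np.float32))
    residual_mean = uniform_filter(residual, size=k)   # mean filter within a k x k neighbourhood
    # thresholding: map the residual mean of each pixel linearly into [0, 1]
    return np.clip((residual_mean - t_low) / (t_high - t_low), 0.0, 1.0)
```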
In another example, determining the second weight map from pixel value differences may include:
and sending the pixel value difference into a pre-trained convolutional neural network to obtain a second weight map, wherein the pre-trained convolutional neural network is used for carrying out motion detection on the input pixel value difference and obtaining the second weight map.
For example, in order to improve the efficiency and accuracy of motion detection, a convolutional neural network model (which may be referred to as a motion detection model) for performing motion detection on the input pixel value difference to obtain a second weight map may be trained in advance, and when the pixel difference value is determined in the above manner, the determined pixel difference value may be input into the pre-trained motion detection model to obtain a corresponding second weight map.
For example, taking the example of determining the second weight map according to the pixel value difference between the current frame second image and the historical frame second image included in the second image sequence, the pixel value difference between the current frame second image and the historical frame second image may be input into a pre-trained motion detection model to obtain a corresponding second weight map.
For another example, when the second weight map is determined according to the pixel value difference between the current frame first image and the historical frame first image and the pixel value difference between the current frame second image and the historical frame second image, the pixel value difference between the current frame first image and the historical frame first image may be input into the pre-trained motion detection model to obtain a corresponding second weight map (assumed to be weight map 1), the pixel value difference between the current frame second image and the historical frame second image may be input into the pre-trained motion detection model to obtain a corresponding second weight map (assumed to be weight map 2), and the final second weight map may be obtained from weight map 1 and weight map 2.
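As an illustrative stand-in for such a motion detection model (the architecture below is an assumption made for demonstration, not the network used by the application), a small convolutional network can map a pixel-value difference to a weight map in [0, 1]:

```python
# Sketch (PyTorch): a toy motion detection model that turns a residual image
# (pixel-value difference) into a second weight map with values in [0, 1].
import torch
import torch.nn as nn

class MotionDetectionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, residual):                   # residual: B x 1 x H x W
        return torch.sigmoid(self.body(residual))  # second weight map in [0, 1]
```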
In some embodiments, the position information of the key target in the image can be determined by performing target detection on the image, and the second weight map is determined according to the position information of the key target in the image, so that the composite weight can be obtained through at least one frame of image, and the consumption of cache and calculation amount is saved.
Illustratively, key targets may include, but are not limited to, vehicles, pedestrians, animals, or signal lights, etc.
In one example, determining the second weight map from the first sequence of images may include:
the first image sequence is a first image, and a second weight map is determined according to the position information of the key target in the first image.
For example, object detection may be performed on the first image, position information of a key object in the first image may be determined, and the second weight map may be determined according to the position information of the key object in the first image.
In one example, determining the second weight map from the second sequence of images may include:
the second image sequence is a second image, and a second weight map is determined according to the position information of the key target in the second image.
For example, the target detection may be performed on the second image, the position information of the key target in the second image may be determined, and the second weight map may be determined according to the position information of the key target in the second image.
It should be noted that, in the embodiment of the present application, in addition to determining the second weight map according to the position information of the key object in the first image as described above, or determining the second weight map according to the position information of the key object in the second image, the second weight map may also be determined according to the position information of the key object in the first image and the position information of the key object in the second image.
For example, the first image may be subjected to target detection, the position information of the key target in the first image may be determined, the second image may be subjected to target detection, the position information of the key target in the second image may be determined, and the second weight map may be determined according to the position information of the key target in the first image and the position information of the key target in the second image.
In one example, determining the second weight map from the location information of the key target may include:
and obtaining a second weight map by using a pre-trained convolutional neural network, wherein the pre-trained convolutional neural network is used for carrying out target detection on the input image and obtaining the second weight map.
For example, in order to improve the accuracy of target detection, a convolutional neural network model (which may be referred to as a target detection model) for performing target detection on an input image and obtaining a second weight map may be trained in advance, and the trained target detection model is used to determine the second weight map.
For example, taking the example of determining the second weight map according to the position information of the key target in the first image, the first image may be input into a pre-trained target detection model to obtain a corresponding second weight map.
For another example, taking the example of determining the weight map according to the position information of the key target in the first image and the position information of the key target in the second image as an example, the first image may be input into a pre-trained target detection model to obtain a corresponding weight map (assumed as weight map 1), the second image may be input into a pre-trained target detection model to obtain a corresponding weight map (assumed as weight map 2), and the final weight map may be determined according to the weight map 1 and the weight map 2.
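The sketch below illustrates one hedged way to turn the output of such a target detection model into a second weight map; detect_targets is a placeholder for the pre-trained model and is assumed to return key-target boxes as (x1, y1, x2, y2) pixel coordinates:

```python
# Sketch: rasterize detected key-target boxes (pedestrians, vehicles, animals,
# signal lights, ...) into a weight map that marks the key-target regions.
import numpy as np

def weight_map_from_targets(image, detect_targets):
    h, w = image.shape[:2]
    weight = np.zeros((h, w), dtype=np.float32)
    for (x1, y1, x2, y2) in detect_targets(image):
        weight[y1:y2, x1:x2] = 1.0   # key-target region: favour the first image in synthesis
    return weight
```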
In some embodiments, processing the first image and the second image to obtain a third image may include:
and weighting the pixel values of the first image and the second image according to the configuration weight of each pixel position in the second weight map to obtain a third image.
For example, when the second weight map for combining the first image and the second image is obtained in the above manner, the third image may be obtained by performing weighting processing on each pixel value of the first image and the second image according to the arrangement weight of each pixel position in the second weight map.
For example, the first image and the second image may be processed according to the second weight map to obtain a third image according to the following formula:
img_fus=(img_1*alpha+img_2*(n-alpha))/n
where img _ fus denotes the composite image (i.e., the third image), img _1 denotes the first image, img _2 denotes the second image, alpha denotes the second weight map, and n denotes the normalized weight.
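A direct transcription of this formula is sketched below (NumPy assumed; n is the normalization constant matching the value range of the second weight map alpha, for example 1.0 for a weight map in [0, 1] — the concrete value is an assumption):

```python
# Sketch: synthesize the third image from the first and second images using the
# second weight map alpha, following img_fus = (img_1*alpha + img_2*(n-alpha))/n.
import numpy as np

def synthesize(img_1, img_2, alpha, n=1.0):
    img_1 = img_1.astype(np.float32)
    img_2 = img_2.astype(np.float32)
    return (img_1 * alpha + img_2 * (n - alpha)) / n
```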
In one example, processing the first image and the second image to obtain a third image may include:
when the average luminance of the first image is different from the average luminance of the second image, the average luminance of the first image is adjusted to be the same as the average luminance of the second image before the combining process is performed.
Illustratively, in order to ensure that the dynamic range of the synthesized image is higher than that of the image before synthesis, the wide dynamic synthesis scheme must ensure that the brightness of the long and short frames is different, which limits the applicable scenes of the scheme.
For example, in order to improve the quality of the synthesized image, it may be possible to ensure that the average brightness of the first image is the same as that of the second image.
It should be noted that the average brightness mentioned in the embodiments of the present application does not require that the average brightness is exactly equal, which allows a tolerable deviation, that is, if the difference between the average brightness of the first image and the average brightness of the second image is within a preset difference range, the average brightness of the first image and the average brightness of the second image are considered to be the same; if the difference between the average brightness of the first image and the average brightness of the second image is not within the preset difference range, the average brightness of the first image and the average brightness of the second image may be considered to be different.
For example, when the average luminance of the first image is different from the average luminance of the second image, the average luminance of the first image may be adjusted to be the same as the average luminance of the second image before the first image and the second image are subjected to the combining process.
As an example, the average luminance of the first image may be the same as the average luminance of the second image by multiplying the first image by the ratio of the average luminance of the second image to the average luminance of the first image.
For example, assuming that the average luminance of the first image is L1 and the average luminance of the second image is L2, the average luminance of the first image may be made the same as the average luminance of the second image by multiplying the first image by L2/L1.
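A minimal sketch of this brightness alignment step (NumPy assumed; the epsilon guard against division by zero is an added safety detail, not part of the application):

```python
# Sketch: scale the first image by L2/L1 so its average brightness matches the
# second image before the two images are synthesized.
import numpy as np

def match_brightness(img_1, img_2, eps=1e-6):
    l1 = float(np.mean(img_1))   # average luminance of the first image
    l2 = float(np.mean(img_2))   # average luminance of the second image
    return img_1.astype(np.float32) * (l2 / (l1 + eps))
```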
In order to enable those skilled in the art to better understand the technical solutions provided in the embodiments of the present application, the technical solutions provided in the embodiments of the present application are described below with reference to specific examples.
Referring to fig. 2A, an image processing system according to an embodiment of the present disclosure may include a control unit, an image sensor, a weight calculation unit, and a processing unit.
In practical applications, the weight calculating unit may be a functional sub-unit of the processing unit, that is, the weight calculating unit may also be a function of the processing unit, and the processing unit implements the weight calculating function.
As shown in fig. 2A, in order to improve image quality in a low illumination condition, a second image may be generated by the image generation unit using a first image output from a single image sensor. The weight calculation unit performs weight calculation according to at least one frame of image in the first image and the second image to obtain a second weight map, and the processing unit performs synthesis processing on the first image and the second image according to the second weight map to output a third image.
As shown in fig. 2A, the weight calculating unit may determine the second weight map according to the first image, or may determine the second weight map according to the second image, or may determine the second weight map according to the first image and the second image.
This method can greatly alleviate the problem of high image noise under low-illumination conditions and improve the image quality in a low-illumination environment.
The technical solution provided by the embodiment of the present application is described in detail below based on fig. 2A.
1. Main technical characteristics
1. Image sensor
1.1, image sensor: in the first operating mode, an image is output, which comprises N successive frames of the first image.
2. Image generation unit
2.1, an image generation unit: and carrying out first processing on the N continuous first images to obtain a second image.
3. Weight calculation unit
3.1, weight calculation unit: and performing weight calculation according to at least one frame of image in the first image and the second image to obtain a second weight map.
4. Processing unit
4.1, a processing unit: performing second processing on the first image and the second image to obtain a third image;
4.2, a processing unit: the second process includes at least a synthesis process.
2. Other technical features
2. Image generation unit
2.2, an image generation unit: the signal-to-noise ratio of the background region (i.e., the non-target region) in the second image is higher than the signal-to-noise ratio of the background region in the first image.
2.3, an image generation unit: the number of image frames of the second image is equal to the number of image frames of the first image;
or, the number of image frames of the second image is smaller than the number of image frames of the first image.
3. Weight calculation unit
3.2, a weight calculation unit: calculating a weight map according to the pixel value difference between the current frame image and the historical frame image;
3.3, weight calculation unit: obtaining a second weight map according to the pixel value difference between the first image of the current frame and the first image of the historical frame;
or, obtaining a second weight map according to the pixel value difference between the current frame second image and the historical frame second image;
or, the two second weight maps are weighted to obtain a second weight map;
or weighting the current frame first image and the current frame second image to obtain a current frame weighted image, weighting the historical frame first image and the historical frame second image to obtain a historical frame weighted image, and obtaining the second weight map according to the pixel value difference between the current frame weighted image and the historical frame weighted image.
3.4, weight calculation unit: the current frame image and the history frame image are as close as possible.
3.5, weight calculating unit: the image used for determining the weight map at least comprises any one of the current frame first image and the current frame second image;
3.6, weight calculating unit: carrying out target detection according to the first image to obtain a second weight map;
or, performing target detection according to the second image to obtain a second weight map.
Or, the two weight maps are weighted to obtain a second weight map.
4. Processing unit
4.3, a processing unit: the average brightness of the first image and the second image subjected to the combining process is the same.
4.4, a processing unit: and performing weighting processing on the first image and the second image according to the weight map to obtain a third image, and finally outputting the third image.
4.5, a processing unit: the target area of the third image preferentially takes the content of the first image.
4.6, a processing unit: the target area is at least one of an object motion area in the image, a key target area in the image, such as a pedestrian, a vehicle, an animal, or a signal light.
The above features are explained below with reference to the embodiments.
Example one
When the image sensor is in the first working mode, the output image is a first image.
Illustratively, N is a positive integer greater than 1.
The image generation unit performs first processing on N continuous frames of first images to obtain second images.
Illustratively, the first process includes a weighting process; the signal-to-noise ratio of the second image is greater than the signal-to-noise ratio of the first image.
The processing unit carries out second processing on the first image and the second image to obtain a third image.
Illustratively, the second process includes at least a composition process.
Example two
The image sensor is controlled to switch between the first working mode and the second working mode according to the ambient light intensity.
Illustratively, when the ambient light is weak, the image sensor is controlled to switch into the first working mode; and when the ambient light is strong, the image sensor is controlled to switch into the second working mode.
When the image sensor is in a first working mode, outputting a first image;
the image generation unit carries out first processing on N continuous frames of first images to obtain second images;
the processing unit processes the first image and the second image to obtain a third image.
And outputting a fourth image when the image sensor is in the second working mode.
The processing unit outputs the fourth image.
The respective units will be described below.
2. Image generation unit
And the image generation unit is used for obtaining a second image by carrying out first processing on the N continuous frames of the first image.
Illustratively, the signal-to-noise ratio of the background region in the second image is higher than the signal-to-noise ratio of the background region in the first image.
Example three
The second image is generated by weighting at least two frames of the first image.
One of the weighting methods is as follows:
referring to fig. 3A, the first images of two adjacent frames may be weighted to obtain the second image.
Illustratively, the number of image frames of the second image is smaller than the number of image frames of the first image.
Example four
The second image is generated by weighting at least two frames of the first image.
Another weighting method is as follows:
referring to fig. 3B, a first frame of the first image in the N consecutive frames of the first image may be used as a first frame of the second image, a first two frames of the first image in the N consecutive frames of the first image are weighted to obtain a second frame of the second image, a first three frames of the first image in the N consecutive frames of the first image are weighted to obtain a third frame of the second image, and so on.
EXAMPLE five
The image generation unit performs weighting processing on the M frames of first images according to a preset M frames of first weight images to obtain second images.
For example, for any one of the preset M frames of the first weight map, the configuration weights of the pixel positions in the frame of the first weight map are equal in size.
EXAMPLE six
The image generation unit obtains M frames of the first weight map according to the differences of each pixel value among the M frames of the first image, and performs weighting processing on the M frames of the first image according to the M frames of the first weight map to obtain the second image.
EXAMPLE seven
The image generation unit obtains M frames of the first weight map according to the mean difference of a plurality of pixel values in the designated area among the M frames of the first image, and performs weighting processing on the M frames of the first image according to the M frames of the first weight map to obtain the second image.
3. Weight calculation unit
The main role of the weight calculation unit is to obtain a weight map by using at least one of motion detection and object detection.
Illustratively, the weight map represents at least one of object motion information and key target position information such as pedestrians, vehicles, animals or signal lights.
Example eight
Referring to fig. 2B and fig. 4A, the weight calculating unit performs motion detection on at least two frame image differences at different time points in the first image sequence to obtain a second weight map.
Illustratively, taking two frames as an example, as shown in fig. 4A, the current frame first image is a current time image in the first image sequence (i.e., a first image currently participating in image synthesis in the first image sequence), and the historical frame first image (which may be referred to as a fifth image) is an earlier time image in the first image sequence.
For example, as shown in fig. 4A, the first image and the fifth image may be subtracted by a subtraction unit to obtain a residual image, and then the residual image is subjected to mean filtering processing within a specified neighborhood size.
Illustratively, the size of the neighborhood is not limited in the embodiments of the present application.
The residual mean value of each pixel point is obtained through the method, and the residual mean value is converted into a second weight map through a thresholding means.
Example nine
Referring to fig. 2C and fig. 4B, the weight calculating unit performs motion detection on at least two frame image differences at different times in the second image sequence to obtain a second weight map.
For example, taking two frames as an example, as shown in fig. 4B, the current frame second image is a current time image in the second image sequence (i.e., a second image currently participating in image synthesis in the second image sequence), and the historical frame second image (which may be referred to as a sixth image) is an earlier time image in the second image sequence.
For example, as shown in fig. 4B, the second image and the sixth image may be subtracted by a subtraction unit to obtain a residual image, and then the residual image is subjected to mean filtering processing in a specified neighborhood size.
Illustratively, the size of the neighborhood is not limited in the embodiments of the present application.
The residual mean value of each pixel point is obtained through the method, and the residual mean value is converted into a second weight map through a thresholding means.
EXAMPLE ten
Referring to fig. 2D and fig. 4C, the weight calculating unit performs motion detection on the difference between the first image and the second image to obtain a second weight map.
For example, as shown in fig. 4C, the first image and the second image may be subtracted by a subtraction unit to obtain a residual image, and then the residual image is subjected to mean filtering processing within a specified neighborhood size.
Illustratively, the size of the neighborhood is not limited in the embodiments of the present application.
The residual mean value of each pixel point is obtained through the method, and the residual mean value is converted into a second weight map through a thresholding means.
In this embodiment of the present application, the weight maps obtained in the above manners may also be fused, so that the final second weight map (i.e., the fused weight map) contains the union of the object motion information of the weight maps participating in the fusion.
EXAMPLE eleven
Embodiment eight is combined with embodiment nine: the second weight maps determined in embodiment eight and embodiment nine are fused, and the union of the object motion information of the two is used as the final second weight map.
One combination is as follows:
referring to fig. 2D and 4D, on one hand, the weight calculating unit may subtract the first image and the fifth image by the subtracting unit to obtain a residual image, then perform mean filtering on the residual image in a specified neighborhood size, obtain a residual mean value of each pixel point by the above method, and convert the residual mean value into the weight map 1 by thresholding.
On the other hand, the weight calculation unit can perform subtraction on the second image and the sixth image through the subtraction unit to obtain a residual image, then perform mean filtering processing on the residual image in a specified neighborhood size, obtain a residual mean value of each pixel point through the method, and convert the residual mean value into the weight map 2 through a thresholding means.
The weight calculation unit may fuse the weight map 1 and the weight map 2 to obtain a final second weight map.
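The fusion of weight map 1 and weight map 2 can be sketched as follows; taking the element-wise maximum is one plausible way to keep the union of the motion information (a weighted combination, as mentioned earlier, would also fit) and is an assumption rather than the claimed method:

```python
# Sketch: fuse two second weight maps so that a pixel marked as moving in either
# map stays marked in the fused (final) second weight map.
import numpy as np

def fuse_weight_maps(weight_map_1, weight_map_2):
    return np.maximum(weight_map_1, weight_map_2)
```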
Example twelve
Embodiment eight is combined with embodiment nine: the second weight maps determined in embodiment eight and embodiment nine are fused, and the union of the object motion information of the two is used as the final second weight map.
Another combination is as follows:
referring to fig. 2D and 4E, the weight calculating unit may perform weighting according to the first image and the second image to obtain a seventh image (i.e., the current frame weighted image), and perform weighting according to the fifth image and the sixth image to obtain an eighth image (i.e., the history frame weighted image).
The weight calculation unit can make a difference between the seventh image and the eighth image through the subtraction unit to obtain a residual image, then the residual image is subjected to mean filtering processing in a specified neighborhood size, the residual mean value of each pixel point is obtained through the method, and the residual mean value is converted into the second weight map through a thresholding means.
Example thirteen
Embodiment eight is combined with embodiment ten: the second weight maps determined in embodiment eight and embodiment ten are fused, and the union of the object motion information of the two is used as the final second weight map.
Referring to fig. 2D and fig. 4F, on one hand, the weight calculating unit may perform a subtraction on the first image and the fifth image through the subtracting unit to obtain a residual image, then perform mean filtering on the residual image in a specified neighborhood size, obtain a residual mean value of each pixel point through the above method, and convert the residual mean value into the weight map 1 through a thresholding means.
On the other hand, the weight calculation unit may perform subtraction on the first image and the second image through the subtraction unit to obtain a residual image, then perform mean filtering on the residual image in a specified neighborhood size, obtain a residual mean value of each pixel point through the above method, and convert the residual mean value into the weight map 2 through a thresholding means.
The weight calculation unit may fuse the weight map 1 and the weight map 2 to obtain a final second weight map.
Example fourteen
Embodiment nine is combined with embodiment ten: the second weight maps determined in embodiment nine and embodiment ten are fused, and the union of the object motion information of the two is used as the final second weight map.
Referring to fig. 2D and 4G, on one hand, the weight calculating unit may subtract the second image and the sixth image by the subtracting unit to obtain a residual image, then perform mean filtering on the residual image in a specified neighborhood size, obtain a residual mean value of each pixel point by the above method, and convert the residual mean value into the weight map 1 by thresholding.
On the other hand, the weight calculation unit may perform subtraction on the first image and the second image through the subtraction unit to obtain a residual image, then perform mean filtering on the residual image in a specified neighborhood size, obtain a residual mean value of each pixel point through the above method, and convert the residual mean value into the weight map 2 through a thresholding means.
The weight calculation unit may fuse the weight map 1 and the weight map 2 to obtain a final second weight map.
Example fifteen
Embodiments eight to ten are combined: the second weight maps determined in embodiments eight to ten are fused, and the union of the object motion information of the three is used as the final second weight map.
Referring to fig. 2D and fig. 4H, on one hand, the weight calculation unit may compute the difference between the first image and the fifth image through the subtraction unit to obtain a residual image, perform mean filtering on the residual image over a specified neighborhood size to obtain the residual mean value of each pixel point, and convert the residual mean values into weight map 1 by thresholding.
On the other hand, the weight calculation unit may compute the difference between the second image and the sixth image through the subtraction unit to obtain a residual image, perform mean filtering on the residual image over a specified neighborhood size to obtain the residual mean value of each pixel point, and convert the residual mean values into weight map 2 by thresholding.
Furthermore, the weight calculation unit may compute the difference between the first image and the second image through the subtraction unit to obtain a residual image, perform mean filtering on the residual image over a specified neighborhood size to obtain the residual mean value of each pixel point, and convert the residual mean values into weight map 3 by thresholding.
The weight calculation unit may fuse weight map 1, weight map 2, and weight map 3 to obtain the final second weight map.
Example sixteen
For any one of embodiments eight to fifteen, the subtraction unit and the residual accumulation unit may be replaced by a convolutional neural network (i.e., a motion detection model) that performs motion detection on the input images, and the second weight map may be obtained through estimation by this network.
Taking the weight calculation shown in embodiment eight as an example, the first image and the fifth image may be input into a pre-trained motion detection model, and the second weight map is obtained through estimation by the pre-trained motion detection model; a schematic diagram is shown in fig. 5.
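The patent only states that a pre-trained convolutional neural network replaces the subtraction and residual accumulation units; the network below is a hypothetical minimal sketch (its layer layout, channel counts, and sigmoid output are assumptions), shown only to illustrate the interface of such a motion detection model.

```python
import torch
import torch.nn as nn

class MotionDetectionModel(nn.Module):
    """Hypothetical minimal motion-detection network: takes two frames
    (e.g. the first image and the fifth image) and estimates a per-pixel
    weight map in [0, 1]; the actual model used in the patent is not specified."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, img_a, img_b):
        x = torch.cat([img_a, img_b], dim=1)   # stack the two frames as input channels
        return torch.sigmoid(self.net(x))      # estimated second weight map, one channel

# weight_map = MotionDetectionModel()(img_1, img_5)  # img_*: (N, 1, H, W) tensors
```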
Example seventeen
Target detection may be performed on at least one frame among the first image and the second image by a pre-trained convolutional neural network model (namely, a target detection model) used for performing target detection on the input image and obtaining a second weight map; the resulting position information of the key target is used as the second weight map.
Taking the target detection of the first image as an example, please refer to fig. 2B and fig. 6A, the first image may be input into a pre-trained target detection model, and the target detection model is used to perform target detection on the first image, so as to obtain the position information of the key target in the first image, which is used as the second weight map.
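The patent does not specify how the detector's output is encoded as a weight map; one straightforward reading, sketched below under that assumption, is to rasterize the detected key-target boxes into a map whose values mark the key-target positions. The detector itself, the box format (x1, y1, x2, y2), and the binary encoding are assumptions of this sketch.

```python
import numpy as np

def boxes_to_weight_map(boxes, height, width):
    """Convert detected key-target boxes (x1, y1, x2, y2) into a second weight map:
    1 inside a detection, 0 elsewhere. A soft (confidence-weighted) map would be
    built the same way by writing the detection score instead of 1."""
    weight = np.zeros((height, width), dtype=np.float32)
    for x1, y1, x2, y2 in boxes:
        weight[int(y1):int(y2), int(x1):int(x2)] = 1.0
    return weight

# boxes = detector(img_1)   # any pre-trained detector returning key-target boxes
# second_weight_map = boxes_to_weight_map(boxes, *img_1.shape[:2])
```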
Example eighteen
Target detection may be performed on at least one frame among the first image and the second image by a pre-trained convolutional neural network model (namely, a target detection model) used for performing target detection on the input image and obtaining a second weight map; the resulting position information of the key target is used as the second weight map.
Taking the target detection on the second image as an example, please refer to fig. 2C and fig. 6B, the second image may be input into a pre-trained target detection model, and the target detection model is used to perform target detection on the second image, so as to obtain the position information of the key target in the second image, which is used as the second weight map.
Example nineteen
Target detection may be performed on at least one frame among the first image and the second image by a pre-trained convolutional neural network model (namely, a target detection model) used for performing target detection on the input image and obtaining a second weight map; the resulting target position information is used as the second weight map.
Taking target detection on both the first image and the second image as an example, please refer to fig. 2D and fig. 6C: the first image may be input into a pre-trained target detection model, and the target detection model performs target detection on the first image to obtain the position information of the key target in the first image as weight map 1; likewise, the second image may be input into the pre-trained target detection model, and the target detection model performs target detection on the second image to obtain the position information of the key target in the second image as weight map 2.
The weight calculation unit may fuse the weight map 1 and the weight map 2 to obtain a final second weight map.
4. Processing unit
The processing unit is mainly used for synthesizing the first image and the second image according to the weight map (the second weight map) output by the weight calculating unit and outputting a synthesized image.
Illustratively, the first image is preferentially selected for the target area of the synthesized image, while the first image and the second image are weighted together for the non-target area, so that the target area of the image remains sharp and free of smear while the signal-to-noise ratio of the non-target area is significantly improved.
For example, the target area may refer to a moving area of an object in the image, or may be key information in the image, such as a pedestrian, a vehicle, an animal, or a signal light.
For example, before the first image and the second image are subjected to the synthesis processing, the processing unit needs to ensure that the average brightness of the first image and the average brightness of the second image are the same.
Example twenty
When the average brightness of the first image is the same as the average brightness of the second image:
the processing unit synthesizes the first image and the second image according to the following formula:
img_fus=(img_1*alpha+img_2*(n-alpha))/n
where img_fus represents the synthesized image, img_1 represents the first image, img_2 represents the second image, alpha represents the second weight map, and n represents the normalized weight.
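The formula above can be transcribed directly; in the sketch below, treating n as the maximum value of the weight map (e.g. 255 for an 8-bit alpha) is an assumption, since the patent only refers to n as the normalized weight.

```python
import numpy as np

def synthesize(img_1, img_2, alpha, n=255.0):
    """img_fus = (img_1 * alpha + img_2 * (n - alpha)) / n
    alpha is the second weight map; where alpha is large (target / motion area)
    the first image dominates, elsewhere the higher-SNR second image dominates."""
    img_1 = img_1.astype(np.float32)
    img_2 = img_2.astype(np.float32)
    img_fus = (img_1 * alpha + img_2 * (n - alpha)) / n
    return np.clip(img_fus, 0, 255).astype(np.uint8)
```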
Example twenty one
When the average brightness of the first image is different from the average brightness of the second image:
the processing unit adjusts the average brightness of the first image to be the same as the average brightness of the second image.
For example, the processing unit may multiply the first image by a ratio of the average luminance of the second image to the average luminance of the first image such that the average luminance of the first image is the same as the average luminance of the second image.
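A minimal sketch of this brightness adjustment, assuming floating-point processing and a non-zero mean for the first image; clamping of the scaled result is left to the subsequent synthesis step.

```python
import numpy as np

def match_brightness(img_1, img_2):
    """Scale the first image so its average brightness equals that of the second
    image (multiply by the ratio of the two mean luminances)."""
    img_1 = img_1.astype(np.float32)
    ratio = float(np.mean(img_2)) / max(float(np.mean(img_1)), 1e-6)
    return img_1 * ratio
```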
Then, the processing unit synthesizes the first image and the second image according to the following formula:
img_fus=(img_1*alpha+img_2*(n-alpha))/n
where img_fus represents the synthesized image, img_1 represents the first image, img_2 represents the second image, alpha represents the second weight map, and n represents the normalized weight.
It should be noted that the foregoing embodiments are merely specific examples of implementations of the embodiments of the present application and do not limit the scope of the present application; based on the foregoing embodiments, new embodiments may be obtained by combining or modifying the embodiments, and these all fall within the scope of protection of the present application.
The methods provided herein are described above. The following describes the apparatus and image processing system provided in the present application:
referring to fig. 7, a schematic structural diagram of an image processing system according to an embodiment of the present disclosure is shown in fig. 7, where the image processing system may include: an image sensor 710, an image generation unit 720, and a processing unit 730; wherein:
an image sensor 710 for outputting a first image in a first operation mode;
the image generating unit 720 is configured to perform first processing on N consecutive frames of first images to obtain second images; the first process comprises a weighting process; the signal-to-noise ratio of the second image is greater than that of the first image, wherein N is a positive integer greater than 1;
the processing unit 730 is configured to process the first image and the second image to obtain a third image, where the processing at least includes synthesis processing.
In some embodiments, the image generating unit 720 is specifically configured to perform weighting processing on M frames of the first image to obtain a second image, where M is a positive integer smaller than or equal to N and greater than 1.
In some embodiments, the image generating unit 720 is specifically configured to perform weighting processing on the M frames of first images according to the M frames of first weight maps to obtain second images.
In some embodiments, the configuration weight of each pixel position in the M frames of the first weight map is preset;
and/or obtaining the M frame first weight graph according to the pixel value relation among the M frame first images.
In some embodiments, when the configuration weight of each pixel position in the M frames of first weight maps is preset, for any one of the M frames of first weight maps, the configuration weights of the pixel positions within that frame are equal to each other.
In some embodiments, the image generating unit 720 is specifically configured to obtain the first weight map by calculating a pixel value difference between the M frames of first images, and includes: obtaining the first weight map according to the difference of each pixel value among the M frames of first image sequences; or, the first weight map is obtained according to the mean difference of a plurality of pixel values in the appointed area among the M frames of first images.
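As a hedged sketch of the above, the image generating unit may weight the M frames of first images with M first weight maps as follows; treating equal preset weights as plain temporal averaging, and the per-pixel normalization used here, are assumptions of this sketch rather than requirements of the patent.

```python
import numpy as np

def weighted_temporal_merge(first_images, first_weight_maps=None):
    """Weight M frames of the first image into one second image.

    first_images:       list of M arrays of shape (H, W) or (H, W, C)
    first_weight_maps:  list of M per-pixel weight maps of shape (H, W); if None,
                        equal preset weights are used, reducing to temporal averaging.
    """
    frames = [f.astype(np.float32) for f in first_images]
    if first_weight_maps is None:
        first_weight_maps = [np.ones(frames[0].shape[:2], np.float32)] * len(frames)
    weights = [w[..., None] if f.ndim == 3 else w
               for w, f in zip(first_weight_maps, frames)]
    num = sum(f * w for f, w in zip(frames, weights))
    den = sum(weights) + 1e-6
    return num / den  # second image: higher SNR than any single first image
```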
In some embodiments, the image sensor 710 is further configured to output a fourth image when the first operation mode is switched to the second operation mode.
In some embodiments, the first operating mode and the second operating mode are switched based on the intensity of the ambient light.
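The patent does not specify thresholds or state which mode corresponds to low illumination; the sketch below assumes, for illustration only, that the first operating mode is the low-illumination multi-frame mode, and the lux thresholds and hysteresis band are likewise assumptions added to avoid oscillating around a single threshold.

```python
def select_operating_mode(ambient_lux, current_mode, low_th=5.0, high_th=20.0):
    """Hypothetical ambient-light-based mode switching with hysteresis.

    Assumes the "first" mode is used under low illumination (multi-frame
    processing) and the "second" mode under sufficient illumination.
    """
    if current_mode == "first" and ambient_lux > high_th:
        return "second"   # bright enough: leave the multi-frame mode
    if current_mode == "second" and ambient_lux < low_th:
        return "first"    # low illumination: enter the multi-frame mode
    return current_mode
```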
In some embodiments, the processing unit 730 processes the first image and the second image to obtain a third image, including:
according to the configuration weight of each pixel position in a second weight map, weighting each pixel position of the first image and the second image to obtain a third image; the second weight map is determined according to a first image sequence or a second image sequence, the first image sequence at least comprises one frame of first image, and the second image sequence at least comprises one frame of second image; the configuration weight of any pixel position is used for determining the weighted weight of the first image and the second image at the pixel position.
In some embodiments, the processing unit 730 determines the second weight map according to the first image sequence, including:
the first image sequence comprises a current frame first image and a historical frame first image, and the second weight map is determined according to the difference of pixel values of the current frame first image and the historical frame first image.
In some embodiments, the processing unit 730 determines the second weight map according to the second image sequence, including:
the second image sequence comprises a current frame second image and a historical frame second image, and the second weight map is determined according to the pixel value difference of the current frame second image and the historical frame second image.
In some embodiments, the processing unit 730 determines the second weight map according to the pixel value difference, including:
filtering the pixel value difference to obtain the second weight map;
and/or sending the pixel value difference into a pre-trained convolutional neural network to obtain the second weight map, wherein the pre-trained convolutional neural network is used for carrying out motion detection on the input pixel value difference and obtaining the second weight map.
In some embodiments, the processing unit 730 determines the second weight map according to the first image sequence or the second image sequence, including:
the first image sequence is a first image, and the second weight map is determined according to the position information of the key target in the first image;
and/or the second image sequence is a second image, and the second weight map is determined according to the position information of the key target in the second image.
In some embodiments, the processing unit 730 determines the second weight map according to the position information of the key target, including:
and obtaining a weight map by using a pre-trained convolutional neural network, wherein the pre-trained convolutional neural network is used for carrying out target detection on an input image and obtaining the second weight map.
In some embodiments, the processing unit 730 processes the first image and the second image to obtain a third image, including:
when the average luminance of the first image is different from the average luminance of the second image, the average luminance of the first image is adjusted to be the same as the average luminance of the second image before the combining process is performed.
In some embodiments, the processing unit 730 is specifically configured to multiply the first image by the ratio of the average luminance of the second image to the average luminance of the first image, so that the average luminance of the first image is the same as the average luminance of the second image.
Referring to fig. 8, a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure is shown in fig. 8, where the image processing apparatus may include:
an obtaining unit 810, configured to obtain an output image of a single image sensor in a first operating mode, where the output image includes N consecutive frames of the first image, where N is a positive integer greater than 1;
a first processing unit 820, configured to perform first processing on N consecutive frames of the first image to obtain a second image; the first processing comprises weighting processing; the signal-to-noise ratio of the second image is greater than the signal-to-noise ratio of the first image;
the second processing unit 830 is configured to perform second processing on the first image and the second image to obtain a third image, where the second processing at least includes synthesis processing.
In some embodiments, the first processing unit 820 is specifically configured to perform weighting processing on M frames of the first image to obtain a second image, where M is a positive integer smaller than or equal to N and greater than 1.
In some embodiments, the first processing unit 820 is specifically configured to perform weighting processing on the M frames of the first image according to the M frames of the first weight map to obtain the second image.
In some embodiments, the configuration weight of each pixel position in the M frames of the first weight map is preset;
and/or obtaining the M frame first weight graph according to the pixel value relation among the M frame first images.
In some embodiments, when the configuration weight of each pixel position in the M frames of first weight maps is preset, for any one of the M frames of first weight maps, the configuration weights of the pixel positions within that frame are equal to each other.
In some embodiments, the first processing unit 820 is specifically configured to obtain the first weight map by calculating pixel value differences between the M frames of the first image, and includes: obtaining the first weight map according to the difference of each pixel value among the M frames of first image sequences; or, the first weight map is obtained according to the mean difference of a plurality of pixel values in the appointed area among the M frames of first images.
In some embodiments, the obtaining unit 810 is further configured to obtain an output image of the image sensor in the second operation mode when the single image sensor is switched from the first operation mode to the second operation mode, where the output image is a fourth image.
In some embodiments, the switching between the first and second operating modes is based on the intensity of the ambient light.
In some embodiments, the second processing unit 830 performs a second processing on the first image and the second image to obtain a third image, including:
according to the configuration weight of each pixel position in a second weight map, weighting each pixel value of the first image and the second image to obtain a third image; the second weight map is determined according to a first image sequence or a second image sequence, the first image sequence at least comprises one frame of first image, and the second image sequence at least comprises one frame of second image; the configured weight of any pixel location is used to determine the weighted weight of the first image and the second image at that pixel location.
In some embodiments, the second processing unit 830 determines the second weight map according to the first image sequence, including:
the first image sequence comprises a current frame first image and a historical frame first image, and the second weight map is determined according to the difference of pixel values of the current frame first image and the historical frame first image.
In some embodiments, the second processing unit 830 determines the second weight map according to the second image sequence, including:
the second image sequence comprises a current frame second image and a historical frame second image, and the second weight map is determined according to the pixel value difference of the current frame second image and the historical frame second image.
In some embodiments, the second processing unit 830 determines the second weight map according to the pixel value difference, including:
filtering the pixel value difference to obtain a second weight map;
and/or sending the pixel value difference into a pre-trained convolutional neural network to obtain the second weight map, wherein the pre-trained convolutional neural network is used for carrying out motion detection on the input pixel value difference and obtaining the second weight map.
In some embodiments, the second processing unit 830 determines the second weight map according to the first image sequence or the second image sequence, including:
the first image sequence is a first image, and the second weight map is determined according to the position information of the key target in the first image;
and/or the second image sequence is a second image, and the second weight map is determined according to the position information of the key target in the second image.
In some embodiments, the second processing unit 830 determines the second weight map according to the location information of the key target, including:
and obtaining a weight map by using a pre-trained convolutional neural network, wherein the pre-trained convolutional neural network is used for carrying out target detection on the input image and obtaining the second weight map.
In some embodiments, the second processing unit 830 performs a second processing on the first image and the second image to obtain a third image, including:
when the average luminance of the first image is different from the average luminance of the second image, the average luminance of the first image is adjusted to be the same as the average luminance of the second image before the combining process is performed.
In some embodiments, the second processing unit 830 is specifically configured to multiply the first image by the ratio of the average luminance of the second image to the average luminance of the first image, so that the average luminance of the first image is the same as the average luminance of the second image.
Fig. 9 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present disclosure. The electronic device may include a processor 901 and a memory 902 having machine-executable instructions stored thereon. The processor 901 and the memory 902 may communicate via a system bus 903. The processor 901 may perform the image processing method described above by reading and executing the machine-executable instructions in the memory 902.
The memory 902 referred to herein may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions, data, and the like. For example, the machine-readable storage medium may be: RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (e.g., a hard drive), a solid-state drive, any type of storage disk (e.g., an optical disk, a DVD, etc.), a similar storage medium, or a combination thereof.
In some embodiments, there is also provided a machine-readable storage medium, such as the memory 902 in fig. 9, having stored therein machine-executable instructions that, when executed by a processor, implement the image processing method described above. For example, the machine-readable storage medium may be a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and so forth.
Referring to fig. 10, a schematic structural diagram of a camera provided in an embodiment of the present application is shown in fig. 10, where the camera may include: a lens 1010, an image sensor 1020, a processor 1030, and a memory 1040; wherein:
the lens 1010 is configured to process incident light into a light signal incident on the image sensor;
the image sensor 1020 is configured to output an image in the first operating mode, where the output image includes a first image;
the processor 1030 is configured to perform first processing on N consecutive frames of first images to obtain second images; the first process comprises a weighting process; the signal-to-noise ratio of the second image is greater than that of the first image, wherein N is a positive integer greater than 1;
the memory 1040, configured to store the first image and the second image;
the processor 1030 is further configured to perform a second processing on the first image and the second image to obtain a third image, where the second processing at least includes a synthesizing processing.
In some embodiments, the weighting process is to perform weighting process on M frames of the first image to obtain a second image, where M is a positive integer less than or equal to N and greater than 1.
In some embodiments, the weighting process is to perform weighting processing on the M frames of the first image according to the M frames of the first weight map to obtain the second image.
In some embodiments, the configuration weight of each pixel position in the M frames of the first weight map is preset;
and/or the M frame first weight maps are obtained according to the pixel value relation among the M frame first images.
In some embodiments, when the configuration weight of each pixel position in the M frames of first weight maps is preset, for any one of the M frames of first weight maps, the configuration weights of the pixel positions within that frame are equal to each other.
In some embodiments, the processor 1030 obtains the first weight map by calculating pixel value differences between the M frames of the first image, including:
obtaining the first weight map according to the difference of each pixel value among the M frames of first image sequences;
or, the first weight map is obtained according to the mean difference of a plurality of pixel values in the appointed area among the M frames of first images.
In some embodiments, the image sensor 1020 is further configured to output a fourth image when the first operating mode is switched to the second operating mode.
In some embodiments, the switching between the first mode of operation and the second mode of operation is based on the intensity of the ambient light.
In some embodiments, the second processing of the first image and the second image by the processor 1030 to obtain a third image includes:
according to the configuration weight of each pixel position in a second weight map, weighting each pixel value of the first image and the second image to obtain a third image; the second weight map is determined according to a first image sequence or a second image sequence, the first image sequence at least comprises one frame of first image, and the second image sequence at least comprises one frame of second image.
In some embodiments, the processor 1030 determines a second weight map from the first sequence of images, including:
the first image sequence comprises a current frame first image and a historical frame first image, and the second weight map is determined according to the difference of pixel values of the current frame first image and the historical frame first image.
In some embodiments, the processor 1030 determines a second weight map from the second sequence of images, including:
the second image sequence comprises a current frame second image and a historical frame second image, and the second weight map is determined according to the pixel value difference of the current frame second image and the historical frame second image.
In some embodiments, the processor 1030 determines the second weight map from pixel value differences, including:
filtering the pixel value difference to obtain a second weight map;
and/or sending the pixel value difference into a pre-trained convolutional neural network to obtain the second weight map, wherein the pre-trained convolutional neural network is used for carrying out motion detection on the input pixel value difference and obtaining the second weight map.
In some embodiments, the processor 1030 determines a second weight map from the first image sequence or the second image sequence, including:
the first image sequence is a first image, and the second weight map is determined according to the position information of the key target in the first image;
and/or the second image sequence is a second image, and the second weight map is determined according to the position information of the key target in the second image.
In some embodiments, the processor 1030 determines the second weight map according to location information of a key target, including:
and obtaining a weight map by using a pre-trained convolutional neural network, wherein the pre-trained convolutional neural network is used for carrying out target detection on an input image and obtaining the second weight map.
In some embodiments, the second processing of the first image and the second image by the processor 1030 to obtain a third image includes:
when the average luminance of the first image is different from the average luminance of the second image, the average luminance of the first image is adjusted to be the same as the average luminance of the second image before the combining process is performed.
In some embodiments, the processor 1030 is specifically configured to multiply the first image by the ratio of the average luminance of the second image to the average luminance of the first image, so that the average luminance of the first image is the same as the average luminance of the second image.
It should be noted that, the embodiments of the camera, the image processing system, the image processing apparatus, and the image processing method may refer to each other, and the same steps are not described in detail.
It should be noted that, in this document, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional like elements in the process, method, article, or apparatus that comprises the element.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (25)

1. An image processing method, comprising:
acquiring an output image of a single image sensor in a first working mode, wherein the output image comprises N continuous first images, and performing first processing on the N continuous first images to obtain a second image; the first process comprises a weighting process; the signal-to-noise ratio of the second image is greater than that of the first image, wherein N is a positive integer greater than 1;
and carrying out second processing on the first image and the second image to obtain a third image, wherein the second processing at least comprises synthesis processing.
2. The method according to claim 1, wherein the weighting process is a weighting process performed on M frames of the first image to obtain a second image, where M is a positive integer less than or equal to N and greater than 1.
3. The method according to claim 2, wherein the weighting process is performed on the M frames of the first image according to an M frames of the first weight map to obtain the second image.
4. The method according to claim 3, wherein the arrangement weight of each pixel position in the M frames of the first weight map is preset;
and/or the M frame first weight maps are obtained according to the pixel value relation among the M frame first images.
5. The method according to claim 4, wherein when the arrangement weight of each pixel position in the M frames of the first weight map is preset, for any one of the M frames of the first weight map, the arrangement weights of the pixel positions within that frame are equal to each other.
6. The method of claim 4, wherein obtaining the first weight map by calculating the pixel value differences between the M frames of first images comprises: obtaining the first weight map according to the difference of each pixel value among the M frames of first images;
or obtaining the first weight map according to the mean difference of a plurality of pixel values in a specified area among the M frames of first images.
7. The method according to any one of claims 1-6, wherein when the single image sensor is switched from the first operation mode to the second operation mode, an output image of the image sensor in the second operation mode is obtained, and the output image is a fourth image.
8. The method of claim 7, wherein switching between the first mode of operation and the second mode of operation is based on ambient light level.
9. The method of claim 1, wherein the second processing the first image and the second image to obtain a third image comprises:
according to the configuration weight of each pixel position in a second weight map, weighting each pixel value of the first image and the second image to obtain a third image; the second weight map is determined according to a first image sequence or a second image sequence, the first image sequence at least comprises one frame of first image, and the second image sequence at least comprises one frame of second image; the configuration weight of any pixel position is used for determining the weighted weight of the first image and the second image at the pixel position.
10. The method of claim 9, wherein determining the second weight map from the first sequence of images comprises:
the first image sequence comprises a current frame first image and a historical frame first image, and the second weight map is determined according to the difference of pixel values of the current frame first image and the historical frame first image.
11. The method of claim 9, wherein determining the second weight map from the second sequence of images comprises:
the second image sequence comprises a current frame second image and a historical frame second image, and the second weight map is determined according to the pixel value difference of the current frame second image and the historical frame second image.
12. The method according to claim 10 or 11, wherein determining the second weight map from pixel value differences comprises:
filtering the pixel value difference to obtain the second weight map;
and/or sending the pixel value difference into a pre-trained convolutional neural network to obtain the second weight map, wherein the pre-trained convolutional neural network is used for carrying out motion detection on the input pixel value difference and obtaining the second weight map.
13. The method of claim 9, wherein determining the second weight map from the first image sequence or the second image sequence comprises:
the first image sequence is a first image, and the second weight map is determined according to the position information of the key target in the first image;
and/or the second image sequence is a second image, and the second weight map is determined according to the position information of the key target in the second image.
14. The method of claim 13, wherein determining the second weight map based on location information of key objects comprises:
and obtaining a weight map by using a pre-trained convolutional neural network, wherein the pre-trained convolutional neural network is used for carrying out target detection on an input image and obtaining the second weight map.
15. The method of claim 1, wherein the second processing the first image and the second image to obtain a third image comprises:
when the average luminance of the first image is different from the average luminance of the second image, the average luminance of the first image is adjusted to be the same as the average luminance of the second image before the combining process is performed.
16. The method of claim 15, wherein adjusting the average luminance of the first image to be the same as the average luminance of the second image comprises multiplying the first image by the ratio of the average luminance of the second image to the average luminance of the first image.
17. An image processing system, comprising: an image sensor, an image generation unit, and a processing unit; wherein:
the image sensor is used for outputting a first image in a first working mode;
the image generating unit is used for carrying out first processing on N continuous frames of first images to obtain second images; the first process comprises a weighting process; the signal-to-noise ratio of the second image is greater than that of the first image, wherein N is a positive integer greater than 1;
and the processing unit is used for processing the first image and the second image to obtain a third image, and the processing at least comprises synthesis processing.
18. The image processing system according to claim 17,
the image generating unit is specifically configured to perform weighting processing on M frames of the first image to obtain a second image, where M is a positive integer smaller than or equal to N and greater than 1.
19. The image processing system of claim 18,
the image generating unit is specifically configured to perform weighting processing on the M frames of first images according to the M frames of first weight maps to obtain second images.
20. The image processing system according to claim 19, wherein the arrangement weight of each pixel position in the M-frame first weight map is preset;
and/or obtaining the M frame first weight graph according to the pixel value relation among the M frame first images.
21. The image processing system according to claim 20, wherein when the arrangement weight of each pixel position in the M-frame first weight map is set in advance, for any one of the M frames of the first weight map, the arrangement weights of the pixel positions within that frame are equal to each other.
22. The image processing system of claim 20,
the image generating unit is specifically configured to obtain the first weight map by calculating a pixel value difference between the M frames of first images, and includes: obtaining the first weight map according to the difference of each pixel value among the M frames of first image sequences; or obtaining the first weight map according to the mean difference of a plurality of pixel values in the appointed area among the M frames of first images.
23. The image processing system according to any one of claims 17 to 22,
the image sensor is further used for outputting a fourth image when the first working mode is switched to the second working mode.
24. The image processing system of claim 23, wherein the switching between the first mode of operation and the second mode of operation is based on ambient light levels.
25. A camera, comprising: the camera comprises a lens, an image sensor, a processor and a memory; wherein:
the lens is used for processing incident light into a light signal incident to the image sensor;
the image sensor is used for outputting an image in a first working mode, and the output image comprises a first image;
the processor is used for carrying out first processing on N continuous frames of first images to obtain second images; the first process comprises a weighting process; the signal-to-noise ratio of the second image is greater than that of the first image, wherein N is a positive integer greater than 1;
the memory is used for storing the first image and the second image;
the processor is further configured to perform a second processing on the first image and the second image to obtain a third image, where the second processing at least includes a synthesizing processing.
CN202110501398.7A 2021-05-08 2021-05-08 Image processing method, system and camera Active CN115314627B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110501398.7A CN115314627B (en) 2021-05-08 2021-05-08 Image processing method, system and camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110501398.7A CN115314627B (en) 2021-05-08 2021-05-08 Image processing method, system and camera

Publications (2)

Publication Number Publication Date
CN115314627A true CN115314627A (en) 2022-11-08
CN115314627B CN115314627B (en) 2024-03-01

Family

ID=83853479

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110501398.7A Active CN115314627B (en) 2021-05-08 2021-05-08 Image processing method, system and camera

Country Status (1)

Country Link
CN (1) CN115314627B (en)


Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2005202479A1 (en) * 2004-06-08 2005-12-22 Canon Kabushiki Kaisha Picture composition guide
CN101305400A (en) * 2005-10-12 2008-11-12 有源光学有限公司 Method and system for processing image
CN101866092A (en) * 2009-04-17 2010-10-20 索尼公司 Generate the long exposure image that simulated in response to a plurality of short exposures
WO2011090107A1 (en) * 2010-01-21 2011-07-28 オリンパス株式会社 Image processing device, imaging device, program, and image processing method
GB201205402D0 (en) * 2011-04-06 2012-05-09 Csr Technology Inc In camera implementation of selecting and stitching frames for panoramic imagery
CN102970549A (en) * 2012-09-20 2013-03-13 华为技术有限公司 Image processing method and image processing device
CN110460780A (en) * 2014-01-10 2019-11-15 高通股份有限公司 System and method for using multiple short exposure capture digital pictures
CN105874781A (en) * 2014-01-10 2016-08-17 高通股份有限公司 System and method for capturing digital images using multiple short exposures
CN104320576A (en) * 2014-09-30 2015-01-28 百度在线网络技术(北京)有限公司 Image processing method and image processing apparatus for portable terminal
CN109005366A (en) * 2018-08-22 2018-12-14 Oppo广东移动通信有限公司 Camera module night scene image pickup processing method, device, electronic equipment and storage medium
WO2020177723A1 (en) * 2019-03-06 2020-09-10 深圳市道通智能航空技术有限公司 Image processing method, night photographing method, image processing chip and aerial camera
CN112532855A (en) * 2019-09-17 2021-03-19 华为技术有限公司 Image processing method and device
WO2021077963A1 (en) * 2019-10-25 2021-04-29 北京迈格威科技有限公司 Image fusion method and apparatus, electronic device, and readable storage medium
CN110611750A (en) * 2019-10-31 2019-12-24 北京迈格威科技有限公司 Night scene high dynamic range image generation method and device and electronic equipment

Also Published As

Publication number Publication date
CN115314627B (en) 2024-03-01


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant