WO2020119504A1 - Image processing method and system - Google Patents

Image processing method and system

Info

Publication number
WO2020119504A1
WO2020119504A1, PCT/CN2019/122437, CN2019122437W
Authority
WO
WIPO (PCT)
Prior art keywords
image
exposure
fill light
analyzed
target
Application number
PCT/CN2019/122437
Other languages
English (en)
Chinese (zh)
Inventor
范蒙
俞海
浦世亮
Original Assignee
杭州海康威视数字技术股份有限公司 (Hangzhou Hikvision Digital Technology Co., Ltd.)
Application filed by 杭州海康威视数字技术股份有限公司 (Hangzhou Hikvision Digital Technology Co., Ltd.)
Publication of WO2020119504A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/70 Circuitry for compensating brightness variation in the scene
    • H04N 23/74 Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means
    • H04N 23/80 Camera processing pipelines; Components thereof

Definitions

  • the present application relates to the field of image processing technology, in particular to an image processing method and system.
  • in related technology, information in the environment can usually be recognized from the images captured by a camera.
  • however, because of the variability of light, it is difficult for a camera to output high-quality images under different ambient light: image quality is good when the light is good and poor when the light is poor. In the related technology, the images captured by the camera therefore cannot be applied in all environments, resulting in poor perception of environmental information.
  • the purpose of the embodiments of the present application is to provide an image processing method and system to improve the quality of an image to be analyzed for output or intelligent analysis.
  • the specific technical solutions are as follows:
  • an image processing system including:
  • an image sensor for generating and outputting a first image signal and a second image signal through multiple exposures, wherein the first image signal is an image signal generated according to a first preset exposure, the second image signal is an image signal generated according to a second preset exposure, and the first preset exposure and the second preset exposure are two of the multiple exposures;
  • a fill light device for performing near-infrared fill light in a strobe manner; specifically, the fill light device performs near-infrared fill light during the exposure period of the first preset exposure and performs no near-infrared fill light during the exposure period of the second preset exposure;
  • an image processor for receiving the first image signal and the second image signal output by the image sensor, generating a first target image based on the first image signal, and generating a second target image based on the second image signal;
  • An intelligent analysis device is configured to obtain an image to be analyzed from the first target image and the second target image, and perform an intelligent analysis on the image to be analyzed to obtain an intelligent analysis result corresponding to the image to be analyzed.
  • an embodiment of the present application provides an image processing method, including:
  • the image sensor generates and outputs the first image signal and the second image signal through multiple exposures, wherein the first image signal is an image signal generated according to a first preset exposure, the second image signal is an image signal generated according to a second preset exposure, and the first preset exposure and the second preset exposure are two of the multiple exposures; during the exposure period of the first preset exposure, the fill light device performs near-infrared fill light, and during the exposure period of the second preset exposure, the fill light device does not perform near-infrared fill light;
  • an image processing apparatus including:
  • an image signal obtaining module for obtaining the first image signal and the second image signal output by the image sensor, wherein the image sensor generates and outputs the first image signal and the second image signal through multiple exposures, the first image signal is an image signal generated according to a first preset exposure, the second image signal is an image signal generated according to a second preset exposure, and the first preset exposure and the second preset exposure are two of the multiple exposures; the fill light device performs near-infrared fill light during the exposure period of the first preset exposure and does not perform near-infrared fill light during the exposure period of the second preset exposure;
  • An image generating module configured to generate a first target image based on the first image signal, and generate a second target image based on the second image signal;
  • an image selection module configured to obtain an image to be analyzed from the first target image and the second target image;
  • the image analysis module is configured to perform intelligent analysis on the image to be analyzed to obtain an intelligent analysis result corresponding to the image to be analyzed.
  • an embodiment of the present application provides an electronic device including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with each other through the communication bus;
  • the memory is used to store a computer program;
  • the processor, when executing the program stored in the memory, implements the steps of the image processing method provided by the embodiments of the present application.
  • this solution applies near-infrared fill light to the target scene to regulate its light environment, so that the quality of the image signal received by the image sensor can be guaranteed, which in turn guarantees the quality of the image used for output or intelligent analysis. Therefore, this solution can improve the quality of the image to be analyzed for output or intelligent analysis.
  • FIG. 1 is a schematic structural diagram of an image processing system provided by an embodiment of the present application.
  • FIG. 2 is another schematic structural diagram of an image processing system provided by an embodiment of the present application.
  • FIG. 3(a) is a schematic diagram of the principle when the image processing system provided by the embodiment of the present application completes image processing through multiple units;
  • FIG. 3(b) is another schematic diagram of the image processing system provided by the embodiment of the present application when image processing is completed by multiple units;
  • FIG. 3(c) is another schematic diagram of the image processing system provided by the embodiment of the present application when image processing is completed by multiple units together;
  • FIG. 4 is a schematic diagram of the array corresponding to the RGBIR image sensor;
  • FIG. 5(a) is a schematic diagram illustrating the relationship between exposure and near-infrared fill light according to an embodiment of the present application;
  • FIG. 5(b) is another schematic diagram illustrating the relationship between exposure and near-infrared fill light according to an embodiment of the present application;
  • FIG. 6 is a schematic diagram of the principle of spectral blocking;
  • FIG. 7 is a spectral diagram of a near-infrared light source;
  • FIG. 8 is a flowchart of an image processing method provided by an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of an image processing device provided by an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • Visible light is an electromagnetic wave that human eyes can perceive.
  • the visible spectrum has no precise range; generally, the wavelength of electromagnetic waves that the human eye can perceive is between 400 and 760 nm (nanometers), but some people can perceive electromagnetic waves with wavelengths between about 380 and 780 nm.
  • Near-infrared light refers to electromagnetic waves with a wavelength in the range of 780-2526nm.
  • the visible light image refers to a color image that only perceives visible light signals, and the color image is only sensitive to the visible light band.
  • Infrared-sensitive image refers to a brightness image that perceives near-infrared light signals. It should be noted that the infrared-sensing image is not limited to a brightness image that only perceives near-infrared light signals, but it may also be a brightness image that perceives near-infrared light signals and other band light signals.
  • embodiments of the present application provide an image processing system.
  • an image processing system provided by an embodiment of the present application may include:
  • the image sensor 110 is configured to generate and output a first image signal and a second image signal through multiple exposures, wherein the first image signal is an image signal generated according to a first preset exposure, the second image signal is an image signal generated according to a second preset exposure, and the first preset exposure and the second preset exposure are two of the multiple exposures;
  • the fill light device 120 is used to perform near-infrared fill light in a strobe manner; specifically, the fill light device 120 performs near-infrared fill light during the exposure period of the first preset exposure and performs no near-infrared fill light during the exposure period of the second preset exposure;
  • the image processor 130 is configured to receive the first image signal and the second image signal output by the image sensor 110, generate a first target image according to the first image signal, and generate a second target image according to the second image signal;
  • the intelligent analysis device 140 is configured to obtain an image to be analyzed from the first target image and the second target image, and perform an intelligent analysis on the image to be analyzed to obtain an intelligent analysis result corresponding to the image to be analyzed.
  • the image sensor 110 described in the embodiments of the present application may be exposed periodically, with multiple exposures in each cycle.
  • generating and outputting the first image signal and the second image signal through multiple exposures may mean generating and outputting them through multiple exposures within one cycle, but is not limited to multiple exposures within a single cycle.
  • the fill light device 120 performs near-infrared fill light during the exposure period of the first preset exposure but not during the exposure period of the second preset exposure; the first preset exposure and the second preset exposure are different exposures.
  • under this exposure and fill light control, when generating the first target image according to the first image signal generated by the first preset exposure, the first image signal can be interpolated, and the interpolated infrared-sensed image can be used as the first target image.
  • when generating the second target image based on the second image signal generated by the second preset exposure, the second image signal can be subjected to infrared removal processing to obtain a visible light image, and that visible light image, or the visible light image after image enhancement, is used as the second target image; alternatively, the second image signal of the frame can first be subjected to wide dynamic range processing and then to infrared removal processing to obtain a visible light image, which is used as the second target image.
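The infrared removal step above can be sketched per pixel. This is a simplified illustration rather than the application's actual processing: the function name and the plain per-channel subtraction model are assumptions, and a real pipeline would use demosaicing and calibrated IR coefficients.

```python
def remove_infrared(rgb, ir):
    """Subtract the near-infrared component sensed alongside each visible
    channel value: a simplified per-pixel stand-in for the 'infrared
    removal processing' that turns the second image signal into a
    visible light image. Values are clamped at zero."""
    r, g, b = rgb
    return (max(r - ir, 0), max(g - ir, 0), max(b - ir, 0))
```

For example, a pixel sensed as (R, G, B) = (100, 80, 60) with an interpolated IR component of 30 would yield a visible value of (70, 50, 30).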
  • the structural schematic diagram of an image processing system shown in FIG. 1 is merely an example, and should not constitute a limitation on the embodiments of the present application.
  • the fill light device 120 may be electrically connected with the image sensor 110, the image processor 130, or the intelligent analysis device 140.
  • the fill light device 120 can then be controlled by the connected image sensor 110, image processor 130, or intelligent analysis device 140.
  • the image sensor 110, the fill light device 120, the image processor 130, and the intelligent analysis device 140 included in the image processing system can be integrated into one electronic device.
  • in this case, the electronic device has fill light, image signal acquisition, and image processing functions.
  • the electronic device may be a camera, or other devices capable of acquiring images.
  • alternatively, the components included in the image processing system may be deployed across at least two electronic devices.
  • any one of the at least two electronic devices has one or more of the fill light, image signal acquisition, image processing, and intelligent analysis functions.
  • the fill light device 120 is a separate device, and the image sensor 110, the image processor 130, and the intelligent analysis device 140 are all deployed in the camera; or, the fill light device 120 is a separate device, The image sensor 110 is deployed in the camera, and the image processor 130 and the intelligent analysis device 140 are deployed in a terminal or server associated with the camera.
  • the device where the image sensor 110 is located may further include an optical lens, so that light rays enter the image sensor 110 through the optical lens.
  • the fill light device 120 adopts a stroboscopic manner to perform near-infrared fill light on the target scene, that is, to perform non-continuous near-infrared light illumination on the target scene.
  • the fill light device 120 is a device that can emit near-infrared light, such as a fill light lamp; the fill light of the fill light device 120 can be controlled manually, by a software program, or by a specific device, all of which are reasonable.
  • the specific wavelength range of the near-infrared light used by the near-infrared fill light is not specifically limited in this application.
  • the near-infrared light source has a strong light intensity at about 850 nm; therefore, in specific applications, in order to obtain the maximum response from the image sensor 110, the embodiments of this application may use near-infrared light with a wavelength of 850 nm, but are not limited thereto.
  • the fill light device 120 provides near-infrared light in a stroboscopic manner; specifically, near-infrared fill light is performed on the external scene by controlling the alternation of the near-infrared light between bright and dark.
  • the period from the start to the end of a bright interval is considered near-infrared fill light to the scene, and the period from the end of one bright interval to the start of the next is considered to provide no near-infrared light to the scene.
  • the image processing system provided by the embodiments of the present application is a single-sensor perception system, that is, there is a single image sensor 110.
  • the image sensor 110 includes a plurality of photosensitive channels, the plurality of photosensitive channels includes an IR photosensitive channel, and further includes at least two of an R photosensitive channel, a G photosensitive channel, a B photosensitive channel, and a W photosensitive channel.
  • the multiple photosensitive channels generate and output the first image signal and the second image signal through the multiple exposures;
  • the R photosensitive channel is used for sensing light in the red and near-infrared bands;
  • the G photosensitive channel is used for sensing light in the green and near-infrared bands;
  • the B photosensitive channel is used for sensing light in the blue and near-infrared bands;
  • IR denotes the infrared photosensitive channel, used to sense light in the near-infrared band;
  • W denotes the all-pass photosensitive channel, used to sense light in the full band.
  • the image sensor 110 may be an RGBIR sensor, an RGBWIR sensor, an RWBIR sensor, an RWGIR sensor, or a BWGIR sensor; where R represents an R photosensitive channel, G represents a G photosensitive channel, B represents a B photosensitive channel, and IR represents an IR photosensitive channel, W means all-pass photosensitive channel.
  • the image sensor 110 in the embodiment of the present application may be an RGBIR sensor, and the RGBIR sensor has a red-green-blue RGB photosensitive channel and an infrared IR photosensitive channel.
  • the RGB photosensitive channels are sensitive to both the visible band and the near-infrared band but are mainly used to sense the visible band; the IR photosensitive channel is a channel sensitive to the near-infrared band.
  • the arrangement of the R photosensitive channel, G photosensitive channel, B photosensitive channel, and IR photosensitive channel can be seen in FIG. 4.
  • the RGBIR image sensor senses light through the R photosensitive channel, G photosensitive channel, B photosensitive channel, and IR photosensitive channel to obtain the corresponding image signals.
  • the sensitivity value corresponding to the R photosensitive channel includes the R channel value and the IR channel value
  • the sensitivity value corresponding to the G photosensitive channel includes the G channel value and the IR channel value
  • the sensitivity value corresponding to the B photosensitive channel includes the B channel value and the IR channel value
  • the sensitivity value corresponding to the IR sensitivity channel includes the IR channel value.
  • generally, the R channel value and the IR channel value sensed by the R photosensitive channel are different, the G channel value and the IR channel value sensed by the G photosensitive channel are different, the B channel value and the IR channel value sensed by the B photosensitive channel are different, and the IR channel value sensed by the IR photosensitive channel differs as well.
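As a concrete illustration of such a channel arrangement, one commonly used 4x4 RGB-IR mosaic tile can be written down and queried per channel. The layout below is an assumption chosen for illustration; it need not match the arrangement shown in FIG. 4 of the application.

```python
# One common 4x4 RGB-IR mosaic layout (illustrative; FIG. 4 may differ).
RGBIR_PATTERN = [
    ["B", "G", "R", "G"],
    ["G", "IR", "G", "IR"],
    ["R", "G", "B", "G"],
    ["G", "IR", "G", "IR"],
]

def channel_positions(pattern, channel):
    """Return the (row, col) positions of the pixels belonging to one
    photosensitive channel within the repeating mosaic tile."""
    return [(r, c)
            for r, row in enumerate(pattern)
            for c, name in enumerate(row)
            if name == channel]
```

In this illustrative tile, green samples dominate (8 of 16 pixels), with 4 IR samples and 2 each of red and blue, which is why the sparse channels must be interpolated to full resolution.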
  • during the first preset exposure, the image signal captured by the RGBIR image sensor is the first image signal; during the second preset exposure, the image signal captured by the RGBIR image sensor is the second image signal.
  • the values of the R photosensitive channel, G photosensitive channel, B photosensitive channel, and IR photosensitive channel in the first image signal differ from those in the second image signal; that is, the channel value of each photosensitive channel in the first image signal is different from the channel value of the same photosensitive channel in the second image signal.
  • the optical lens of the device where the image sensor 110 is located may be provided with a filter.
  • the spectral region filtered out by the filter may include [T1, T2], where 600 nm ≤ T1 ≤ 800 nm, 750 nm ≤ T2 ≤ 1100 nm, and T1 < T2.
  • the R, G, B, and IR photosensitive channels differ greatly in response within the near-infrared band (650 nm to 1100 nm).
  • to remove the near-infrared light component responsible for this difference, a filter is provided on the optical lens to filter out the spectral region where the responses differ greatly.
  • the filter can be integrated on the above-mentioned optical lens by coating technology; in addition, the filter can be a band-stop filter or a bimodal filter with lower cost.
  • the spectral region filtered out by the filter may further include a spectral region [T3, +∞), where 850 nm ≤ T3 ≤ 1100 nm and T2 < T3.
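The blocked regions can be sketched as a small predicate: the filter removes [T1, T2] plus the optional [T3, +∞) tail. The default values below (T1 = 700 nm, T2 = 800 nm, T3 = 900 nm) are illustrative picks from the stated ranges, not values given in the application; note that they leave a pass band around 850 nm for the near-infrared fill light.

```python
def is_blocked(wavelength_nm, t1=700.0, t2=800.0, t3=900.0):
    """Return True if the filter removes this wavelength.

    The filter blocks [t1, t2] and, when t3 is given, [t3, +inf).
    Defaults are assumptions satisfying 600 <= T1 <= 800,
    750 <= T2 <= 1100, 850 <= T3 <= 1100, and T1 < T2 < T3.
    """
    if t1 <= wavelength_nm <= t2:
        return True
    return t3 is not None and wavelength_nm >= t3
```

With these illustrative cutoffs, visible light (e.g. 550 nm) and the 850 nm fill light pass, while the 700-800 nm and above-900 nm regions, where channel responses diverge, are removed.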
  • the fill light device 120 may adopt a strobe method to perform near-infrared fill light on the target scene. Furthermore, the fill light device performing near-infrared fill light during the exposure period of the first preset exposure may mean that, within that exposure period, the start time of the near-infrared fill light is not earlier than the exposure start time of the first preset exposure, and the end time of the near-infrared fill light is not later than the exposure end time of the first preset exposure.
  • FIG. 5(a) and FIG. 5(b) exemplarily show a schematic diagram of the relationship between the exposure time and the fill time of the near infrared fill light.
  • in FIG. 5(a), two exposures are used for the image sensor 110, that is, two exposures occur within one exposure cycle, defined as an odd exposure and an even exposure, respectively.
  • near-infrared fill light is performed on the target scene during the even exposure, that is, the even exposure is the first preset exposure.
  • the rising edge of the near-infrared fill light is later than the start time of the even exposure, and the falling edge of the near-infrared fill light is earlier than the end time of the even exposure.
  • in FIG. 5(b), multiple exposures are used for the image sensor 110, that is, three exposures occur within one exposure cycle, defined as exposure A, exposure B, and exposure C, respectively.
  • near-infrared fill light is performed on the target scene during exposure C, that is, exposure C is the first preset exposure.
  • the rising edge of the near-infrared fill light is later than the start of exposure C, and the falling edge of the near-infrared fill light is earlier than the end of exposure C.
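The timing relationship shown in FIG. 5(a) and FIG. 5(b) can be checked mechanically. The helper below is a sketch with illustrative names, treating all times as numbers in any common unit.

```python
def fill_light_timing_ok(exposure_start, exposure_end,
                         fill_start, fill_end):
    """Check that the fill light's rising edge is not earlier than the
    exposure start and its falling edge is not later than the exposure
    end, i.e. the fill pulse lies entirely inside the first preset
    exposure's exposure period."""
    return exposure_start <= fill_start <= fill_end <= exposure_end
```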
  • the exposure parameter corresponding to any exposure accompanied by fill light may be required not to exceed a target maximum value, where the exposure parameter is the exposure duration and/or gain, and the target maximum value is the maximum among the exposure parameters corresponding to the exposures without fill light.
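One way to honor the constraint above is to clamp the parameters of the fill-light exposures against the non-fill exposures; the helper below is an illustrative sketch (the function and parameter names are assumptions, not part of the application).

```python
def clamp_fill_light_params(fill_params, no_fill_params):
    """Clamp each exposure parameter (duration or gain) used during
    fill-light exposures so it does not exceed the target maximum:
    the largest parameter among the exposures without fill light."""
    target_max = max(no_fill_params)
    return [min(p, target_max) for p in fill_params]
```

For example, with non-fill exposure durations of 4 and 6 units, a requested fill-light duration of 10 would be clamped to 6 while a duration of 3 is left unchanged.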
  • a second target image without near-infrared fill light and a first target image with near-infrared fill light can be captured.
  • the fill light device 120 provides near-infrared fill light at least during the exposure process in which the image sensor 110 captures the first image signal.
  • the fill light device 120 must not provide near-infrared fill light during the exposure process in which the image sensor 110 captures the second image signal.
  • the number of near-infrared fill light operations of the fill light device 120 per unit time is lower than the number of exposures of the image sensor 110 per unit time, with one or more exposures between every two adjacent near-infrared fill light operations. In this way, the fill light device 120 provides near-infrared fill light only during some of the exposures of the image sensor 110.
  • the specific timing of the fill light of the fill light device 120 in multiple exposures may be set according to actual scene requirements, that is, the first preset exposure may be set according to actual scene requirements.
  • multiple exposures may include odd-numbered exposures and even-numbered exposures.
  • the configuration method of the first preset exposure may be as follows:
  • the first preset exposure is one of odd exposures
  • the second preset exposure is one of even exposures.
  • the first image signal is a signal generated according to one of the odd exposures
  • the second image signal is a signal generated according to one of the even exposures.
  • the first preset exposure is one of even-numbered exposures
  • the second preset exposure is one of odd-numbered exposures.
  • the first image signal is a signal generated according to one of the even exposures
  • the second image signal is a signal generated according to one of the odd exposures.
  • the first preset exposure is one of specified odd-numbered exposures, and the second preset exposure is one of the exposures other than the specified odd-numbered exposures.
  • accordingly, the first image signal is a signal generated according to one of the specified odd-numbered exposures, and the second image signal is a signal generated according to one of the exposures other than the specified odd-numbered exposures.
  • alternatively, the first preset exposure is one of specified even-numbered exposures, and the second preset exposure is one of the exposures other than the specified even-numbered exposures.
  • accordingly, the first image signal is a signal generated according to one of the specified even-numbered exposures, and the second image signal is a signal generated according to one of the exposures other than the specified even-numbered exposures.
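The parity-based configurations above amount to partitioning the exposure indices of a cycle into fill-light candidates and non-fill candidates. A minimal sketch (function and parameter names are illustrative):

```python
def split_exposures_by_parity(num_exposures, fill_on_odd=True):
    """Partition 1-based exposure indices into candidates for the first
    preset exposure (with near-infrared fill light) and the second
    preset exposure (without), according to parity."""
    odd = [i for i in range(1, num_exposures + 1) if i % 2 == 1]
    even = [i for i in range(1, num_exposures + 1) if i % 2 == 0]
    return (odd, even) if fill_on_odd else (even, odd)
```

The "specified" variants in the text would simply pick particular indices out of the returned candidate lists.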
  • the timing of the fill light of the fill light device 120 in multiple exposures given above is merely an example, and should not constitute a limitation on the embodiments of the present application.
  • the intelligent analysis device 140 may obtain the image to be analyzed from the first target image and the second target image, and perform intelligent analysis on the image to be analyzed to obtain an intelligent analysis result corresponding to the image to be analyzed.
  • the intelligent analysis device 140 may acquire corresponding images to be analyzed according to scene requirements, and perform intelligent analysis on the acquired images to be analyzed.
  • in one implementation, the intelligent analysis device 140 may obtain the first target image from the first target image and the second target image and determine the first target image as the image to be analyzed. In this way, the intelligent analysis device performs intelligent analysis based on the first target image by default.
  • in another implementation, the intelligent analysis device 140 may obtain the second target image from the first target image and the second target image and determine the second target image as the image to be analyzed. In this way, the intelligent analysis device performs intelligent analysis based on the second target image by default.
  • in yet another implementation, when the received selection signal switches to the first selection signal, the intelligent analysis device 140 acquires the first target image and determines it as the image to be analyzed; when the received selection signal switches to the second selection signal, it acquires the second target image and determines it as the image to be analyzed. In this way, the intelligent analysis device can switch between the first target image and the second target image for intelligent analysis.
  • selecting the corresponding image according to the selection signal can improve the controllability of the image processing system, that is, switch the type of the acquired image according to different needs.
  • the above specific implementation of selecting the corresponding image according to the selection signal is only an optional implementation.
  • all methods that can realize the selection, whether by a selection signal, a mode selection, or a default selection, are within the protection scope of the present application, and the present application does not limit this.
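The selection-signal behavior can be sketched as a simple dispatch. The signal values and the default branch below are assumptions for illustration; the application does not prescribe concrete signal encodings.

```python
def select_image_to_analyze(selection_signal, first_target, second_target,
                            default="first"):
    """Return the image to be analyzed: the first target image (sensed
    with near-infrared fill light) on the first selection signal, the
    second target image (visible light) on the second, or a default
    choice when no signal has been received."""
    signal = selection_signal if selection_signal is not None else default
    return first_target if signal == "first" else second_target
```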
  • the image processing system is embodied in the form of multiple units, and the multiple units jointly complete the image processing process.
  • the division of the image processing system in FIG. 3(a) does not constitute a limitation on the present application, but is merely an exemplary description.
  • the image processing system includes: a scene collection unit, a scene processing unit, a scene perception unit, and a scene fill light unit.
  • the scene collection unit may include the above-mentioned optical lens, filter and image sensor 110.
  • the scene fill light unit is the fill light device 120 described above.
  • the function implemented by the scene processing unit is the function of the image processor 130 described above.
  • the function is specifically: the scene processing unit obtains the first image signal and the second image signal output by the scene collection unit, and according to the first image signal A first target image is generated, and a second target image is generated based on the second image signal.
  • the scene perception unit is the above-mentioned intelligent analysis device 140, which is used to obtain the image to be analyzed from the first target image and the second target image, and intelligently analyze the image to be analyzed to obtain the to-be-analyzed The intelligent analysis result corresponding to the image.
  • the image processing system includes: a scene collection unit, a scene processing unit, a selection unit, a scene perception unit, and a scene fill light unit.
  • the scene collection unit may include the above-mentioned optical lens, filter and image sensor 110.
  • the scene fill light unit is the fill light device 120 described above.
  • the function implemented by the scene processing unit is the function of the image processor 130 described above. The function is specifically: the scene processing unit obtains the first image signal and the second image signal output by the scene collection unit, and according to the first image signal A first target image is generated, and a second target image is generated based on the second image signal.
  • the functions implemented by the selection unit and the scene perception unit are the functions implemented by the intelligent analysis device 140; specifically: when the received selection signal switches to the first selection signal, the first target image is acquired and determined as the image to be analyzed, and intelligent analysis is performed on it to obtain the corresponding intelligent analysis result; when the received selection signal switches to the second selection signal, the second target image is acquired and determined as the image to be analyzed, and intelligent analysis is performed on it to obtain the corresponding intelligent analysis result.
  • this solution applies near-infrared fill light to the target scene to regulate its light environment, so that the image signal quality of the image sensor can be guaranteed, which in turn guarantees the quality of the image used for output or intelligent analysis. Therefore, this solution can improve the quality of the image to be analyzed for output or intelligent analysis.
  • the multiple exposures of the image sensor 110 specifically include: the image sensor 110 performs the multiple exposures according to a first exposure parameter, whose parameter type includes at least one of exposure time and exposure gain;
  • the fill light device performs near-infrared fill light during the exposure time period of the first preset exposure specifically as follows: the fill light device performs near-infrared fill light during the exposure time period of the first preset exposure according to a first fill light parameter, whose parameter type includes at least one of fill light intensity and fill light concentration.
  • the exposure parameters and/or fill light parameters may be adjusted based on image information corresponding to the image to be analyzed.
  • the image processing system provided by the embodiment of the present application may further include: a control unit 150;
  • the control unit 150 is configured to obtain brightness information corresponding to the image to be analyzed, adjust the first fill light parameter to a second fill light parameter according to the brightness information corresponding to the image to be analyzed, and adjust the first exposure parameter to a second exposure parameter; and send the second fill light parameter to the fill light device 120 while synchronously sending the second exposure parameter to the image sensor 110;
  • the fill light device 120 performs near infrared fill light during the exposure time period of the first preset exposure, specifically: the fill light device 120 receives the second fill light parameter from the control unit and, according to the second fill light parameter, performs near infrared fill light during the exposure time period of the first preset exposure;
  • the multiple exposure of the image sensor 110 is specifically: the image sensor 110 receives the second exposure parameter from the control unit, and performs the multiple exposure according to the second exposure parameter.
  • the image processing system shown in FIG. 2 is only an example, and should not constitute a limitation on the embodiment of the application.
  • in addition to the fill light device 120, the control unit 150 can be connected to the image sensor 110, the image processor 130, or the intelligent analysis device 140, so that the control unit 150 can interact with the image sensor 110, the image processor 130, or the intelligent analysis device 140 to complete image processing.
  • the control unit 150 may be located in the same device as the fill light device 120, or may be located in a different device from the fill light device 120, which is reasonable.
  • the function performed by the control unit 150 may be performed by the image processor 130 or the intelligent analysis device 140.
  • the exposure parameters of the image sensor 110 and/or the fill light parameters of the fill light device 120 can be adjusted based on the brightness information corresponding to the image to be analyzed.
  • the brightness information corresponding to the image to be analyzed may be obtained according to the intelligent analysis result corresponding to the image to be analyzed, which may specifically include:
  • when the intelligent analysis result corresponding to the image to be analyzed includes the position information of the target of interest contained in the image to be analyzed, determining at least one target area in the image to be analyzed according to the position information;
  • the average brightness of the at least one target area is determined as brightness information corresponding to the image to be analyzed.
  • At least one target area can be selected from the areas indicated by the position information; in this case, each target area is the area where a target of interest is located.
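  • The brightness computation described above can be sketched as follows (an illustrative Python sketch, not taken from the patent: the function name and the (x, y, w, h) box format for the position information are assumptions):

```python
def average_brightness(image, target_areas):
    """Average luminance over the target areas of the image to be analyzed.

    image: 2D list of luminance values (rows of pixels).
    target_areas: list of (x, y, w, h) boxes, assumed format for this sketch.
    """
    total, count = 0, 0
    for x, y, w, h in target_areas:
        for row in image[y:y + h]:      # rows covered by the target area
            for v in row[x:x + w]:      # columns covered by the target area
                total += v
                count += 1
    return total / count if count else 0.0
```

For example, with two target areas the returned value is the mean over all pixels they cover, which can then be compared against the predetermined thresholds described later.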
  • the adjusting the first exposure parameter to the second exposure parameter according to the brightness information corresponding to the image to be analyzed includes:
  • the adjusting the first fill light parameter to the second fill light parameter according to the brightness information corresponding to the image to be analyzed may include:
  • first predetermined threshold and the third predetermined threshold may be the same value or different values.
  • second predetermined threshold and the fourth predetermined threshold may be the same value or different values.
  • specific values of the first predetermined threshold, the second predetermined threshold, the third predetermined threshold, and the fourth predetermined threshold may be set according to empirical values.
  • the first fill light parameter and the second fill light parameter are only used to distinguish the fill light parameters before and after adjustment, and do not impose any limitation.
  • the first exposure parameter and the second exposure parameter are only used to distinguish the exposure parameters before and after adjustment, and do not impose any limitation.
  • the degree of increase or decrease of the fill light parameter and the exposure parameter can also be set based on empirical values.
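  • The threshold-based adjustment logic described in this passage can be sketched as follows (an illustrative Python sketch; the function name, the fixed step size, and the symmetric up/down step are assumptions — the patent only specifies that a parameter is adjusted down when brightness exceeds one threshold and up when it falls below another, with the degrees set empirically):

```python
def adjust_parameter(current, brightness, high_thresh, low_thresh, step):
    """Adjust an exposure or fill light parameter from brightness feedback.

    If brightness exceeds high_thresh, the scene is too bright: lower the
    parameter. If brightness falls below low_thresh, raise it. Otherwise
    leave it unchanged. high_thresh is assumed to be above low_thresh.
    """
    if brightness > high_thresh:
        return current - step
    if brightness < low_thresh:
        return current + step
    return current
```

The same function could be applied to the exposure parameter (against the first and second predetermined thresholds) and to the fill light parameter (against the third and fourth).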
  • the image processing system in this application further includes a control unit, which is used to adaptively control the fill light of the fill light device 120 and the exposure of the image sensor 110.
  • the image processing system is embodied in the form of multiple units, and the multiple units jointly complete the image processing process.
  • the division of the image processing system in FIG. 3(c) does not constitute a limitation on the present application, but is merely an exemplary description.
  • the electronic device includes: a scene collection unit, a scene processing unit, a scene perception unit, a scene fill light unit, and a control unit.
  • the scene collection unit may include: the above-mentioned optical lens, filter and image sensor 110; the scene fill light unit is the above-mentioned fill light device 120; the control unit is the above-mentioned control unit 150; the scene processing unit implements the functions implemented by the image processor 130 described above; and the scene perception unit implements the functions implemented by the intelligent analysis device 140 described above.
  • the control of the scene fill light unit and the scene collection unit in the system shown in FIG. 3(b) can also refer to FIG. 3(c): a control unit can be added to perform the fill light control of the scene fill light unit and the collection control of the scene collection unit, and the fill light control of the scene fill light unit and the collection control of the scene collection unit can also be adjusted according to the intelligent analysis results fed back by the scene perception unit.
  • the processing performed by the image processor 130 may further include the following step: outputting the second target image for display; for example, the output second target image may be displayed on a display device outside the system.
  • the image processor 130 may output only the second target image, or may simultaneously output the second target image and the first target image.
  • the specific image to be output is determined according to actual needs, and is not limited here.
  • the content related to generating a first target image based on the first image signal and generating a second target image based on the second image signal will be described below.
  • For the above single-sensor perception system, there are many specific implementation manners in which the image processor 130 generates the first target image according to the first image signal. Those skilled in the art can understand that, since the signals of the sensor's channels (including the IR channel and at least two non-IR channels) are staggered, directly magnifying the image signal obtained by the sensor reveals a mosaic phenomenon and poor clarity in the image, so demosaicing is needed to generate an image with true detail. In order to obtain a clear and accurate first target image, the first image signal may be demosaiced, and the demosaiced image signal may then be used to generate the first target image. Based on this, in one implementation, the image processor 130 generating the first target image according to the first image signal includes:
  • interpolation processing is performed in an averaging manner, and the first target image is obtained according to the interpolation-processed image.
  • the interpolation-processed image may be determined as the first target image; or, the interpolation-processed image may be subjected to image enhancement processing, and the image after the image enhancement processing may be determined as the first target image.
  • image enhancement processing may include but is not limited to: histogram equalization, gamma correction, contrast enhancement, etc., where histogram equalization transforms the histogram of the original image toward a uniform distribution (the ideal case), gamma correction uses a non-linear (exponential) function to transform the gray values of the image, and contrast enhancement uses a linear function to transform the gray values of the image.
  • the interpolation processing according to the channel values of the plurality of pixels included in the neighborhood of each pixel of the first image signal in an average manner includes:
  • Each channel value of each photosensitive channel of the first image signal is interpolated to obtain the interpolated channel values of each photosensitive channel corresponding to each pixel in the first image signal; the interpolated channel values of the photosensitive channels corresponding to each pixel are averaged to obtain the interpolation-processed image.
  • the interpolation algorithm used for interpolation may be a bilinear interpolation algorithm or a bicubic interpolation algorithm.
  • the embodiments of the present application do not limit the interpolation algorithm.
  • the first target image is obtained by averaging the channel values of the respective photosensitive channels corresponding to each pixel.
  • the first target image is the demosaiced image.
  • the first target image is an image including only a luminance signal.
  • the luminance value of each pixel is: the average value of the corresponding channel values in the first image signal.
  • a sensor including an IR channel and at least two non-IR channels is, for example, an RGBIR sensor, in which case interpolating the channel values of the plurality of pixels included in the neighborhood of each pixel of the first image signal in an averaging manner includes:
  • Each IR photosensitive channel, R photosensitive channel, G photosensitive channel and B photosensitive channel of the first image signal are respectively interpolated to obtain the channel value after interpolation processing of each photosensitive channel corresponding to each pixel in the first image signal
  • the interpolated channel values of the photosensitive channels corresponding to each pixel are averaged to obtain the interpolation-processed image.
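  • The averaging step that turns the interpolated channel planes into the luminance-only first target image can be sketched as follows (an illustrative Python sketch; the per-channel interpolation itself, e.g. bilinear or bicubic, is assumed to have been done already, and the function name and dict-of-planes representation are assumptions):

```python
def average_channels(channel_planes):
    """Produce the luminance-only first target image from interpolated planes.

    channel_planes: dict mapping channel name (e.g. 'R', 'G', 'B', 'IR') to a
    2D list of the same size, each the result of interpolating that
    photosensitive channel to full resolution. Each output pixel is the
    average of the corresponding channel values.
    """
    names = list(channel_planes)
    h = len(channel_planes[names[0]])
    w = len(channel_planes[names[0]][0])
    n = len(names)
    return [[sum(channel_planes[c][i][j] for c in names) / n
             for j in range(w)] for i in range(h)]
```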
  • the image processor 130 generating the second target image according to the second image signal may include:
  • the channel value adjustment for each non-IR photosensitive channel specifically includes: subtracting, from each channel value of the non-IR photosensitive channel before adjustment, the IR parameter value corresponding to the same pixel position, where the IR parameter value is the product of the IR value of the corresponding pixel position and a preset correction value, and the IR value is the IR value sensed by the IR photosensitive channel at the corresponding pixel position.
  • the adjusted image may be determined as the second target image; or, the adjusted image may be subjected to image enhancement processing, and the image after the image enhancement processing may be determined as the second target image.
  • the specific method for determining the second target image is not limited in this application.
  • the preset correction value can be set according to the actual situation.
  • the preset correction value can usually be set to 1; of course, according to the actual situation, the preset correction value can be set to any integer or decimal value from 0 to 1024, and those skilled in the art can understand that the value of the preset correction value is not limited to this.
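  • The per-pixel infrared removal described above can be sketched as follows (an illustrative Python sketch; the function name and the clamping of negative results to zero are assumptions not stated in the patent):

```python
def remove_infrared(channel_value, ir_value, correction=1.0):
    """Adjust one non-IR channel value by removing its infrared component.

    Subtracts the IR parameter value (IR value sensed at the same pixel
    position times a preset correction value) from the channel value.
    Negative results are clamped to zero, an assumption of this sketch.
    """
    return max(channel_value - ir_value * correction, 0)
```

Applying this to every R, G and B channel value of the second image signal yields the visible-light second target image.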
  • the image processor 130 generates the second target image according to the second image signal, specifically:
  • the image processor 130 generating the second target image according to the second image signal may include:
  • for M frames of second image signals including the current second image signal, performing wide dynamic synthesis processing on the M frames of second image signals to obtain a wide dynamic image, and performing infrared removal processing on the wide dynamic image to obtain the second target image; wherein the infrared removal processing includes:
  • the value of M is not limited, and M is less than the total number of exposures in one exposure period.
  • High dynamic range (HDR) images are also known as wide dynamic range images. Compared with low dynamic range images, they have no local overexposure and can reflect more image details. Therefore, in this embodiment of the application, a visible light image containing more image detail can be obtained: at least two frames of second image signals can be subjected to wide dynamic synthesis processing to obtain a wide dynamic image signal.
  • the process of performing infrared removal processing on the wide dynamic image signal to obtain a visible light image can refer to the aforementioned processing process for a frame of second image signal.
  • a frame of second image signal may also be selected, and a visible light image may be generated based on the selected frame of second image signal.
  • the specific generation process is the same as the generation process when the second image signal is one frame, which will not be repeated here.
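  • A wide dynamic synthesis step of the kind described above can be sketched as follows (an illustrative Python sketch; the patent does not specify the fusion rule, so the mid-grey-favoring per-pixel weighting used here is an assumption, in the spirit of exposure fusion):

```python
def wide_dynamic_merge(frames):
    """Merge M differently exposed frames into one wide dynamic image.

    frames: list of 2D lists with pixel values in 0..255. Each output pixel
    is a weighted average of the corresponding input pixels, where pixels
    near mid-grey (well exposed) receive larger weights than pixels near the
    extremes (under- or over-exposed). Weighting scheme is an assumption.
    """
    h, w = len(frames[0]), len(frames[0][0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # weight is largest at value 127.5 and smallest at 0 and 255
            weights = [1 + 255 - abs(2 * f[i][j] - 255) for f in frames]
            total = sum(weights)
            out[i][j] = sum(wgt * f[i][j]
                            for wgt, f in zip(weights, frames)) / total
    return out
```

The infrared removal described earlier would then be applied to the merged result to obtain the second target image.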
  • the intelligent analysis in this application includes but is not limited to identifying the types of objects included in the target scene, the areas where the objects are located, etc.
  • the results of the intelligent analysis may include but are not limited to: the types of objects included in the target scene, the coordinate information of the areas where they are located, the position information of targets of interest, etc.
  • the intelligent analysis device 140 can detect and identify target objects based on the image to be analyzed. For example, according to the image to be analyzed, it can detect whether a target object exists in the target scene and the location of any existing target object; for another example, it can identify a specific target object in the target scene according to the image to be analyzed, and identify the category, attribute information, etc. of the target object.
  • the target object may be a human face, a vehicle, a license plate, or other objects or objects.
  • the intelligent analysis device 140 may analyze the image to be analyzed based on a specific algorithm, or by using a neural network model, so as to perform image processing on the target scene; both are reasonable.
  • the intelligent analysis device 140 may perform feature enhancement processing on the feature image before analyzing the feature image corresponding to the image to be analyzed.
  • the intelligent analysis device 140 performs intelligent analysis on the image to be analyzed to obtain an intelligent analysis result corresponding to the image to be analyzed, including:
  • an intelligent analysis result corresponding to the image to be analyzed is obtained, and the intelligent analysis result includes an interest target contained in the image to be analyzed and/or location information of the interest target.
  • one or more frames of feature images can be generated, and then each frame of feature images is analyzed to obtain the results of the intelligent analysis.
  • the feature image can be subjected to feature enhancement processing.
  • the feature enhancement processing includes extreme value enhancement processing, where the extreme value enhancement processing is specifically: localized extreme value filtering of the feature image.
  • the so-called extreme value may be a maximum value or a minimum value.
  • the processing of the extreme value enhancement includes: dividing the feature image into blocks to obtain multiple image blocks; for each image block, determining the maximum value among the pixels included in the image block as the processing result corresponding to that image block; and combining the processing results to obtain the image after extreme value enhancement processing.
  • the number of image blocks equals the resolution of the image after extreme value enhancement processing. It should be noted that the number of image blocks can be set according to the actual situation, which is not limited in this application. For ease of understanding, taking 100 image blocks as an example, the process of extreme value enhancement is introduced:
  • the maximum value in the pixels included in the image block is determined as the processing result corresponding to the image block, and 100 processing results are obtained;
  • the 100 processing results are merged according to the positional relationship of the image blocks to obtain an image containing 100 pixels.
  • the specific implementation of the extreme value enhancement processing is not limited to the above-mentioned method. For example: each pixel position can be traversed, and for each pixel position a maximum value is determined and used to update the pixel value at that position, where the way of determining the maximum value for any pixel position may be: determining each pixel position adjacent to that pixel position, determining the maximum value among the pixels at those adjacent positions and at the pixel position itself, and using the determined maximum as the maximum value for that pixel position.
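  • The block-wise extreme value enhancement described above can be sketched as follows (an illustrative Python sketch; the function name and block-size parameters are assumptions, and the feature image dimensions are assumed to be exact multiples of the block size):

```python
def extreme_value_enhance(image, block_h, block_w):
    """Block-wise maximum filtering of a feature image.

    Divides the image into non-overlapping block_h x block_w blocks and
    replaces each block by the maximum pixel it contains; the merged results
    form the enhanced image, whose resolution equals the number of blocks.
    """
    h, w = len(image), len(image[0])
    return [[max(image[bi][bj]
                 for bi in range(i, i + block_h)
                 for bj in range(j, j + block_w))
             for j in range(0, w, block_w)]
            for i in range(0, h, block_h)]
```

With 100 blocks, as in the example above, the output would contain 100 pixels, one maximum per block; using min instead of max gives the minimum-value variant.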
  • an embodiment of the present application further provides an image processing method.
  • the image processing method provided in the embodiments of the present application can be applied to an electronic device having the functions of an image processor, an intelligent analysis device, and a control unit.
  • the functions performed by the electronic device are the same as those performed by the image processor and the intelligent analysis device in the above embodiments, and the specific implementation of the image processing method may refer to the foregoing embodiments.
  • an image processing method provided by an embodiment of the present application may include:
  • S801: Obtain a first image signal and a second image signal output by the image sensor.
  • the image sensor generates and outputs a first image signal and a second image signal through multiple exposures, wherein the first image signal is an image signal generated according to a first preset exposure, the second image signal is an image signal generated according to a second preset exposure, and the first preset exposure and the second preset exposure are two of the multiple exposures; the fill light device performs near infrared fill light during the exposure time period of the first preset exposure, and does not perform near infrared fill light during the exposure time period of the second preset exposure.
  • S804: Perform intelligent analysis on the image to be analyzed to obtain an intelligent analysis result corresponding to the image to be analyzed.
  • the image sensor includes a plurality of photosensitive channels, the plurality of photosensitive channels includes an IR photosensitive channel, and further includes at least two of an R photosensitive channel, a G photosensitive channel, a B photosensitive channel, and a W photosensitive channel.
  • the image sensor generates and outputs the first image signal and the second image signal through the multiple exposures via these photosensitive channels;
  • R photosensitive channel is used for sensing light in the red and near infrared bands
  • G photosensitive channel is used for sensing light in the green and near infrared bands
  • B photosensitive channel is used for sensing light in the blue and near infrared bands.
  • IR means infrared photosensitive channel, used for sensing light in the near infrared band.
  • W means all-pass photosensitive channel, used for sensing light in the full band.
  • the image sensor is an RGBIR sensor, an RGBWIR sensor, an RWBIR sensor, an RWGIR sensor, or a BWGIR sensor;
  • R represents R photosensitive channel
  • G represents G photosensitive channel
  • B represents B photosensitive channel
  • IR represents IR photosensitive channel
  • W represents all-pass photosensitive channel.
  • the acquiring the image to be analyzed from the first target image and the second target image includes:
  • the acquiring the image to be analyzed from the first target image and the second target image includes:
  • when the received selection signal is switched to the second selection signal, the second target image is acquired, and the second target image is determined as the image to be analyzed.
  • an image processing method provided by an embodiment of the present application further includes:
  • the first control signal is used to indicate the fill time during which the fill light device performs near infrared fill light; specifically, within the exposure period of the first preset exposure, the start time of performing near infrared fill light is not earlier than the exposure start time of the first preset exposure, and the end time of performing near infrared fill light is not later than the exposure end time of the first preset exposure.
  • the first control signal is also used to indicate the number of fill lights of the fill light device; specifically, the number of near infrared fill lights of the fill light device per unit time length is lower than the number of exposures of the image sensor per unit time length, with one or more exposures spaced between every two adjacent near infrared fill light periods.
  • the multiple exposures of the image sensor include odd exposures and even exposures; wherein,
  • the first preset exposure is one of odd exposures
  • the second preset exposure is one of even exposures
  • the first preset exposure is one of the even exposures
  • the second preset exposure is one of the odd exposures
  • the first preset exposure is one of the specified odd exposures
  • the second preset exposure is one of the other exposures other than the specified odd exposures
  • the first preset exposure is one of the specified even-numbered exposures
  • the second preset exposure is one of the other exposures other than the specified even-numbered exposures.
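  • The odd/even exposure schemes listed above can be sketched as a simple predicate deciding whether a given exposure receives near infrared fill light (an illustrative Python sketch; the function name, the 1-based exposure indexing, and the use of a set for the specified exposures are assumptions):

```python
def fill_light_on(exposure_index, scheme="odd"):
    """Decide whether near infrared fill light accompanies an exposure.

    'odd' or 'even' selects every odd- or even-numbered exposure as the
    first preset exposure (indices are 1-based); passing a set of indices
    selects only the specified exposures, leaving all others as second
    preset exposures.
    """
    if scheme == "odd":
        return exposure_index % 2 == 1
    if scheme == "even":
        return exposure_index % 2 == 0
    return exposure_index in scheme  # a set of specified exposure indices
```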
  • an image processing method provided by an embodiment of the present application further includes:
  • the acquiring brightness information corresponding to the image to be analyzed includes:
  • the intelligent analysis result corresponding to the image to be analyzed includes the position information of the target of interest included in the image to be analyzed, determining at least one target area in the image to be analyzed according to the position information;
  • the average brightness of the at least one target area is determined as brightness information corresponding to the image to be analyzed.
  • the adjusting the first exposure parameter used for exposure of the image sensor to the second exposure parameter according to the brightness information corresponding to the image to be analyzed includes:
  • the first exposure parameter used by the image sensor for exposure is adjusted down to obtain the second exposure parameter
  • the first predetermined threshold is higher than the second predetermined threshold.
  • the adjusting the first fill light parameter used by the fill light device to the second fill light parameter according to the brightness information corresponding to the image to be analyzed includes:
  • the first fill light parameter used by the fill light of the fill light device is adjusted down to obtain the second fill light parameter
  • the third predetermined threshold is higher than the fourth predetermined threshold.
  • generating the first target image according to the first image signal includes:
  • interpolation processing is performed in an averaging manner, and the first target image is obtained according to the interpolation-processed image.
  • the obtaining of the first target image according to the interpolation-processed image includes:
  • the interpolation-processed image is subjected to image enhancement processing, and the image after the image enhancement processing is determined as the first target image.
  • the interpolating the channel values of the plurality of pixels included in the neighborhood of each pixel of the first image signal in an average manner includes:
  • The interpolated channel values of the photosensitive channels corresponding to each pixel are averaged to obtain the interpolation-processed image.
  • the generating the second target image according to the second image signal includes:
  • the channel value adjustment for each non-IR photosensitive channel specifically includes: subtracting, from each channel value of the non-IR photosensitive channel before adjustment, the IR parameter value corresponding to the same pixel position, where the IR parameter value is the product of the IR value of the corresponding pixel position and a preset correction value, and the IR value is the IR value sensed by the IR photosensitive channel at the corresponding pixel position.
  • the generating the second target image according to the second image signal includes:
  • for M frames of second image signals including the current second image signal, performing wide dynamic synthesis processing on the M frames of second image signals to obtain a wide dynamic image, and performing infrared removal processing on the wide dynamic image to obtain the second target image; wherein the infrared removal processing includes:
  • performing intelligent analysis on the image to be analyzed to obtain an intelligent analysis result corresponding to the image to be analyzed includes:
  • an intelligent analysis result corresponding to the image to be analyzed is obtained, and the intelligent analysis result includes an interest target contained in the image to be analyzed and/or location information of the interest target.
  • the steps of fill light control and exposure control can be performed by the image processor or the intelligent analysis device, or by the controller in a device integrating the image processor, the intelligent analysis device and the controller; both are reasonable.
  • this solution applies near infrared fill light to the target scene to regulate its light environment, so that the quality of the image signal received by the image sensor can be guaranteed, which in turn guarantees the quality of the image used for output or intelligent analysis. Therefore, this solution can improve the quality of the image to be analyzed for output or intelligent analysis.
  • an embodiment of the present application further provides an image processing device.
  • an image processing device provided by an embodiment of the present application may include:
  • the image signal obtaining module 910 is configured to obtain the first image signal and the second image signal output by the image sensor, wherein the image sensor generates and outputs the first image signal and the second image signal through multiple exposures, the first The image signal is an image signal generated according to a first preset exposure, the second image signal is an image signal generated according to a second preset exposure, and the first preset exposure and the second preset exposure are the Two of the multiple exposures; the fill light device performs near-infrared fill light during the exposure period of the first preset exposure, and the fill light device during the exposure period of the second preset exposure No near infrared fill light;
  • An image generation module 920 configured to generate a first target image based on the first image signal, and generate a second target image based on the second image signal;
  • An image selection module 930 configured to obtain an image to be analyzed from the first target image and the second target image
  • the image analysis module 940 is configured to perform intelligent analysis on the image to be analyzed to obtain an intelligent analysis result corresponding to the image to be analyzed.
  • this solution applies near infrared fill light to the target scene to regulate its light environment, so that the quality of the image signal received by the image sensor can be guaranteed, which in turn guarantees the quality of the image used for output or intelligent analysis. Therefore, this solution can improve the quality of the image to be analyzed for output or intelligent analysis.
  • the image selection module 930 is configured to: obtain the first target image from the first target image and the second target image, and determine the first target image as the image to be analyzed Or, obtain the second target image from the first target image and the second target image, and determine the second target image as the image to be analyzed.
  • the image selection module 930 is configured to: when the received selection signal is switched to the first selection signal, acquire the first target image and determine the first target image as the image to be analyzed; When the received selection signal is switched to the second selection signal, the second target image is acquired, and the second target image is determined as the image to be analyzed.
  • an image processing device provided by an embodiment of the present application further includes:
  • the signal sending module is used to send a first control signal to the fill light device, and the first control signal is used to control the fill light device to perform near infrared fill light during the exposure period of the first preset exposure and not to perform near infrared fill light during the exposure period of the second preset exposure.
  • the first control signal is used to indicate the fill time during which the fill light device performs near infrared fill light; specifically, within the exposure period of the first preset exposure, the start time of performing near infrared fill light is not earlier than the exposure start time of the first preset exposure, and the end time of performing near infrared fill light is not later than the exposure end time of the first preset exposure.
  • the first control signal is also used to indicate the number of fill lights of the fill light device; specifically, the number of near infrared fill lights of the fill light device per unit time length is lower than the number of exposures of the image sensor per unit time length, with one or more exposures spaced between every two adjacent near infrared fill light periods.
  • the multiple exposures of the image sensor include odd exposures and even exposures; the first control signal is used to instruct the fill light device to perform near infrared fill light in the first preset exposure; wherein,
  • the first preset exposure is one of odd exposures
  • the second preset exposure is one of even exposures
  • the first preset exposure is one of the even exposures
  • the second preset exposure is one of the odd exposures
  • the first preset exposure is one of the specified odd exposures
  • the second preset exposure is one of the other exposures other than the specified odd exposures
  • the first preset exposure is one of the specified even-numbered exposures
  • the second preset exposure is one of the other exposures other than the specified even-numbered exposures.
  • an image processing device provided by an embodiment of the present application further includes:
  • the parameter adjustment module is configured to obtain brightness information corresponding to the image to be analyzed; adjust, according to the brightness information corresponding to the image to be analyzed, the first fill light parameter used by the fill light device to a second fill light parameter, and adjust the first exposure parameter used by the image sensor to a second exposure parameter; and send the second fill light parameter to the fill light device while synchronously sending the second exposure parameter to the image sensor, so that the fill light device, upon receiving the second fill light parameter, performs near-infrared fill light during the exposure period of the first preset exposure according to the second fill light parameter, and the image sensor, upon receiving the second exposure parameter, performs the multiple exposures according to the second exposure parameter.
  • the parameter adjustment module obtaining the brightness information corresponding to the image to be analyzed includes:
  • when the intelligent analysis result corresponding to the image to be analyzed includes position information of a target of interest contained in the image to be analyzed, determining at least one target area in the image to be analyzed according to the position information;
  • determining the average brightness of the at least one target area as the brightness information corresponding to the image to be analyzed.
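A minimal sketch of this brightness measurement: the mean luminance over the target areas reported by the analysis step. Representing target areas as `(x, y, w, h)` rectangles on a single-channel image is an assumption made for illustration.

```python
import numpy as np

def target_area_brightness(gray_image, boxes):
    """Average brightness over one or more target areas of a grayscale
    image, where each box is a hypothetical (x, y, w, h) rectangle."""
    pixels = [gray_image[y:y + h, x:x + w].ravel() for x, y, w, h in boxes]
    return float(np.concatenate(pixels).mean())

img = np.zeros((8, 8), dtype=np.uint8)
img[2:4, 2:4] = 200          # a bright 2x2 target region
brightness = target_area_brightness(img, [(2, 2, 2, 2)])
```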
  • the parameter adjustment module adjusting, according to the brightness information corresponding to the image to be analyzed, the first exposure parameter used by the image sensor to the second exposure parameter includes:
  • when the brightness information is higher than a first predetermined threshold, adjusting the first exposure parameter used by the image sensor for exposure down to obtain the second exposure parameter; when the brightness information is lower than a second predetermined threshold, adjusting the first exposure parameter up to obtain the second exposure parameter;
  • wherein the first predetermined threshold is higher than the second predetermined threshold.
  • the parameter adjustment module adjusting, according to the brightness information corresponding to the image to be analyzed, the first fill light parameter used by the fill light device to the second fill light parameter includes:
  • when the brightness information is higher than a third predetermined threshold, adjusting the first fill light parameter used by the fill light device down to obtain the second fill light parameter;
  • when the brightness information is lower than a fourth predetermined threshold, adjusting the first fill light parameter up to obtain the second fill light parameter;
  • wherein the third predetermined threshold is higher than the fourth predetermined threshold.
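The threshold rule above can be sketched as follows: brightness above the upper threshold lowers the parameter, brightness below the lower threshold raises it, and in between it is left unchanged. The multiplicative step size is an illustrative assumption; the patent only requires that the upper threshold exceed the lower one.

```python
# Hedged sketch of brightness-driven adjustment of an exposure or
# fill-light parameter between two thresholds.

def adjust_parameter(value, brightness, upper, lower, step=0.1):
    """Return the adjusted parameter given measured brightness."""
    assert upper > lower, "the upper threshold must exceed the lower one"
    if brightness > upper:
        return value * (1.0 - step)   # scene too bright: adjust down
    if brightness < lower:
        return value * (1.0 + step)   # scene too dark: adjust up
    return value                      # within range: leave unchanged
```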
  • the image generation module 920 generating the first target image according to the first image signal includes:
  • performing interpolation processing in an averaging manner, and obtaining the first target image according to the interpolated image.
  • the obtaining the first target image according to the interpolated image includes:
  • performing image enhancement processing on the interpolated image, and determining the image after the image enhancement processing as the first target image.
  • the interpolating, in an averaging manner, the channel values of the plurality of pixels included in the neighborhood of each pixel of the first image signal includes:
  • averaging the interpolated channel values of each photosensitive channel corresponding to each pixel to obtain the interpolated image.
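A simplified sketch of neighborhood-average interpolation as described above: a missing channel value at a pixel is filled with the mean of that channel's known samples in the surrounding 3x3 neighborhood. The use of NaN to mark unsampled positions is an assumption made for illustration; a real sensor mosaic (e.g. RGBW or RGB-IR) involves a specific pattern per channel.

```python
import numpy as np

def interpolate_channel(channel):
    """channel: 2-D float array with NaN where this channel was not
    sampled. Fill each missing value with the average of the known
    samples in its 3x3 neighborhood."""
    out = channel.copy()
    h, w = channel.shape
    for y in range(h):
        for x in range(w):
            if np.isnan(out[y, x]):
                # read from the original mosaic, not from filled values
                patch = channel[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
                out[y, x] = np.nanmean(patch)
    return out

mosaic = np.array([[10.0, np.nan],
                   [np.nan, 30.0]])
filled = interpolate_channel(mosaic)
```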
  • the image generation module 920 generating the second target image according to the second image signal includes:
  • adjusting the channel value of each non-IR photosensitive channel; specifically, subtracting from each pre-adjustment channel value of the non-IR photosensitive channel the IR parameter value at the corresponding pixel position, where the IR parameter value is the product of the IR value at the corresponding pixel position and a preset correction value, and the IR value is the value sensed by the IR photosensitive channel at the corresponding pixel position.
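The IR-removal adjustment above amounts to subtracting, per pixel, the product of the co-located IR value and a preset correction factor from each non-IR channel. The clipping to a non-negative range is an added assumption to keep the result a valid pixel value, not a step stated in the patent.

```python
import numpy as np

def remove_ir(channel, ir, correction=1.0):
    """Subtract correction * IR (per pixel) from a non-IR photosensitive
    channel, clipping at zero."""
    adjusted = channel.astype(np.float64) - correction * ir.astype(np.float64)
    return np.clip(adjusted, 0.0, None)

r = np.array([[120.0, 80.0]])
ir = np.array([[40.0, 100.0]])
r_no_ir = remove_ir(r, ir, correction=0.5)   # subtract half the IR value
```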
  • the image generation module 920 generating the second target image according to the second image signal includes:
  • acquiring M frames of second image signals including the current second image signal, performing wide dynamic synthesis processing on the M frames of second image signals to obtain a wide dynamic image, and performing infrared removal processing on the wide dynamic image to obtain the second target image, where the infrared removal processing includes the channel value adjustment described above.
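A hedged sketch of the M-frame wide dynamic (HDR) synthesis step: merge differently exposed frames with per-pixel weights that favor well-exposed pixels. The triangular weight peaking at mid-gray is a common exposure-fusion heuristic and an assumption here, not the patent's specific method.

```python
import numpy as np

def wide_dynamic_merge(frames, max_value=255.0):
    """Per-pixel weighted average of M frames of the second image signal."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    # weight peaks at mid-gray and falls toward a small floor at the
    # extremes, so clipped highlights and crushed shadows count less
    weights = 1.0 - np.abs(stack / max_value - 0.5) * 2.0
    weights = np.maximum(weights, 1e-3)
    return (weights * stack).sum(axis=0) / weights.sum(axis=0)

short_exp = np.array([[40.0]])    # underexposed frame
long_exp = np.array([[200.0]])    # overexposed frame
merged = wide_dynamic_merge([short_exp, long_exp])
```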
  • the image analysis module 940 performing intelligent analysis on the image to be analyzed to obtain the intelligent analysis result corresponding to the image to be analyzed includes:
  • obtaining an intelligent analysis result corresponding to the image to be analyzed, where the intelligent analysis result includes the target of interest contained in the image to be analyzed and/or position information of the target of interest.
  • an embodiment of the present application further provides an electronic device.
  • the electronic device includes a processor 1001, a communication interface 1002, a memory 1003, and a communication bus 1004, where the processor 1001, the communication interface 1002, and the memory 1003 communicate with each other through the communication bus 1004;
  • the memory 1003 is used to store a computer program;
  • the processor 1001 is used to implement the image processing method provided in the embodiments of the present application when executing the program stored in the memory 1003.
  • the communication bus mentioned in the above electronic device may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus, etc.
  • the communication bus can be divided into an address bus, a data bus, and a control bus. For ease of representation, only one thick line is used in the figure, but it does not mean that there is only one bus or one type of bus.
  • the communication interface is used for communication between the above electronic device and other devices.
  • the memory may include random access memory (RAM) or non-volatile memory (NVM), for example, at least one disk memory.
  • the memory may also be at least one storage device located remotely from the foregoing processor.
  • the aforementioned processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • an embodiment of the present application also provides a computer-readable storage medium in which a computer program is stored; when the computer program is executed by a processor, the image processing method provided by the embodiments of the present application is implemented.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The invention relates to an image processing method and system. The system comprises: an image sensor for generating and outputting a first image signal and a second image signal by means of multiple exposures, the first image signal being an image signal generated according to a first preset exposure, and the second image signal being an image signal generated according to a second preset exposure; a fill light device for performing near-infrared fill light during the exposure period of the first preset exposure, and not performing near-infrared fill light during the exposure period of the second preset exposure; an image processor for generating a first target image according to the first image signal, and generating a second target image according to the second image signal; and an intelligent analysis device for acquiring images to be analyzed from the first target image and the second target image, and intelligently analyzing said images to obtain intelligent analysis results corresponding to the images to be analyzed. The present invention can thus improve the quality of the images to be analyzed that are used for output or for intelligent analysis.
PCT/CN2019/122437 2018-12-12 2019-12-02 Procédé et système de traitement d'images WO2020119504A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811517428.8A CN110493506B (zh) 2018-12-12 2018-12-12 一种图像处理方法和***
CN201811517428.8 2018-12-12

Publications (1)

Publication Number Publication Date
WO2020119504A1 true WO2020119504A1 (fr) 2020-06-18

Family

ID=68545688

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/122437 WO2020119504A1 (fr) 2018-12-12 2019-12-02 Procédé et système de traitement d'images

Country Status (2)

Country Link
CN (1) CN110493506B (fr)
WO (1) WO2020119504A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113965671A (zh) * 2021-02-04 2022-01-21 福建汇川物联网技术科技股份有限公司 一种用于测距的补光方法、装置、电子设备及存储介质
CN114745509A (zh) * 2022-04-08 2022-07-12 深圳鹏行智能研究有限公司 图像采集方法、设备、足式机器人及存储介质

Families Citing this family (8)

Publication number Priority date Publication date Assignee Title
CN110493506B (zh) * 2018-12-12 2021-03-02 杭州海康威视数字技术股份有限公司 一种图像处理方法和***
CN111064898B (zh) * 2019-12-02 2021-07-16 联想(北京)有限公司 图像拍摄方法及装置、设备、存储介质
CN112926367B (zh) * 2019-12-06 2024-06-21 杭州海康威视数字技术股份有限公司 一种活体检测的设备及方法
CN113129241B (zh) * 2019-12-31 2023-02-07 RealMe重庆移动通信有限公司 图像处理方法及装置、计算机可读介质、电子设备
CN115297268B (zh) * 2020-01-22 2024-01-05 杭州海康威视数字技术股份有限公司 一种成像***及图像处理方法
CN111556225B (zh) * 2020-05-20 2022-11-22 杭州海康威视数字技术股份有限公司 图像采集装置及图像采集控制方法
CN111935415B (zh) * 2020-08-18 2022-02-08 浙江大华技术股份有限公司 亮度调整方法、装置、存储介质及电子装置
CN113301264B (zh) * 2021-07-26 2021-11-23 北京博清科技有限公司 一种图像亮度调整方法、装置、电子设备及存储介质

Citations (7)

Publication number Priority date Publication date Assignee Title
CN103971351A (zh) * 2013-02-04 2014-08-06 三星泰科威株式会社 使用多光谱滤光器阵列传感器的图像融合方法和设备
CN104134352A (zh) * 2014-08-15 2014-11-05 青岛比特信息技术有限公司 基于长短曝光结合的视频车辆特征检测***及其检测方法
US20160057367A1 (en) * 2014-08-25 2016-02-25 Hyundai Motor Company Method for extracting rgb and nir using rgbw sensor
CN107343132A (zh) * 2017-08-28 2017-11-10 中控智慧科技股份有限公司 一种基于近红外led补光灯的手掌识别装置及方法
CN107920188A (zh) * 2016-10-08 2018-04-17 杭州海康威视数字技术股份有限公司 一种镜头及摄像机
CN108419061A (zh) * 2017-02-10 2018-08-17 杭州海康威视数字技术股份有限公司 基于多光谱的图像融合设备、方法及图像传感器
CN110493506A (zh) * 2018-12-12 2019-11-22 杭州海康威视数字技术股份有限公司 一种图像处理方法和***

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US7978260B2 (en) * 2003-09-15 2011-07-12 Senshin Capital, Llc Electronic camera and method with fill flash function
CN105306832A (zh) * 2015-09-15 2016-02-03 北京信路威科技股份有限公司 图像采集设备补光装置及方法
CN105657280B (zh) * 2016-03-01 2019-03-08 Oppo广东移动通信有限公司 一种快速对焦方法、装置及移动终端
CN106572310B (zh) * 2016-11-04 2019-12-13 浙江宇视科技有限公司 一种补光强度控制方法与摄像机
CN106778518B (zh) * 2016-11-24 2021-01-08 汉王科技股份有限公司 一种人脸活体检测方法及装置

Patent Citations (7)

Publication number Priority date Publication date Assignee Title
CN103971351A (zh) * 2013-02-04 2014-08-06 三星泰科威株式会社 使用多光谱滤光器阵列传感器的图像融合方法和设备
CN104134352A (zh) * 2014-08-15 2014-11-05 青岛比特信息技术有限公司 基于长短曝光结合的视频车辆特征检测***及其检测方法
US20160057367A1 (en) * 2014-08-25 2016-02-25 Hyundai Motor Company Method for extracting rgb and nir using rgbw sensor
CN107920188A (zh) * 2016-10-08 2018-04-17 杭州海康威视数字技术股份有限公司 一种镜头及摄像机
CN108419061A (zh) * 2017-02-10 2018-08-17 杭州海康威视数字技术股份有限公司 基于多光谱的图像融合设备、方法及图像传感器
CN107343132A (zh) * 2017-08-28 2017-11-10 中控智慧科技股份有限公司 一种基于近红外led补光灯的手掌识别装置及方法
CN110493506A (zh) * 2018-12-12 2019-11-22 杭州海康威视数字技术股份有限公司 一种图像处理方法和***

Cited By (3)

Publication number Priority date Publication date Assignee Title
CN113965671A (zh) * 2021-02-04 2022-01-21 福建汇川物联网技术科技股份有限公司 一种用于测距的补光方法、装置、电子设备及存储介质
CN114745509A (zh) * 2022-04-08 2022-07-12 深圳鹏行智能研究有限公司 图像采集方法、设备、足式机器人及存储介质
CN114745509B (zh) * 2022-04-08 2024-06-07 深圳鹏行智能研究有限公司 图像采集方法、设备、足式机器人及存储介质

Also Published As

Publication number Publication date
CN110493506A (zh) 2019-11-22
CN110493506B (zh) 2021-03-02

Similar Documents

Publication Publication Date Title
WO2020119504A1 (fr) Procédé et système de traitement d'images
WO2020119505A1 (fr) Système et procédé de traitement d'image
CN109951646B (zh) 图像融合方法、装置、电子设备及计算机可读存储介质
US11849224B2 (en) Global tone mapping
US9661218B2 (en) Using captured high and low resolution images
CN109712102B (zh) 一种图像融合方法、装置及图像采集设备
EP3038356B1 (fr) Exposition de groupes de pixels dans la production d'images numériques
EP3849170B1 (fr) Procédé de traitement d'images, dispositif électronique, et support d'enregistrement lisible par ordinateur
US20110216210A1 (en) Providing improved high resolution image
CN110493531B (zh) 一种图像处理方法和***
WO2017152402A1 (fr) Procédé et appareil de traitement d'image pour un terminal, et terminal
CN112118388B (zh) 图像处理方法、装置、计算机设备和存储介质
JP2002204389A (ja) 露出制御方法
CN103546730A (zh) 基于多摄像头的图像感光度增强方法
WO2019104047A1 (fr) Mise en correspondance de tonalité globale
US20200228770A1 (en) Lens rolloff assisted auto white balance
EP3270586A1 (fr) Dispositif de traitement d'image, procédé de traitement d'image et programme
US20200228769A1 (en) Lens rolloff assisted auto white balance
US10937230B2 (en) Image processing
JP6492452B2 (ja) 制御システム、撮像装置、制御方法およびプログラム
KR20210107955A (ko) 컬러 스테인 분석 방법 및 상기 방법을 이용하는 전자 장치
JP4575100B2 (ja) ホワイトバランス調整装置、色調整装置、ホワイトバランス調整方法、及び色調整方法
Leznik et al. Optimization of demanding scenarios in CMS and image quality criteria
JP2004200888A (ja) 撮像機器
US8818094B2 (en) Image processing apparatus, image processing method and recording device recording image processing program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19896120

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19896120

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03.08.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19896120

Country of ref document: EP

Kind code of ref document: A1