WO2020119504A1 - An image processing method and *** - Google Patents
- Publication number
- WO2020119504A1 (PCT/CN2019/122437)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- exposure
- fill light
- analyzed
- target
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/74—Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
Definitions
- the present application relates to the field of image processing technology, in particular to an image processing method and system.
- the information in the environment can usually be recognized based on the image taken by the camera.
- Due to the variability of ambient light, it is difficult for a camera to output high-quality images under different ambient light conditions: image quality tends to be good when the light is good and poor when the light is poor. Therefore, in the above related technologies, the image captured by the camera cannot be applied to all environments, resulting in a poor information perception effect of the environment.
- the purpose of the embodiments of the present application is to provide an image processing method and system to improve the quality of an image to be analyzed for output or intelligent analysis.
- the specific technical solutions are as follows:
- an image processing system including:
- An image sensor, configured to generate and output a first image signal and a second image signal through multiple exposures, wherein the first image signal is an image signal generated according to a first preset exposure, the second image signal is an image signal generated according to a second preset exposure, and the first preset exposure and the second preset exposure are two of the multiple exposures;
- A fill light device, configured to perform near-infrared fill light in a strobe manner; specifically, the fill light device performs near-infrared fill light during the exposure period of the first preset exposure and does not perform near-infrared fill light during the exposure period of the second preset exposure;
- An image processor, configured to receive the first image signal and the second image signal output by the image sensor, generate a first target image based on the first image signal, and generate a second target image based on the second image signal;
- An intelligent analysis device is configured to obtain an image to be analyzed from the first target image and the second target image, and perform an intelligent analysis on the image to be analyzed to obtain an intelligent analysis result corresponding to the image to be analyzed.
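The cooperation between the four components above can be illustrated with a minimal sketch; all function names, signal values, and the dictionary representation of the target images are hypothetical and not part of the embodiment.

```python
def capture(exposure, fill_light):
    # Stub sensor read-out: the strobed near-infrared fill light
    # boosts the signal level of the first preset exposure.
    base = 100 if exposure == "first" else 80
    return base + (50 if fill_light else 0)

def generate_ir_image(signal):
    # Image processor: first image signal -> first (infrared-sensing) target image.
    return {"type": "infrared", "level": signal}

def generate_visible_image(signal):
    # Image processor: second image signal -> second (visible light) target image.
    return {"type": "visible", "level": signal}

def run_exposure_cycle():
    """Simulate one exposure cycle with the two preset exposures."""
    first_signal = capture("first", fill_light=True)    # strobe on
    second_signal = capture("second", fill_light=False)  # strobe off
    return generate_ir_image(first_signal), generate_visible_image(second_signal)
```

The intelligent analysis device would then pick one of the two returned target images as the image to be analyzed.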
- an embodiment of the present application provides an image processing method, including:
- The image sensor generates and outputs a first image signal and a second image signal through multiple exposures, wherein the first image signal is an image signal generated according to a first preset exposure, the second image signal is an image signal generated according to a second preset exposure, and the first preset exposure and the second preset exposure are two of the multiple exposures; during the exposure period of the first preset exposure, the fill light device performs near-infrared fill light, and during the exposure period of the second preset exposure, the fill light device does not perform near-infrared fill light;
- an image processing apparatus including:
- An image signal obtaining module, configured to obtain the first image signal and the second image signal output by the image sensor, wherein the image sensor generates and outputs the first image signal and the second image signal through multiple exposures, the first image signal is an image signal generated according to a first preset exposure, the second image signal is an image signal generated according to a second preset exposure, and the first preset exposure and the second preset exposure are two of the multiple exposures; the fill light device performs near-infrared fill light during the exposure period of the first preset exposure and does not perform near-infrared fill light during the exposure period of the second preset exposure;
- An image generating module configured to generate a first target image based on the first image signal, and generate a second target image based on the second image signal;
- An image selection module, configured to obtain an image to be analyzed from the first target image and the second target image;
- the image analysis module is configured to perform intelligent analysis on the image to be analyzed to obtain an intelligent analysis result corresponding to the image to be analyzed.
- an embodiment of the present application provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with each other through the communication bus;
- The memory is used to store a computer program;
- the processor, when executing the program stored in the memory, implements the steps of the image processing method provided by the embodiments of the present application.
- This solution applies near-infrared fill light to the target scene to regulate its light environment, so that the quality of the image signal received by the image sensor can be guaranteed, which in turn guarantees the quality of the image used for output or intelligent analysis. Therefore, this solution can improve the quality of the image to be analyzed for output or intelligent analysis.
- FIG. 1 is a schematic structural diagram of an image processing system provided by an embodiment of the present application.
- FIG. 2 is another schematic structural diagram of an image processing system provided by an embodiment of the present application;
- FIG. 3(a) is a schematic diagram of the principle when the image processing system provided by the embodiment of the present application completes image processing through multiple units;
- FIG. 3(b) is another schematic diagram of the image processing system provided by the embodiment of the present application when image processing is completed by multiple units;
- FIG. 3(c) is another schematic diagram of the image processing system provided by the embodiment of the present application when image processing is completed by multiple units together;
- FIG. 4 is a schematic diagram of the array corresponding to the RGBIR image sensor;
- FIG. 5(a) is a schematic diagram illustrating the relationship between exposure and near-infrared fill light according to an embodiment of the present application;
- FIG. 5(b) is another schematic diagram illustrating the relationship between exposure and near-infrared fill light according to an embodiment of the present application;
- FIG. 6 is a schematic diagram of the principle of spectral blocking;
- FIG. 7 is a spectral diagram of a near-infrared light source;
- FIG. 8 is a flowchart of an image processing method provided by an embodiment of the present application.
- FIG. 9 is a schematic structural diagram of an image processing device provided by an embodiment of the present application.
- FIG. 10 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
- Visible light is an electromagnetic wave that human eyes can perceive.
- the visible spectrum has no precise range.
- Generally, the wavelength of electromagnetic waves that the human eye can perceive is between 400 and 760 nm (nanometers), but some people can perceive electromagnetic waves with wavelengths between approximately 380 and 780 nm.
- Near-infrared light refers to electromagnetic waves with a wavelength in the range of 780-2526nm.
- the visible light image refers to a color image that only perceives visible light signals, and the color image is only sensitive to the visible light band.
- An infrared-sensing image refers to a brightness image that perceives near-infrared light signals. It should be noted that an infrared-sensing image is not limited to a brightness image that perceives only near-infrared light signals; it may also be a brightness image that perceives near-infrared light signals together with light signals of other bands.
- embodiments of the present application provide an image processing system.
- an image processing system provided by an embodiment of the present application may include:
- The image sensor 110 is configured to generate and output a first image signal and a second image signal through multiple exposures, wherein the first image signal is an image signal generated according to a first preset exposure, the second image signal is an image signal generated according to a second preset exposure, and the first preset exposure and the second preset exposure are two of the multiple exposures;
- The fill light device 120 is used to perform near-infrared fill light in a strobe manner; specifically, the fill light device 120 performs near-infrared fill light during the exposure period of the first preset exposure and does not perform near-infrared fill light during the exposure period of the second preset exposure;
- The image processor 130 is configured to receive the first image signal and the second image signal output by the image sensor 110, generate a first target image according to the first image signal, and generate a second target image according to the second image signal;
- the intelligent analysis device 140 is configured to obtain an image to be analyzed from the first target image and the second target image, and perform an intelligent analysis on the image to be analyzed to obtain an intelligent analysis result corresponding to the image to be analyzed.
- the image sensor 110 described in the embodiments of the present application may be exposed periodically, and may be exposed multiple times in each cycle.
- The first image signal and the second image signal generated and output through the multiple exposures described above may be generated and output through multiple exposures within one cycle, but generating and outputting them within a single cycle is not a limitation.
- The fill light device 120 performs near-infrared fill light during the exposure period of the first preset exposure but does not perform near-infrared fill light during the exposure period of the second preset exposure; the first preset exposure and the second preset exposure are different exposures.
- When generating the first target image according to the first image signal generated by the first preset exposure, the first image signal may be interpolated, and the interpolated infrared-sensing image may be used as the first target image.
- When generating the second target image based on the second image signal generated by the second preset exposure, the second image signal may be subjected to infrared-removal processing to obtain a visible light image, and the visible light image (or the visible light image after image enhancement) may be used as the second target image. Alternatively, under this exposure and fill light control, the second image signal of the frame may first be subjected to wide-dynamic-range processing and then to infrared-removal processing to obtain a visible light image, which is used as the second target image.
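The infrared-removal step described above can be sketched as a per-pixel subtraction; the linear model, the coefficient `k`, and the function name are assumptions for illustration, since the embodiment does not specify the removal algorithm.

```python
def remove_infrared(r, g, b, ir, k=1.0):
    """Hypothetical infrared-removal model: subtract the IR contribution
    sensed by each color channel, clamping to the valid 8-bit range."""
    clamp = lambda v: max(0.0, min(255.0, v))
    return clamp(r - k * ir), clamp(g - k * ir), clamp(b - k * ir)
```

Applying this to every pixel of the second image signal would yield the visible light image used as the second target image.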
- the structural schematic diagram of an image processing system shown in FIG. 1 is merely an example, and should not constitute a limitation on the embodiments of the present application.
- The fill light device 120 may be electrically connected to the image sensor 110, the image processor 130, or the intelligent analysis device 140.
- the fill light device 120 can be controlled by the connected image sensor 110, image processor 130, or intelligent analysis device 140.
- the image sensor 110, the fill light device 120, the image processor 130, and the intelligent analysis device 140 included in the image processing system can be integrated into an electronic device.
- The electronic device has fill light, image signal acquisition, and image processing functions.
- the electronic device may be a camera, or other devices capable of acquiring images.
- each component included in the image processing system may be deployed in at least two electronic devices.
- Any one of the at least two electronic devices has one or more of the fill light, image signal acquisition, image processing, and intelligent analysis functions.
- the fill light device 120 is a separate device, and the image sensor 110, the image processor 130, and the intelligent analysis device 140 are all deployed in the camera; or, the fill light device 120 is a separate device, The image sensor 110 is deployed in the camera, and the image processor 130 and the intelligent analysis device 140 are deployed in a terminal or server associated with the camera.
- the device where the image sensor 110 is located may further include an optical lens, so that light rays enter the image sensor 110 through the optical lens.
- the fill light device 120 adopts a stroboscopic manner to perform near-infrared fill light on the target scene, that is, to perform non-continuous near-infrared light illumination on the target scene.
- The fill light device 120 is a device that can emit near-infrared light, such as a fill light lamp; the fill light of the fill light device 120 may be controlled manually, by a software program, or by a specific device, all of which are reasonable.
- the specific wavelength range of the near-infrared light used by the near-infrared fill light is not specifically limited in this application.
- The near-infrared light source has a strong light intensity at about 850 nm. Therefore, in specific applications, in order to obtain the maximum response from the image sensor 110, the embodiments of this application may use near-infrared light with a wavelength of 850 nm, but are not limited thereto.
- The fill light device 120 provides near-infrared light in a strobe manner; specifically, this refers to performing near-infrared fill light on the external scene by controlling the near-infrared light to alternate between on and off.
- The period during which the near-infrared light is on is considered to be providing near-infrared fill light to the scene, and the period between the end of one illumination and the start of the next is considered to be providing no near-infrared light to the scene.
- The image processing system provided by the embodiments of the present application is a single-sensor perception system, that is, there is a single image sensor 110.
- the image sensor 110 includes a plurality of photosensitive channels, the plurality of photosensitive channels includes an IR photosensitive channel, and further includes at least two of an R photosensitive channel, a G photosensitive channel, a B photosensitive channel, and a W photosensitive channel.
- the multiple photosensitive channels generate and output the first image signal and the second image signal through the multiple exposures;
- The R photosensitive channel is used for sensing light in the red and near-infrared bands;
- the G photosensitive channel is used for sensing light in the green and near-infrared bands;
- the B photosensitive channel is used for sensing light in the blue and near-infrared bands;
- IR denotes the infrared photosensitive channel, used for sensing light in the near-infrared band;
- W denotes the all-pass photosensitive channel, used for sensing light in the full band.
- the image sensor 110 may be an RGBIR sensor, an RGBWIR sensor, an RWBIR sensor, an RWGIR sensor, or a BWGIR sensor; where R represents an R photosensitive channel, G represents a G photosensitive channel, B represents a B photosensitive channel, and IR represents an IR photosensitive channel, W means all-pass photosensitive channel.
- the image sensor 110 in the embodiment of the present application may be an RGBIR sensor, and the RGBIR sensor has a red-green-blue RGB photosensitive channel and an infrared IR photosensitive channel.
- The RGB photosensitive channels are sensitive to both the visible light band and the near-infrared band, but are mainly used to sense the visible light band; the IR photosensitive channel is a channel that is sensitive to the near-infrared band.
- the arrangement of the R photosensitive channel, G photosensitive channel, B photosensitive channel, and IR photosensitive channel can be seen in FIG. 4.
- The RGBIR image sensor senses light through the R photosensitive channel, G photosensitive channel, B photosensitive channel, and IR photosensitive channel to obtain the corresponding image signals.
- The sensitivity value corresponding to the R photosensitive channel includes the R channel value and the IR channel value;
- the sensitivity value corresponding to the G photosensitive channel includes the G channel value and the IR channel value;
- the sensitivity value corresponding to the B photosensitive channel includes the B channel value and the IR channel value;
- the sensitivity value corresponding to the IR photosensitive channel includes the IR channel value.
- The R channel value and the IR channel value sensed by the R photosensitive channel are different, the G channel value and the IR channel value sensed by the G photosensitive channel are different, and the B channel value and the IR channel value sensed by the B photosensitive channel are different; the IR photosensitive channel senses the IR channel value.
- The image signal captured by the RGBIR image sensor during the first preset exposure is the first image signal, and the image signal captured by the RGBIR image sensor during the second preset exposure is the second image signal.
- The values of the R, G, B, and IR photosensitive channels in the first image signal differ from those in the second image signal; that is, the channel value of each photosensitive channel in the first image signal is different from the channel value of the corresponding photosensitive channel in the second image signal.
- the optical lens of the device where the image sensor 110 is located may be provided with a filter.
- the spectral region filtered by the filter may include [T1, T2]; wherein, 600nm ⁇ T1 ⁇ 800nm, 750nm ⁇ T2 ⁇ 1100nm, and T1 ⁇ T2.
- The R, G, B, and IR photosensitive channels have large differences in response in the near-infrared band (650 nm to 1100 nm), which causes errors when the near-infrared light component is removed. Therefore, a filter is provided on the optical lens to filter out the spectral region with the large response differences.
- In specific applications, the filter can be integrated on the above-mentioned optical lens by coating technology; in addition, the filter can be a band-stop filter or a lower-cost bimodal filter.
- the spectral region filtered by the filter may further include a spectral region of [T3, + ⁇ ), 850nm ⁇ T3 ⁇ 1100nm, and T2 ⁇ T3.
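A sketch of the combined blocked regions [T1, T2] and [T3, +∞): the concrete values T1 = 700 nm, T2 = 800 nm, and T3 = 900 nm are hypothetical picks inside the ranges stated above (chosen so that 850 nm fill light would still pass), and the function name is illustrative.

```python
def is_blocked(wavelength_nm, t1=700.0, t2=800.0, t3=900.0):
    """Return True if the filter blocks this wavelength.
    Blocked regions per the text: [T1, T2] and [T3, +inf), with
    600 < T1 <= 800, 750 <= T2 < 1100, 850 <= T3 <= 1100, T2 < T3.
    The default T values are illustrative assumptions only."""
    return (t1 <= wavelength_nm <= t2) or (wavelength_nm >= t3)
```

With these example cutoffs, visible light and 850 nm near-infrared fill light pass while the high-response-difference regions are blocked.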
- The fill light device 120 may adopt a strobe method to perform near-infrared fill light on the target scene. Furthermore, the fill light device performing near-infrared fill light during the exposure period of the first preset exposure may mean: during the exposure period of the first preset exposure, the start time of the near-infrared fill light is not earlier than the exposure start time of the first preset exposure, and the end time of the near-infrared fill light is not later than the exposure end time of the first preset exposure.
- FIG. 5(a) and FIG. 5(b) exemplarily show a schematic diagram of the relationship between the exposure time and the fill time of the near infrared fill light.
- In FIG. 5(a), two exposures are used for the image sensor 110, that is, two exposures occur within one exposure period, and the two exposures are defined as an odd exposure and an even exposure, respectively.
- Near-infrared fill light is performed on the target scene during the even exposure, that is, the even exposure is the first preset exposure.
- The rising edge of the near-infrared fill light is later than the start time of the even exposure, and the falling edge of the near-infrared fill light is earlier than the end time of the even exposure.
- In FIG. 5(b), three exposures are used for the image sensor 110, that is, three exposures occur within one exposure period, and the three exposures are defined as A exposure, B exposure, and C exposure, respectively.
- Near-infrared fill light is performed on the target scene during the C exposure, that is, the C exposure is the first preset exposure.
- The rising edge of the near-infrared fill light is later than the start time of the C exposure, and the falling edge of the near-infrared fill light is earlier than the end time of the C exposure.
- In addition, the exposure parameter corresponding to any exposure with fill light may be no greater than the target maximum value, where the exposure parameter is the exposure duration and/or gain, and the target maximum value is the maximum value among the exposure parameters corresponding to the exposures without fill light.
- a second target image without near-infrared fill light and a first target image with near-infrared fill light can be captured.
- the fill light device 120 provides near-infrared fill light at least during the exposure process in which the image sensor 110 captures the first image signal.
- Correspondingly, the fill light device 120 must not provide near-infrared fill light during the exposure process in which the image sensor 110 captures the second image signal.
- The number of near-infrared fill light pulses of the fill light device 120 per unit time is lower than the number of exposures of the image sensor 110 per unit time, with an interval of one or more exposures between every two adjacent fill light pulses. In this way, the fill light device 120 provides near-infrared fill light during only some of the exposures of the image sensor 110.
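The two timing constraints above (each fill light pulse confined to one exposure window, and fewer pulses than exposures per unit time) can be checked with a small validator; the tuple representation of pulses and exposures and the function names are assumptions, not part of the embodiment.

```python
def pulse_within_exposure(pulse_start, pulse_end, exp_start, exp_end):
    """Rising edge not earlier than exposure start, falling edge not
    later than exposure end (the first-preset-exposure constraint)."""
    return exp_start <= pulse_start <= pulse_end <= exp_end

def valid_strobe_schedule(fill_pulses, exposures):
    """Check a schedule given as lists of (start, end) intervals:
    fewer fill pulses than exposures, each pulse inside some exposure."""
    if len(fill_pulses) >= len(exposures):
        return False  # fill count must be lower than exposure count
    return all(
        any(pulse_within_exposure(ps, pe, es, ee) for (es, ee) in exposures)
        for (ps, pe) in fill_pulses
    )
```

For example, with exposures at (0, 10) and (20, 30), a single pulse (1, 9) is valid, while a pulse straddling an exposure boundary is not.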
- the specific timing of the fill light of the fill light device 120 in multiple exposures may be set according to actual scene requirements, that is, the first preset exposure may be set according to actual scene requirements.
- multiple exposures may include odd-numbered exposures and even-numbered exposures.
- the configuration method of the first preset exposure may be as follows:
- the first preset exposure is one of odd exposures
- the second preset exposure is one of even exposures.
- the first image signal is a signal generated according to one of the odd exposures
- the second image signal is a signal generated according to one of the even exposures.
- the first preset exposure is one of even-numbered exposures
- the second preset exposure is one of odd-numbered exposures.
- the first image signal is a signal generated according to one of the even exposures
- the second image signal is a signal generated according to one of the odd exposures.
- The first preset exposure is one of specified odd-numbered exposures;
- the second preset exposure is one of the exposures other than the specified odd-numbered exposures.
- In this case, the first image signal is a signal generated according to one of the specified odd-numbered exposures;
- the second image signal is a signal generated according to one of the exposures other than the specified odd-numbered exposures.
- Alternatively, the first preset exposure is one of specified even-numbered exposures;
- the second preset exposure is one of the exposures other than the specified even-numbered exposures.
- In this case, the first image signal is a signal generated according to one of the specified even-numbered exposures;
- the second image signal is a signal generated according to one of the exposures other than the specified even-numbered exposures.
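The configurations above amount to choosing which exposure indices receive near-infrared fill light. A hedged helper, whose `mode` argument (keyword or an explicit index set) is an illustrative device not defined by the embodiment:

```python
def fill_light_exposures(num_exposures, mode="even"):
    """Return 1-based indices of exposures that receive near-infrared
    fill light. mode='odd'/'even' mirror the first two configurations;
    passing a set of indices mirrors the 'specified exposures' cases."""
    indices = range(1, num_exposures + 1)
    if mode == "odd":
        return [i for i in indices if i % 2 == 1]
    if mode == "even":
        return [i for i in indices if i % 2 == 0]
    return sorted(set(mode) & set(indices))  # specified exposure indices
```

All remaining exposures are then candidates for generating the second image signal.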
- the timing of the fill light of the fill light device 120 in multiple exposures given above is merely an example, and should not constitute a limitation on the embodiments of the present application.
- The intelligent analysis device 140 may obtain the image to be analyzed from the first target image and the second target image, and perform intelligent analysis on the image to be analyzed to obtain an intelligent analysis result corresponding to the image to be analyzed.
- the intelligent analysis device 140 may acquire corresponding images to be analyzed according to scene requirements, and perform intelligent analysis on the acquired images to be analyzed.
- The intelligent analysis device 140 may obtain the first target image from the first target image and the second target image, and determine the first target image as the image to be analyzed. In this way, the intelligent analysis device can perform intelligent analysis based on the first target image by default.
- The intelligent analysis device 140 may obtain the second target image from the first target image and the second target image, and determine the second target image as the image to be analyzed. In this way, the intelligent analysis device can perform intelligent analysis based on the second target image by default.
- When the received selection signal is switched to the first selection signal, the intelligent analysis device 140 acquires the first target image and determines the first target image as the image to be analyzed; when the received selection signal is switched to the second selection signal, it acquires the second target image and determines the second target image as the image to be analyzed. In this way, the intelligent analysis device can switch between the first target image and the second target image for intelligent analysis.
- selecting the corresponding image according to the selection signal can improve the controllability of the image processing system, that is, switch the type of the acquired image according to different needs.
- The above specific implementation of selecting the corresponding image according to the selection signal is only an optional implementation; all methods that can realize the selection (such as mode selection or default selection) are reasonable and fall within the protection scope of the present application, and the present application does not limit this.
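The selection-signal switching can be sketched as follows; the signal encoding (1 and 2) and the function name are hypothetical, since the embodiment leaves the signal format open.

```python
FIRST_SELECTION, SECOND_SELECTION = 1, 2  # illustrative signal values

def select_image_to_analyze(selection_signal, first_target, second_target):
    """Route the target image chosen by the selection signal to analysis."""
    if selection_signal == FIRST_SELECTION:
        return first_target   # infrared-sensing image
    if selection_signal == SECOND_SELECTION:
        return second_target  # visible light image
    raise ValueError("unknown selection signal")
```

A default-selection variant would simply always return one of the two targets, matching the first two implementations described above.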
- the image processing system is embodied in the form of multiple units, and the multiple units jointly complete the image processing process.
- the division of the image processing system in FIG. 3(a) does not constitute a limitation on the present application, but is merely an exemplary description.
- the image processing system includes: a scene collection unit, a scene processing unit, a scene perception unit, and a scene fill light unit.
- the scene collection unit may include the above-mentioned optical lens, filter and image sensor 110.
- the scene fill light unit is the fill light device 120 described above.
- the function implemented by the scene processing unit is the function of the image processor 130 described above.
- Specifically, the scene processing unit obtains the first image signal and the second image signal output by the scene collection unit, generates a first target image according to the first image signal, and generates a second target image according to the second image signal.
- The scene perception unit is the above-mentioned intelligent analysis device 140, which is used to obtain the image to be analyzed from the first target image and the second target image and intelligently analyze it to obtain the intelligent analysis result corresponding to the image to be analyzed.
- the image processing system includes: a scene collection unit, a scene processing unit, a selection unit, a scene perception unit, and a scene fill light unit.
- the scene collection unit may include the above-mentioned optical lens, filter and image sensor 110.
- the scene fill light unit is the fill light device 120 described above.
- The function implemented by the scene processing unit is the function of the image processor 130 described above; specifically, the scene processing unit obtains the first image signal and the second image signal output by the scene collection unit, generates a first target image according to the first image signal, and generates a second target image according to the second image signal.
- The functions implemented by the selection unit and the scene perception unit are the functions implemented by the intelligent analysis device 140. Specifically: when the received selection signal is switched to the first selection signal, the first target image is acquired and determined to be the image to be analyzed, and the image to be analyzed is intelligently analyzed to obtain the corresponding intelligent analysis result; when the received selection signal is switched to the second selection signal, the second target image is acquired and determined to be the image to be analyzed, and the image to be analyzed is intelligently analyzed to obtain the corresponding intelligent analysis result.
- This solution applies near-infrared fill light to the target scene to regulate its light environment, so that the quality of the image signal received by the image sensor can be guaranteed, which in turn guarantees the quality of the image used for output or intelligent analysis. Therefore, this solution can improve the quality of the image to be analyzed for output or intelligent analysis.
- The multiple exposure of the image sensor 110 specifically includes: the image sensor 110 performs the multiple exposures according to a first exposure parameter, and the parameter type of the first exposure parameter includes at least one of exposure time and exposure gain;
- The fill light device performs near-infrared fill light during the exposure time period of the first preset exposure according to a first fill light parameter, and the parameter type of the first fill light parameter includes at least one of fill light intensity and fill light concentration.
- the exposure parameters and/or fill light parameters may be adjusted based on image information corresponding to the image to be analyzed.
- the image processing system provided by the embodiment of the present application may further include: a control unit 150;
- The control unit 150 is configured to obtain brightness information corresponding to the image to be analyzed, adjust the first fill light parameter to a second fill light parameter and the first exposure parameter to a second exposure parameter according to the brightness information, send the second fill light parameter to the fill light device 120, and synchronously send the second exposure parameter to the image sensor 110;
- the fill light device 120 performs near infrared fill light during the exposure time period of the first preset exposure, specifically: the fill light device 120 receives the second fill light parameter from the control unit and, according to the second fill light parameter, performs near infrared fill light during the exposure time period of the first preset exposure;
- the multiple exposure of the image sensor 110 is specifically: the image sensor 110 receives the second exposure parameter from the control unit, and performs the multiple exposure according to the second exposure parameter.
- the image processing system shown in FIG. 2 is only an example, and should not constitute a limitation on the embodiment of the application.
- in addition to the fill light device 120, the control unit 150 can be connected to the image sensor 110, the image processor 130, or the intelligent analysis device 140, so that the control unit 150 can interact with the image sensor 110, the image processor 130, or the intelligent analysis device 140 to complete image processing.
- the control unit 150 may be located in the same device as the fill light device 120, or may be located in a different device from the fill light device 120, which is reasonable.
- the function performed by the control unit 150 may be performed by the image processor 130 or the intelligent analysis device 140.
- the exposure parameters of the image sensor 110 and/or the fill light parameters of the fill light device 120 can be adjusted based on the brightness information corresponding to the image to be analyzed.
- the brightness information corresponding to the image to be analyzed may be obtained according to the intelligent analysis result corresponding to the image to be analyzed, which may specifically include:
- when the intelligent analysis result corresponding to the image to be analyzed includes the position information of the target of interest contained in the image to be analyzed, determining at least one target area in the image to be analyzed according to the position information;
- the average brightness of the at least one target area is determined as brightness information corresponding to the image to be analyzed.
- At least one target area can be selected from the areas indicated by the location information; in this case, each target area is the area where the target of interest is located.
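As an illustrative sketch only (the function name, the (x1, y1, x2, y2) box format, and the use of NumPy are assumptions, not part of the application), the average brightness over the target areas described above could be computed as follows:

```python
import numpy as np

def target_area_brightness(image, boxes):
    """Average brightness over target areas (hypothetical sketch).

    `boxes` are assumed (x1, y1, x2, y2) positions of targets of
    interest reported by the intelligent analysis; the brightness
    information is the mean of the per-area average brightness values.
    """
    values = []
    for x1, y1, x2, y2 in boxes:
        region = image[y1:y2, x1:x2]   # crop one target area
        values.append(region.mean())   # average brightness of that area
    return float(np.mean(values))

img = np.zeros((4, 4))
img[0:2, 0:2] = 100.0                 # one bright target area
b = target_area_brightness(img, [(0, 0, 2, 2)])
```

The resulting value can then serve as the brightness information fed to the parameter adjustment described below.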
- the adjusting the first exposure parameter to the second exposure parameter according to the brightness information corresponding to the image to be analyzed includes:
- the adjusting the first fill light parameter to the second fill light parameter according to the brightness information corresponding to the image to be analyzed may include:
- the first predetermined threshold and the third predetermined threshold may be the same value or different values.
- the second predetermined threshold and the fourth predetermined threshold may be the same value or different values.
- specific values of the first predetermined threshold, the second predetermined threshold, the third predetermined threshold, and the fourth predetermined threshold may be set according to empirical values.
- the first fill light parameter and the second fill light parameter are only used to distinguish the fill light parameters before and after adjustment, and do not have any limiting meaning.
- the first exposure parameter and the second exposure parameter are only used to distinguish the exposure parameters before and after adjustment, and do not have any limiting meaning.
- the degree of increase or decrease of the fill light parameter and the exposure parameter can also be set based on empirical values.
- the image processing system in this application further includes a control unit, which is used to adaptively control the fill light of the fill light device 120 and the exposure of the image sensor 110.
- the image processing system is embodied in the form of multiple units, and the multiple units jointly complete the image processing process.
- the division of the image processing system in FIG. 3(c) does not constitute a limitation on the present application, but is merely an exemplary description.
- the electronic device includes: a scene collection unit, a scene processing unit, a scene perception unit, a scene fill light unit, and a control unit.
- the scene collection unit may include: the above-mentioned optical lens, filter, and image sensor 110; the scene fill light unit is the above-mentioned fill light device 120; the control unit is the above-mentioned control unit 150; the scene processing unit implements the functions implemented by the above-mentioned image processor 130; and the scene perception unit implements the functions implemented by the above-mentioned intelligent analysis device 140.
- the control of the scene fill light unit and the scene collection unit in the system shown in FIG. 3(b) can also refer to FIG. 3(c): a control unit can be added to perform the fill light control of the scene fill light unit and the collection control of the scene collection unit, and the fill light control of the scene fill light unit and the collection control of the scene collection unit can also be adjusted according to the intelligent analysis results fed back by the scene perception unit.
- the processing performed by the image processor 130 may further include the following step: outputting the second target image for display; for example, the output second target image may be displayed on a display device outside the system.
- the image processor 130 may output only the second target image, or may simultaneously output the second target image and the first target image.
- the specific image to be output is determined according to actual needs, and is not limited here.
- the content related to generating a first target image based on the first image signal and generating a second target image based on the second image signal will be described below.
- For the above single-sensor perception system, there are many specific implementations of the image processor 130 generating the first target image according to the first image signal. Those skilled in the art can understand that, since the signals of the channels of a sensor including an IR channel and at least two non-IR channels are staggered, directly magnifying the image signal obtained by the sensor produces a mosaic phenomenon in the image and poor clarity, so demosaicing is needed to generate an image with true detail. In order to obtain a clear and accurate first target image, the first image signal may be demosaiced, and the demosaiced image signal may then be used to generate the first target image. Based on this, in one implementation, the image processor 130 generates the first target image according to the first image signal, including:
- interpolation processing is performed in an averaging manner, and the first target image is obtained according to the image after interpolation processing.
- the interpolation-processed image may be determined as the first target image; or, the interpolation-processed image may be subjected to image enhancement processing, and the image after the image enhancement processing may be determined as the first target image.
- image enhancement processing may include but is not limited to: histogram equalization, gamma correction, contrast enhancement, etc., where histogram equalization converts the histogram of the original image into an image with a uniform probability density (the ideal case), gamma correction transforms the gray values of the image using a non-linear (exponential) function, and contrast enhancement transforms the gray values of the image using a linear function.
- performing interpolation processing in an averaging manner according to the channel values of the plurality of pixels included in the neighborhood of each pixel of the first image signal includes:
- interpolating each channel value of each photosensitive channel of the first image signal to obtain interpolated channel values of each photosensitive channel corresponding to each pixel in the first image signal; for each pixel, averaging the interpolated channel values of the corresponding photosensitive channels to obtain the image after interpolation processing.
- the interpolation algorithm used for interpolation may be a bilinear interpolation algorithm or a bicubic interpolation algorithm.
- the embodiments of the present application do not limit the interpolation algorithm.
- the first target image is obtained by averaging the channel values of the respective photosensitive channels corresponding to each pixel.
- the first target image is the demosaiced image.
- the first target image is an image including only a luminance signal.
- the luminance value of each pixel is: the average value of the corresponding channel values in the first image signal.
- In one implementation, the sensor including an IR channel and at least two non-IR channels is an RGBIR sensor, where interpolating in an averaging manner according to the channel values of the plurality of pixels included in the neighborhood of each pixel of the first image signal includes:
- interpolating each IR photosensitive channel, R photosensitive channel, G photosensitive channel, and B photosensitive channel of the first image signal respectively to obtain the interpolated channel value of each photosensitive channel corresponding to each pixel in the first image signal;
- averaging the interpolated channel values of each photosensitive channel corresponding to each pixel to obtain the image after interpolation processing.
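The per-channel interpolation followed by per-pixel averaging described above can be sketched as follows. This is a simplified illustration, not the claimed implementation: a hypothetical 2x2 RGBIR mosaic layout is assumed, and nearest-neighbour upsampling stands in for the bilinear or bicubic interpolation mentioned earlier:

```python
import numpy as np

def demosaic_luminance(raw):
    """Demosaic-then-average sketch for an assumed 2x2 RGBIR mosaic.

    Each channel is first interpolated to full resolution (here with
    nearest-neighbour upsampling as a stand-in for bilinear/bicubic),
    then the four interpolated planes are averaged per pixel, giving a
    luminance-only image as described for the first target image.
    """
    h, w = raw.shape
    assert h % 2 == 0 and w % 2 == 0
    planes = []
    for dy, dx in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        sub = raw[dy::2, dx::2].astype(np.float64)              # one channel's samples
        full = np.repeat(np.repeat(sub, 2, axis=0), 2, axis=1)  # interpolate to full size
        planes.append(full)
    return np.mean(planes, axis=0)  # per-pixel average of the four channels

flat = np.full((4, 4), 100.0)      # a flat mosaic stays flat after demosaicing
out = demosaic_luminance(flat)
```

Because every pixel of the result is the average of the interpolated channel values, the output contains only a luminance signal, consistent with the description above.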
- the image processor 130 generating the second target image according to the second image signal may include:
- the channel value adjustment for each non-IR photosensitive channel specifically includes: subtracting, from each pre-adjustment channel value of the non-IR photosensitive channel, the IR parameter value corresponding to the pixel position, where the IR parameter value is the product of the IR value at the corresponding pixel position and a preset correction value, and the IR value is the IR value sensed by the IR photosensitive channel at the corresponding pixel position.
- the interpolation-processed image may be determined as the second target image; or, the interpolation-processed image may be subjected to image enhancement processing, and the image after the image enhancement processing may be determined as the second target image.
- the specific method for determining the second target image is not limited in this application.
- the preset correction value can be set according to the actual situation.
- the preset correction value can usually be set to 1; of course, according to the actual situation, the preset correction value can be set to any integer or decimal in the range 0 to 1024, and those skilled in the art can understand that the value of the preset correction value is not limited to this.
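The IR-removal adjustment described above, subtracting the product of the per-pixel IR value and the preset correction value from each non-IR channel, can be sketched as follows; the array layout, function name, and clipping to a non-negative range are assumptions for illustration:

```python
import numpy as np

def remove_infrared(rgb, ir, correction=1.0):
    """Subtract the IR contribution from each non-IR channel (sketch).

    For each non-IR channel value, the IR value sensed at the same
    pixel position multiplied by the preset correction value is
    subtracted; negative results are clipped to zero.
    """
    rgb = rgb.astype(np.float64)
    adjusted = rgb - correction * ir[..., np.newaxis]  # broadcast IR over channels
    return np.clip(adjusted, 0, None)

pix = np.array([[[120.0, 90.0, 60.0]]])  # one RGB pixel (H=1, W=1, C=3)
ir = np.array([[30.0]])                  # IR value at the same position
out = remove_infrared(pix, ir, correction=1.0)
# out[0, 0] == [90., 60., 30.]
```

With the correction value set to 1 (the usual setting mentioned above), exactly the sensed IR value is removed from each channel.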
- the image processor 130 generates the second target image according to the second image signal, specifically:
- the image processor 130 generating the second target image according to the second image signal may include:
- for M frames of second image signals including the current second image signal, performing wide dynamic synthesis processing on the M frames of second image signals to obtain a wide dynamic image, and performing infrared removal processing on the wide dynamic image to obtain the second target image; wherein the infrared removal processing includes:
- the number of M frames is not limited, and M is less than the total number of exposures in one exposure period.
- High Dynamic Range (HDR) images are also known as wide dynamic range images. Compared with low dynamic range images, they have no local overexposure and can reflect more image details. Therefore, in this embodiment of the application, in order to obtain a visible light image with more image details, at least two frames of second image signals can be subjected to wide dynamic synthesis processing to obtain a wide dynamic image signal.
- the process of performing infrared removal processing on the wide dynamic image signal to obtain a visible light image can refer to the aforementioned processing process for a frame of second image signal.
- a frame of second image signal may also be selected, and a visible light image may be generated based on the selected frame of second image signal.
- the specific generation process is the same as the generation process when the second image signal is one frame, which will not be repeated here.
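A minimal sketch of wide dynamic synthesis over multiple frames is shown below. The application does not specify a synthesis method; the mid-tone-weighted fusion used here is a common stand-in and is purely illustrative:

```python
import numpy as np

def wide_dynamic_synthesis(frames):
    """Fuse M differently exposed frames into one wide dynamic image (sketch).

    Per-pixel weights favour mid-range (well-exposed) 8-bit values and
    fall off toward the dark and clipped extremes, so each output pixel
    is dominated by the frame that exposed it best.
    """
    frames = np.stack([f.astype(np.float64) for f in frames])
    weights = 1.0 - np.abs(frames / 255.0 - 0.5) * 2.0  # peak at mid-gray
    weights = np.clip(weights, 1e-6, None)              # avoid division by zero
    return (weights * frames).sum(axis=0) / weights.sum(axis=0)

short = np.array([[40.0, 250.0]])    # short exposure: dark, but highlight intact
long_ = np.array([[120.0, 255.0]])   # long exposure: bright, highlight clipped
fused = wide_dynamic_synthesis([short, long_])
```

In the example, the clipped highlight in the long exposure is essentially ignored in favour of the short exposure's value, which is how the fused image avoids local overexposure.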
- the intelligent analysis in this application includes, but is not limited to, analysis of the types of objects included in the target scene, the areas where the objects are located, etc.
- the results of the intelligent analysis may include, but are not limited to: the types of objects included in the target scene, coordinate information of the areas where the objects are located, location information of targets of interest, etc.
- the intelligent analysis device 140 can detect and identify target objects based on the image to be analyzed. For example, it can detect, according to the image to be analyzed, whether a target object exists in the target scene and the location of any existing target object; for another example, it can identify a specific target object in the target scene according to the image to be analyzed, and identify the category, attribute information, etc. of the target object.
- the target object may be a human face, a vehicle, a license plate, or other objects or objects.
- the intelligent analysis device 140 may analyze the image to be analyzed based on a specific algorithm so as to perform image processing on the target scene, or may analyze the image to be analyzed by using a neural network model so as to perform image processing on the target scene; both are reasonable.
- the intelligent analysis device 140 may perform feature enhancement processing on the feature image before analyzing the feature image corresponding to the image to be analyzed.
- the intelligent analysis device 140 performs intelligent analysis on the image to be analyzed to obtain an intelligent analysis result corresponding to the image to be analyzed, including:
- an intelligent analysis result corresponding to the image to be analyzed is obtained, and the intelligent analysis result includes an interest target contained in the image to be analyzed and/or location information of the interest target.
- one or more frames of feature images can be generated, and then each frame of feature images is analyzed to obtain the results of the intelligent analysis.
- the feature image can be subjected to feature enhancement processing.
- the feature enhancement processing includes extreme value enhancement processing, where the extreme value enhancement processing is specifically: localized extreme value filtering on the feature image.
- the so-called extreme value may be a maximum value or a minimum value.
- the processing of the extreme value enhancement includes: dividing the feature image into blocks to obtain multiple image blocks; for each image block, determining the maximum value of the pixels included in the image block as the processing result corresponding to the image block; and combining the processing results to obtain the image after the extreme value enhancement processing.
- the number of image blocks is the resolution of the image after extreme value enhancement processing. It should be noted that the number of image blocks can be set according to the actual situation, which is not limited in this application. For ease of understanding, taking the number of image blocks as 100 as an example, the process of extreme value enhancement processing is introduced:
- the maximum value in the pixels included in the image block is determined as the processing result corresponding to the image block, and 100 processing results are obtained;
- the 100 processing results are merged according to the positional relationship of the image blocks to obtain an image containing 100 pixels.
- the specific implementation method of the extreme value enhancement processing is not limited to the above-mentioned method. For example: each pixel position can be traversed; for each pixel position, a maximum value is determined for that pixel position, and the maximum value is used to update the pixel value of that pixel position. For any pixel position, the way of determining the maximum value may be: determining each pixel position adjacent to that pixel position, determining the maximum value of the pixels at the adjacent pixel positions and at that pixel position, and using the determined maximum value as the maximum value of that pixel position.
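The block-division variant of the extreme value enhancement described above can be sketched as follows (the block size and function name are assumptions); note that the number of blocks equals the resolution of the output image:

```python
import numpy as np

def extreme_value_enhance(feature, block=2):
    """Block-wise extreme value (maximum) enhancement (sketch).

    The feature image is divided into non-overlapping blocks; the maximum
    of each block becomes one pixel of the output, so an H x W input with
    block size b yields an (H/b) x (W/b) result.
    """
    h, w = feature.shape
    assert h % block == 0 and w % block == 0
    # reshape so each block occupies axes 1 and 3, then take the block max
    view = feature.reshape(h // block, block, w // block, block)
    return view.max(axis=(1, 3))

feat = np.array([[1, 5, 2, 0],
                 [3, 4, 1, 7],
                 [0, 0, 9, 2],
                 [6, 1, 3, 3]])
out = extreme_value_enhance(feat, block=2)
# out == [[5, 7], [6, 9]]
```

Using the minimum instead of the maximum in the reduction gives the minimum-value variant mentioned above.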
- an embodiment of the present application further provides an image processing method.
- the image processing method provided in the embodiments of the present application can be applied to an electronic device having the functions of an image processor, an intelligent analysis device, and a control unit.
- the functions performed by the electronic device are the same as the functions performed by the image processor and the intelligent analysis device in the above embodiments, and the specific implementation of the image processing method may refer to the foregoing embodiments.
- an image processing method provided by an embodiment of the present application may include:
- S801 Obtain a first image signal and a second image signal output by the image sensor
- the image sensor generates and outputs a first image signal and a second image signal through multiple exposures, wherein the first image signal is an image signal generated according to a first preset exposure, the second image signal is an image signal generated according to a second preset exposure, and the first preset exposure and the second preset exposure are two of the multiple exposures; the fill light device performs near infrared fill light during the exposure time period of the first preset exposure, and does not perform near infrared fill light during the exposure time period of the second preset exposure.
- S804 Perform intelligent analysis on the image to be analyzed to obtain an intelligent analysis result corresponding to the image to be analyzed.
- the image sensor includes a plurality of photosensitive channels, the plurality of photosensitive channels includes an IR photosensitive channel, and further includes at least two of an R photosensitive channel, a G photosensitive channel, a B photosensitive channel, and a W photosensitive channel.
- the plurality of photosensitive channels generate and output the first image signal and the second image signal through the multiple exposures;
- the R photosensitive channel is used for sensing light in the red and near infrared bands;
- the G photosensitive channel is used for sensing light in the green and near infrared bands;
- the B photosensitive channel is used for sensing light in the blue and near infrared bands;
- IR denotes the infrared photosensitive channel, used to sense light in the near infrared band;
- W denotes the all-pass photosensitive channel, used to sense light in the full band.
- the image sensor is an RGBIR sensor, an RGBWIR sensor, an RWBIR sensor, an RWGIR sensor, or a BWGIR sensor;
- R represents R photosensitive channel
- G represents G photosensitive channel
- B represents B photosensitive channel
- IR represents IR photosensitive channel
- W represents all-pass photosensitive channel.
- the acquiring the image to be analyzed from the first target image and the second target image includes:
- the acquiring the image to be analyzed from the first target image and the second target image includes:
- when the received selection signal is switched to the second selection signal, the second target image is acquired, and the second target image is determined as the image to be analyzed.
- an image processing method provided by an embodiment of the present application further includes:
- the first control signal is used to instruct the fill light device on the fill light time for performing near infrared fill light; specifically, during the exposure time period of the first preset exposure, the start time of performing near infrared fill light is not earlier than the exposure start time of the first preset exposure, and the end time of performing near infrared fill light is not later than the exposure end time of the first preset exposure.
- the first control signal is also used to indicate the number of fill light operations of the fill light device; specifically, the number of near infrared fill light operations of the fill light device per unit time length is lower than the number of exposures of the image sensor per unit time length, wherein one or more exposures are spaced between every two adjacent near infrared fill light operations.
- the multiple exposures of the image sensor include odd exposures and even exposures; wherein,
- the first preset exposure is one of odd exposures
- the second preset exposure is one of even exposures
- the first preset exposure is one of the even exposures
- the second preset exposure is one of the odd exposures
- the first preset exposure is one of the specified odd exposures
- the second preset exposure is one of the other exposures other than the specified odd exposures
- the first preset exposure is one of the specified even-numbered exposures
- the second preset exposure is one of the other exposures other than the specified even-numbered exposures.
- an image processing method provided by an embodiment of the present application further includes:
- the acquiring brightness information corresponding to the image to be analyzed includes:
- when the intelligent analysis result corresponding to the image to be analyzed includes the position information of the target of interest contained in the image to be analyzed, determining at least one target area in the image to be analyzed according to the position information;
- the average brightness of the at least one target area is determined as brightness information corresponding to the image to be analyzed.
- the adjusting the first exposure parameter used for exposure of the image sensor to the second exposure parameter according to the brightness information corresponding to the image to be analyzed includes:
- when the brightness indicated by the brightness information is higher than a first predetermined threshold, the first exposure parameter used by the image sensor for exposure is adjusted down to obtain the second exposure parameter; when the brightness is lower than a second predetermined threshold, the first exposure parameter is adjusted up to obtain the second exposure parameter;
- the first predetermined threshold is higher than the second predetermined threshold.
- the adjusting the first fill light parameter used by the fill light device to the second fill light parameter according to the brightness information corresponding to the image to be analyzed includes:
- when the brightness indicated by the brightness information is higher than a third predetermined threshold, the first fill light parameter used by the fill light device for fill light is adjusted down to obtain the second fill light parameter; when the brightness is lower than a fourth predetermined threshold, the first fill light parameter is adjusted up to obtain the second fill light parameter;
- the third predetermined threshold is higher than the fourth predetermined threshold.
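A hedged sketch of the threshold-based adjustment of exposure and fill light parameters is given below; all names, threshold values, and the step size are assumptions for illustration only:

```python
def adjust_parameters(brightness, exposure, fill_light,
                      th_high=180, th_low=60, step=0.1):
    """Threshold-based exposure / fill light adjustment (assumed values).

    If the target-area brightness exceeds the upper threshold, exposure
    and fill light are turned down; if it falls below the lower
    threshold, they are turned up; otherwise they are left unchanged.
    The upper threshold is higher than the lower one, as in the text.
    """
    if brightness > th_high:        # scene too bright: reduce both
        exposure *= (1 - step)
        fill_light *= (1 - step)
    elif brightness < th_low:       # scene too dark: increase both
        exposure *= (1 + step)
        fill_light *= (1 + step)
    return exposure, fill_light

exp2, fill2 = adjust_parameters(brightness=200, exposure=10.0, fill_light=1.0)
```

In practice the adjusted values would be sent to the image sensor and the fill light device for the next exposure period, as described for the control unit 150.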
- generating the first target image according to the first image signal includes:
- interpolation processing is performed in an averaging manner, and the first target image is obtained according to the image after interpolation processing.
- the obtaining the first target image according to the interpolation-processed image includes:
- subjecting the image after interpolation processing to image enhancement processing, and determining the image after the image enhancement processing as the first target image.
- the interpolating, in an averaging manner, of the channel values of the plurality of pixels included in the neighborhood of each pixel of the first image signal includes:
- averaging the interpolated channel values of each photosensitive channel corresponding to each pixel to obtain the image after interpolation processing.
- the generating the second target image according to the second image signal includes:
- the channel value adjustment for each non-IR photosensitive channel specifically includes: subtracting, from each pre-adjustment channel value of the non-IR photosensitive channel, the IR parameter value corresponding to the pixel position, where the IR parameter value is the product of the IR value at the corresponding pixel position and a preset correction value, and the IR value is the IR value sensed by the IR photosensitive channel at the corresponding pixel position.
- the generating the second target image according to the second image signal includes:
- for M frames of second image signals including the current second image signal, performing wide dynamic synthesis processing on the M frames of second image signals to obtain a wide dynamic image, and performing infrared removal processing on the wide dynamic image to obtain the second target image; wherein the infrared removal processing includes:
- performing intelligent analysis on the image to be analyzed to obtain an intelligent analysis result corresponding to the image to be analyzed includes:
- an intelligent analysis result corresponding to the image to be analyzed is obtained, and the intelligent analysis result includes an interest target contained in the image to be analyzed and/or location information of the interest target.
- the steps of fill light control and exposure control can be performed by the image processor or the intelligent analysis device, or by the controller in a device integrating the image processor, the intelligent analysis device, and the controller; both are reasonable.
- this solution uses near infrared fill light on the target scene to regulate the light environment of the target scene, so that the quality of the image signal received by the image sensor can be guaranteed, which in turn guarantees the image quality of the image used for output or intelligent analysis. Therefore, this solution can improve the quality of the image to be analyzed for output or intelligent analysis.
- an embodiment of the present application further provides an image processing device.
- an image processing device provided by an embodiment of the present application may include:
- the image signal obtaining module 910 is configured to obtain the first image signal and the second image signal output by the image sensor, wherein the image sensor generates and outputs the first image signal and the second image signal through multiple exposures, the first image signal is an image signal generated according to a first preset exposure, the second image signal is an image signal generated according to a second preset exposure, and the first preset exposure and the second preset exposure are two of the multiple exposures; the fill light device performs near-infrared fill light during the exposure time period of the first preset exposure, and does not perform near-infrared fill light during the exposure time period of the second preset exposure;
- An image generation module 920 configured to generate a first target image based on the first image signal, and generate a second target image based on the second image signal;
- An image selection module 930 configured to obtain an image to be analyzed from the first target image and the second target image
- the image analysis module 940 is configured to perform intelligent analysis on the image to be analyzed to obtain an intelligent analysis result corresponding to the image to be analyzed.
- this solution uses near infrared fill light on the target scene to regulate the light environment of the target scene, so that the quality of the image signal received by the image sensor can be guaranteed, which in turn guarantees the image quality of the image used for output or intelligent analysis. Therefore, this solution can improve the quality of the image to be analyzed for output or intelligent analysis.
- the image selection module 930 is configured to: obtain the first target image from the first target image and the second target image, and determine the first target image as the image to be analyzed; or, obtain the second target image from the first target image and the second target image, and determine the second target image as the image to be analyzed.
- the image selection module 930 is configured to: when the received selection signal is switched to the first selection signal, acquire the first target image and determine the first target image as the image to be analyzed; When the received selection signal is switched to the second selection signal, the second target image is acquired, and the second target image is determined as the image to be analyzed.
- an image processing device provided by an embodiment of the present application further includes:
- the signal sending module is used to send a first control signal to the fill light device, and the first control signal is used to control the fill light device to perform near infrared fill light during the exposure period of the first preset exposure , During the exposure period of the second preset exposure, near infrared fill light is not performed.
- the first control signal is used to instruct the fill light device on the fill light time for performing near infrared fill light; specifically, during the exposure time period of the first preset exposure, the start time of performing near infrared fill light is not earlier than the exposure start time of the first preset exposure, and the end time of performing near infrared fill light is not later than the exposure end time of the first preset exposure.
- the first control signal is also used to indicate the number of fill light operations of the fill light device; specifically, the number of near infrared fill light operations of the fill light device per unit time length is lower than the number of exposures of the image sensor per unit time length, wherein one or more exposures are spaced between every two adjacent near infrared fill light operations.
- the multiple exposures of the image sensor include odd exposures and even exposures; the first control signal is used to instruct the fill light device to perform near infrared fill light in the first preset exposure; among them,
- the first preset exposure is one of odd exposures
- the second preset exposure is one of even exposures
- the first preset exposure is one of the even exposures
- the second preset exposure is one of the odd exposures
- the first preset exposure is one of the specified odd exposures
- the second preset exposure is one of the other exposures other than the specified odd exposures
- the first preset exposure is one of the specified even-numbered exposures
- the second preset exposure is one of the other exposures other than the specified even-numbered exposures.
- an image processing device provided by an embodiment of the present application further includes:
- the parameter adjustment module is used to obtain the brightness information corresponding to the image to be analyzed; adjust, according to the brightness information corresponding to the image to be analyzed, the first fill light parameter used by the fill light device for fill light to the second fill light parameter, and adjust the first exposure parameter used by the image sensor for exposure to the second exposure parameter; and send the second fill light parameter to the fill light device and synchronously send the second exposure parameter to the image sensor, so that the fill light device receives the second fill light parameter and performs near-infrared fill light during the exposure time period of the first preset exposure according to the second fill light parameter, and the image sensor receives the second exposure parameter and performs the multiple exposures according to the second exposure parameter.
- the parameter adjustment module acquiring brightness information corresponding to the image to be analyzed includes:
- when the intelligent analysis result corresponding to the image to be analyzed includes position information of a target of interest contained in the image to be analyzed, determining at least one target area in the image to be analyzed according to the position information;
- determining the average brightness of the at least one target area as the brightness information corresponding to the image to be analyzed.
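A minimal sketch of this brightness measurement (the function name and region format are our assumptions): average pixel brightness over the detected target areas only, rather than over the whole image.

```python
def region_average_brightness(gray, regions):
    """Average brightness over one or more target regions.

    gray: 2-D list of pixel brightness values; regions: list of
    (top, left, bottom, right) boxes with exclusive bottom/right bounds,
    e.g. derived from the position information of targets of interest."""
    total, count = 0, 0
    for top, left, bottom, right in regions:
        for row in gray[top:bottom]:
            vals = row[left:right]
            total += sum(vals)
            count += len(vals)
    return total / count if count else 0.0
```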
- the parameter adjustment module adjusting the first exposure parameter used by the image sensor to the second exposure parameter according to the brightness information corresponding to the image to be analyzed includes:
- when the brightness information is higher than a first predetermined threshold, lowering the first exposure parameter used by the image sensor for exposure to obtain the second exposure parameter; when the brightness information is lower than a second predetermined threshold, raising the first exposure parameter to obtain the second exposure parameter;
- the first predetermined threshold is higher than the second predetermined threshold.
- the parameter adjustment module adjusting the first fill light parameter used by the fill light device to the second fill light parameter according to the brightness information corresponding to the image to be analyzed includes:
- when the brightness information is higher than a third predetermined threshold, lowering the first fill light parameter used by the fill light device to obtain the second fill light parameter;
- when the brightness information is lower than a fourth predetermined threshold, raising the first fill light parameter to obtain the second fill light parameter;
- the third predetermined threshold is higher than the fourth predetermined threshold.
- the image generation module 920 generating the first target image according to the first image signal includes:
- performing interpolation in an averaging manner on the channel values of the multiple pixels contained in the neighborhood of each pixel of the first image signal, and obtaining the first target image according to the interpolated image.
- obtaining the first target image according to the interpolated image includes:
- determining the interpolated image as the first target image; or performing image enhancement processing on the interpolated image and determining the enhanced image as the first target image.
- interpolating in an averaging manner the channel values of the multiple pixels contained in the neighborhood of each pixel of the first image signal includes:
- interpolating each channel value of each photosensitive channel of the first image signal separately, to obtain for every pixel the interpolated channel value of each photosensitive channel; then averaging, for each pixel, the interpolated channel values of its photosensitive channels to obtain the interpolated image.
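One way to picture the per-channel interpolation step (a sketch under our own conventions: missing samples of a channel plane are `None` and are filled from the 3x3 neighborhood):

```python
def neighborhood_average(plane, y, x):
    """Fill one missing channel sample by averaging the known (non-None)
    values in the 3x3 neighborhood of (y, x) in a single channel plane."""
    vals = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            ny, nx = y + dy, x + dx
            if 0 <= ny < len(plane) and 0 <= nx < len(plane[0]):
                v = plane[ny][nx]
                if v is not None:
                    vals.append(v)
    return sum(vals) / len(vals) if vals else 0.0
```

After every channel plane has been filled this way, the per-pixel average over all photosensitive channels yields the interpolated image described above.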
- the image generation module 920 generating the second target image according to the second image signal includes:
- traversing the second image signal, performing channel value adjustment on each traversed non-IR photosensitive channel, interpolating each channel value of each adjusted non-IR photosensitive channel, and obtaining the second target image according to the interpolated image; the channel value adjustment for each non-IR photosensitive channel is specifically: subtracting, from each pre-adjustment channel value of the non-IR photosensitive channel, the IR parameter value corresponding to the pixel position, where the IR parameter value is the product of the IR value at that pixel position and a preset correction value, and the IR value is the value sensed by the IR photosensitive channel at that pixel position.
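The channel value adjustment amounts to a per-pixel subtraction; a sketch (the clamp at zero is our addition, since negative channel values are not meaningful):

```python
def remove_ir_component(channel_value, ir_value, correction=1.0):
    """Subtract the IR contribution from a non-IR channel value:
    new = old - ir_value * correction, clamped at zero. `correction` is
    the preset correction value; `ir_value` is the value sensed by the
    IR photosensitive channel at the same pixel position."""
    return max(channel_value - ir_value * correction, 0)
```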
- the image generation module 920 generating the second target image according to the second image signal includes:
- obtaining M frames of second image signals including the current second image signal, performing wide dynamic synthesis on the M frames of second image signals to obtain a wide dynamic image, and performing infrared removal processing on the wide dynamic image to obtain the second target image; the infrared removal processing includes: traversing the wide dynamic image, performing channel value adjustment on each traversed non-IR photosensitive channel, interpolating each channel value of each adjusted non-IR photosensitive channel, and obtaining the second target image according to the interpolated image.
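The application does not detail the wide dynamic synthesis itself; as a loose sketch only, M grayscale frames could be merged per pixel with weights favoring well-exposed values (this weighting scheme is entirely our assumption, not the applicant's algorithm):

```python
def wide_dynamic_merge(frames, mid=128.0):
    """Merge M grayscale frames (2-D lists of equal size) per pixel,
    weighting each value by its closeness to mid-range brightness,
    so over- and under-exposed samples contribute less."""
    h, w = len(frames[0]), len(frames[0][0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for f in frames:
                v = f[y][x]
                wgt = 1.0 / (1.0 + abs(v - mid))  # well-exposed -> heavier
                num += wgt * v
                den += wgt
            out[y][x] = num / den
    return out
```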
- the image analysis module 940 performing intelligent analysis on the image to be analyzed to obtain the intelligent analysis result corresponding to the image to be analyzed includes:
- obtaining a corresponding feature image from the image to be analyzed and performing feature enhancement processing on it to obtain an enhanced feature image; then obtaining, according to the enhanced feature image, the intelligent analysis result corresponding to the image to be analyzed, where the intelligent analysis result includes a target of interest contained in the image to be analyzed and/or position information of the target of interest.
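Claims 21 and 22 spell out one such feature enhancement, a localized extremum (blockwise maximum) filtering; a minimal sketch (function name ours):

```python
def extremum_enhance(feature, block=2):
    """Blockwise maximum filtering: split the feature map into
    block x block tiles and keep each tile's maximum, then merge the
    per-tile results into the extremum-enhanced image."""
    h, w = len(feature), len(feature[0])
    return [
        [max(feature[y + dy][x + dx]
             for dy in range(min(block, h - y))    # clip at bottom edge
             for dx in range(min(block, w - x)))   # clip at right edge
         for x in range(0, w, block)]
        for y in range(0, h, block)
    ]
```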
- an embodiment of the present application further provides an electronic device.
- the electronic device includes a processor 1001, a communication interface 1002, a memory 1003, and a communication bus 1004, where the processor 1001, the communication interface 1002, and the memory 1003 communicate with each other through the communication bus 1004;
- the memory 1003 is used to store a computer program;
- the processor 1001 is used to implement an image processing method provided in the embodiments of the present application when executing the program stored in the memory 1003.
- the communication bus mentioned for the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like.
- the communication bus can be divided into an address bus, a data bus, and a control bus. For ease of representation, only one thick line is used in the figure, but it does not mean that there is only one bus or one type of bus.
- the communication interface is used for communication between the above electronic device and other devices.
- the memory may include random access memory (RAM) or non-volatile memory (NVM), for example at least one disk memory.
- the memory may also be at least one storage device located remotely from the foregoing processor.
- the aforementioned processor may be a general-purpose processor, such as a central processing unit (CPU) or a network processor (NP); it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
- an embodiment of the present application also provides a computer-readable storage medium in which a computer program is stored; when the computer program is executed by a processor, the image processing method provided by the embodiments of the present application is implemented.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
Abstract
Description
Claims (35)
- An image processing system, characterized by comprising: an image sensor, configured to generate and output a first image signal and a second image signal through multiple exposures, wherein the first image signal is an image signal generated according to a first preset exposure, the second image signal is an image signal generated according to a second preset exposure, and the first preset exposure and the second preset exposure are two of the multiple exposures; a fill light device, configured to perform near-infrared fill light in a strobe manner, specifically: the fill light device performs near-infrared fill light during the exposure period of the first preset exposure and does not perform near-infrared fill light during the exposure period of the second preset exposure; an image processor, configured to receive the first image signal and the second image signal output by the image sensor, generate a first target image according to the first image signal, and generate a second target image according to the second image signal; and an intelligent analysis device, configured to obtain an image to be analyzed from the first target image and the second target image, and perform intelligent analysis on the image to be analyzed to obtain an intelligent analysis result corresponding to the image to be analyzed.
- The system according to claim 1, wherein obtaining the image to be analyzed from the first target image and the second target image comprises: obtaining the first target image from the first target image and the second target image, and determining the first target image as the image to be analyzed; or obtaining the second target image from the first target image and the second target image, and determining the second target image as the image to be analyzed.
- The system according to claim 1, wherein obtaining the image to be analyzed from the first target image and the second target image comprises: when a received selection signal switches to a first selection signal, obtaining the first target image and determining the first target image as the image to be analyzed; when the received selection signal switches to a second selection signal, obtaining the second target image and determining the second target image as the image to be analyzed.
- The system according to claim 1, wherein the image processor is further configured to output the second target image.
- The system according to claim 1, wherein the image sensor comprises multiple photosensitive channels, the multiple photosensitive channels comprising an IR photosensitive channel and at least two of an R photosensitive channel, a G photosensitive channel, a B photosensitive channel, and a W photosensitive channel, and the multiple photosensitive channels generate and output the first image signal and the second image signal through the multiple exposures; wherein the R photosensitive channel is used to sense light in the red and near-infrared bands, the G photosensitive channel is used to sense light in the green and near-infrared bands, the B photosensitive channel is used to sense light in the blue and near-infrared bands, IR denotes the infrared photosensitive channel, used to sense light in the near-infrared band, and W denotes the all-pass photosensitive channel, used to sense light across the full band.
- The system according to claim 5, wherein the image sensor is an RGBIR sensor, an RGBWIR sensor, an RWBIR sensor, an RWGIR sensor, or a BWGIR sensor; wherein R denotes the R photosensitive channel, G denotes the G photosensitive channel, B denotes the B photosensitive channel, IR denotes the IR photosensitive channel, and W denotes the all-pass photosensitive channel.
- The system according to any one of claims 1 to 6, wherein the fill light device performing near-infrared fill light during the exposure period of the first preset exposure is specifically: during the exposure period of the first preset exposure, the start time of the near-infrared fill light is no earlier than the exposure start time of the first preset exposure, and the end time of the near-infrared fill light is no later than the exposure end time of the first preset exposure.
- The system according to claim 7, wherein the number of near-infrared fill light operations of the fill light device per unit time is lower than the number of exposures of the image sensor per unit time, and one or more exposures are spaced between every two adjacent near-infrared fill light periods.
- The system according to claim 7, wherein the multiple exposures include odd-numbered exposures and even-numbered exposures; the first preset exposure is one of the odd-numbered exposures and the second preset exposure is one of the even-numbered exposures; or the first preset exposure is one of the even-numbered exposures and the second preset exposure is one of the odd-numbered exposures; or the first preset exposure is one of specified odd-numbered exposures and the second preset exposure is one of the exposures other than the specified odd-numbered exposures; or the first preset exposure is one of specified even-numbered exposures and the second preset exposure is one of the exposures other than the specified even-numbered exposures.
- The system according to claim 1, wherein the multiple exposures of the image sensor are specifically: the image sensor performs the multiple exposures according to a first exposure parameter, the parameter type of the first exposure parameter including at least one of exposure time and exposure gain; and the fill light device performing near-infrared fill light during the exposure period of the first preset exposure is specifically: the fill light device performs near-infrared fill light during the exposure period of the first preset exposure according to a first fill light parameter, the parameter type of the first fill light parameter including at least one of fill light intensity and fill light concentration.
- The system according to claim 10, further comprising: a control unit, configured to obtain brightness information corresponding to the image to be analyzed, adjust the first fill light parameter to a second fill light parameter and the first exposure parameter to a second exposure parameter according to the brightness information corresponding to the image to be analyzed, send the second fill light parameter to the fill light device, and synchronously send the second exposure parameter to the image sensor; the fill light device performing near-infrared fill light during the exposure period of the first preset exposure is specifically: the fill light device receives the second fill light parameter from the control unit and performs near-infrared fill light during the exposure period of the first preset exposure according to the second fill light parameter; the multiple exposures of the image sensor are specifically: the image sensor receives the second exposure parameter from the control unit and performs the multiple exposures according to the second exposure parameter.
- The system according to claim 11, wherein obtaining the brightness information corresponding to the image to be analyzed comprises: when the intelligent analysis result corresponding to the image to be analyzed includes position information of a target of interest included in the image to be analyzed, determining at least one target area in the image to be analyzed according to the position information; and determining the average brightness of the at least one target area as the brightness information corresponding to the image to be analyzed.
- The system according to claim 11, wherein adjusting the first exposure parameter to the second exposure parameter according to the brightness information corresponding to the image to be analyzed comprises: when the brightness information is higher than a first predetermined threshold, lowering the first exposure parameter to obtain the second exposure parameter; when the brightness information is lower than a second predetermined threshold, raising the first exposure parameter to obtain the second exposure parameter; wherein the first predetermined threshold is higher than the second predetermined threshold.
- The system according to claim 11, wherein adjusting the first fill light parameter to the second fill light parameter according to the brightness information corresponding to the image to be analyzed comprises: when the brightness information is higher than a third predetermined threshold, lowering the first fill light parameter to obtain the second fill light parameter; when the brightness information is lower than a fourth predetermined threshold, raising the first fill light parameter to obtain the second fill light parameter; wherein the third predetermined threshold is higher than the fourth predetermined threshold.
- The system according to claim 1, wherein generating the first target image according to the first image signal comprises: performing interpolation in an averaging manner on the channel values of the multiple pixels included in the neighborhood of each pixel of the first image signal, and obtaining the first target image according to the interpolated image.
- The system according to claim 15, wherein obtaining the first target image according to the interpolated image comprises: determining the interpolated image as the first target image; or performing image enhancement processing on the interpolated image and determining the enhanced image as the first target image.
- The system according to claim 15, wherein performing interpolation in an averaging manner on the channel values of the multiple pixels included in the neighborhood of each pixel of the first image signal comprises: interpolating each channel value of each photosensitive channel of the first image signal separately, to obtain the interpolated channel values of each photosensitive channel corresponding to each pixel of the first image signal; and averaging, for each pixel, the interpolated channel values of its photosensitive channels to obtain the interpolated image.
- The system according to claim 1, wherein generating the second target image according to the second image signal comprises: traversing the second image signal, performing channel value adjustment on each traversed non-IR photosensitive channel, interpolating each channel value of each adjusted non-IR photosensitive channel, and obtaining the second target image according to the interpolated image; wherein the channel value adjustment for each non-IR photosensitive channel is specifically: subtracting, from each pre-adjustment channel value of the non-IR photosensitive channel, the IR parameter value corresponding to the pixel position, the IR parameter value being the product of the IR value at the corresponding pixel position and a preset correction value, and the IR value being the value sensed by the IR photosensitive channel at the corresponding pixel position.
- The system according to claim 1, wherein generating the second target image according to the second image signal comprises: obtaining M frames of second image signals including the current second image signal, performing wide dynamic synthesis on the M frames of second image signals to obtain a wide dynamic image, and performing infrared removal processing on the wide dynamic image to obtain the second target image; wherein the infrared removal processing comprises: traversing the wide dynamic image, performing channel value adjustment on each traversed non-IR photosensitive channel, interpolating each channel value of each adjusted non-IR photosensitive channel, and obtaining the second target image according to the interpolated image.
- The system according to claim 1, wherein performing intelligent analysis on the image to be analyzed to obtain the intelligent analysis result corresponding to the image to be analyzed comprises: obtaining a corresponding feature image from the image to be analyzed and performing feature enhancement processing to obtain an enhanced feature image; and obtaining, according to the enhanced feature image, the intelligent analysis result corresponding to the image to be analyzed, the intelligent analysis result including a target of interest contained in the image to be analyzed and/or position information of the target of interest.
- The system according to claim 20, wherein the feature enhancement processing includes extremum enhancement processing, the extremum enhancement processing being specifically: performing localized extremum filtering on the feature image.
- The system according to claim 21, wherein the extremum enhancement processing comprises: dividing the feature image into blocks to obtain multiple image blocks; for each image block, determining the maximum value among the pixels included in the image block as the processing result corresponding to that image block; and merging the processing results to obtain the image after extremum enhancement processing.
- An image processing method, characterized by comprising: obtaining a first image signal and a second image signal output by an image sensor, wherein the image sensor generates and outputs the first image signal and the second image signal through multiple exposures, the first image signal is an image signal generated according to a first preset exposure, the second image signal is an image signal generated according to a second preset exposure, and the first preset exposure and the second preset exposure are two of the multiple exposures; performing near-infrared fill light by a fill light device during the exposure period of the first preset exposure, the fill light device not performing near-infrared fill light during the exposure period of the second preset exposure; generating a first target image according to the first image signal and a second target image according to the second image signal; obtaining an image to be analyzed from the first target image and the second target image; and performing intelligent analysis on the image to be analyzed to obtain an intelligent analysis result corresponding to the image to be analyzed.
- The method according to claim 23, wherein obtaining the image to be analyzed from the first target image and the second target image comprises: obtaining the first target image from the first target image and the second target image and determining the first target image as the image to be analyzed; or obtaining the second target image from the first target image and the second target image and determining the second target image as the image to be analyzed.
- The method according to claim 23, wherein obtaining the image to be analyzed from the first target image and the second target image comprises: when a received selection signal switches to a first selection signal, obtaining the first target image and determining the first target image as the image to be analyzed; when the received selection signal switches to a second selection signal, obtaining the second target image and determining the second target image as the image to be analyzed.
- The method according to any one of claims 23 to 25, further comprising: sending a first control signal to the fill light device, wherein the first control signal is used to control the fill light device to perform near-infrared fill light during the exposure period of the first preset exposure and not to perform near-infrared fill light during the exposure period of the second preset exposure.
- The method according to claim 26, wherein the first control signal is used to indicate the fill light duration of the near-infrared fill light performed by the fill light device, specifically: during the exposure period of the first preset exposure, the start time of the near-infrared fill light is no earlier than the exposure start time of the first preset exposure, and the end time of the near-infrared fill light is no later than the exposure end time of the first preset exposure.
- The method according to claim 27, wherein the first control signal is further used to indicate the number of fill light operations of the fill light device, specifically: the number of near-infrared fill light operations performed by the fill light device per unit time is lower than the number of exposures performed by the image sensor per unit time, and one or more exposures are spaced between every two adjacent near-infrared fill light periods.
- The method according to claim 26, wherein the multiple exposures of the image sensor include odd-numbered exposures and even-numbered exposures; wherein the first preset exposure is one of the odd-numbered exposures and the second preset exposure is one of the even-numbered exposures; or the first preset exposure is one of the even-numbered exposures and the second preset exposure is one of the odd-numbered exposures; or the first preset exposure is one of specified odd-numbered exposures and the second preset exposure is one of the exposures other than the specified odd-numbered exposures; or the first preset exposure is one of specified even-numbered exposures and the second preset exposure is one of the exposures other than the specified even-numbered exposures.
- The method according to any one of claims 23 to 25, further comprising: obtaining brightness information corresponding to the image to be analyzed; adjusting, according to the brightness information corresponding to the image to be analyzed, the first fill light parameter used by the fill light device for fill light to a second fill light parameter, and the first exposure parameter used by the image sensor for exposure to a second exposure parameter; and sending the second fill light parameter to the fill light device while synchronously sending the second exposure parameter to the image sensor, so that: the fill light device receives the second fill light parameter and performs near-infrared fill light during the exposure period of the first preset exposure according to the second fill light parameter, and the image sensor receives the second exposure parameter and performs the multiple exposures according to the second exposure parameter.
- The method according to claim 30, wherein obtaining the brightness information corresponding to the image to be analyzed comprises: when the intelligent analysis result corresponding to the image to be analyzed includes position information of a target of interest included in the image to be analyzed, determining at least one target area in the image to be analyzed according to the position information; and determining the average brightness of the at least one target area as the brightness information corresponding to the image to be analyzed.
- The method according to claim 23, wherein performing intelligent analysis on the image to be analyzed to obtain the intelligent analysis result corresponding to the image to be analyzed comprises: obtaining a corresponding feature image from the image to be analyzed and performing feature enhancement processing to obtain an enhanced feature image; and obtaining, according to the enhanced feature image, the intelligent analysis result corresponding to the image to be analyzed, the intelligent analysis result including a target of interest contained in the image to be analyzed and/or position information of the target of interest.
- An image processing apparatus, characterized by comprising: an image signal obtaining module, configured to obtain a first image signal and a second image signal output by an image sensor, wherein the image sensor generates and outputs the first image signal and the second image signal through multiple exposures, the first image signal is an image signal generated according to a first preset exposure, the second image signal is an image signal generated according to a second preset exposure, the first preset exposure and the second preset exposure are two of the multiple exposures, a fill light device performs near-infrared fill light during the exposure period of the first preset exposure, and the fill light device does not perform near-infrared fill light during the exposure period of the second preset exposure; an image generation module, configured to generate a first target image according to the first image signal and a second target image according to the second image signal; an image selection module, configured to obtain an image to be analyzed from the first target image and the second target image; and an image analysis module, configured to perform intelligent analysis on the image to be analyzed to obtain an intelligent analysis result corresponding to the image to be analyzed.
- An electronic device, characterized by comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with each other through the communication bus; the memory is used to store a computer program; and the processor is used to implement the method steps of any one of claims 23-32 when executing the program stored in the memory.
- A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the method steps of any one of claims 23-32 are implemented.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811517428.8A CN110493506B (zh) | 2018-12-12 | 2018-12-12 | An image processing method and system
CN201811517428.8 | 2018-12-12 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020119504A1 true WO2020119504A1 (zh) | 2020-06-18 |
Family
ID=68545688
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/122437 WO2020119504A1 (zh) | 2018-12-12 | 2019-12-02 | An image processing method and system
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110493506B (zh) |
WO (1) | WO2020119504A1 (zh) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113965671A (zh) * | 2021-02-04 | 2022-01-21 | 福建汇川物联网技术科技股份有限公司 | Fill light method and apparatus for distance measurement, electronic device, and storage medium |
CN114745509A (zh) * | 2022-04-08 | 2022-07-12 | 深圳鹏行智能研究有限公司 | Image acquisition method and device, legged robot, and storage medium |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110493506B (zh) * | 2018-12-12 | 2021-03-02 | 杭州海康威视数字技术股份有限公司 | An image processing method and system |
CN111064898B (zh) * | 2019-12-02 | 2021-07-16 | 联想(北京)有限公司 | Image capturing method and apparatus, device, and storage medium |
CN112926367B (zh) * | 2019-12-06 | 2024-06-21 | 杭州海康威视数字技术股份有限公司 | Device and method for living body detection |
CN113129241B (zh) * | 2019-12-31 | 2023-02-07 | RealMe重庆移动通信有限公司 | Image processing method and apparatus, computer-readable medium, and electronic device |
CN115297268B (zh) * | 2020-01-22 | 2024-01-05 | 杭州海康威视数字技术股份有限公司 | Imaging system and image processing method |
CN111556225B (zh) * | 2020-05-20 | 2022-11-22 | 杭州海康威视数字技术股份有限公司 | Image acquisition apparatus and image acquisition control method |
CN111935415B (zh) * | 2020-08-18 | 2022-02-08 | 浙江大华技术股份有限公司 | Brightness adjustment method and apparatus, storage medium, and electronic apparatus |
CN113301264B (zh) * | 2021-07-26 | 2021-11-23 | 北京博清科技有限公司 | Image brightness adjustment method and apparatus, electronic device, and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103971351A (zh) * | 2013-02-04 | 2014-08-06 | 三星泰科威株式会社 | Image fusion method and device using multispectral filter array sensor |
CN104134352A (zh) * | 2014-08-15 | 2014-11-05 | 青岛比特信息技术有限公司 | Video vehicle feature detection system based on combined long and short exposure, and detection method thereof |
US20160057367A1 (en) * | 2014-08-25 | 2016-02-25 | Hyundai Motor Company | Method for extracting rgb and nir using rgbw sensor |
CN107343132A (zh) * | 2017-08-28 | 2017-11-10 | 中控智慧科技股份有限公司 | Palm recognition device and method based on near-infrared LED fill light |
CN107920188A (zh) * | 2016-10-08 | 2018-04-17 | 杭州海康威视数字技术股份有限公司 | Lens and camera |
CN108419061A (zh) * | 2017-02-10 | 2018-08-17 | 杭州海康威视数字技术股份有限公司 | Multispectral-based image fusion device and method, and image sensor |
CN110493506A (zh) * | 2018-12-12 | 2019-11-22 | 杭州海康威视数字技术股份有限公司 | An image processing method and system |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7978260B2 (en) * | 2003-09-15 | 2011-07-12 | Senshin Capital, Llc | Electronic camera and method with fill flash function |
CN105306832A (zh) * | 2015-09-15 | 2016-02-03 | 北京信路威科技股份有限公司 | Fill light device and method for image acquisition equipment |
CN105657280B (zh) * | 2016-03-01 | 2019-03-08 | Oppo广东移动通信有限公司 | Fast focusing method and apparatus, and mobile terminal |
CN106572310B (zh) * | 2016-11-04 | 2019-12-13 | 浙江宇视科技有限公司 | Fill light intensity control method and camera |
CN106778518B (zh) * | 2016-11-24 | 2021-01-08 | 汉王科技股份有限公司 | Face liveness detection method and apparatus |
- 2018-12-12: CN application CN201811517428.8A filed; granted as CN110493506B (status: Active)
- 2019-12-02: PCT application PCT/CN2019/122437 filed as WO2020119504A1 (status: Application Filing)
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113965671A (zh) * | 2021-02-04 | 2022-01-21 | 福建汇川物联网技术科技股份有限公司 | Fill light method and apparatus for distance measurement, electronic device, and storage medium |
CN114745509A (zh) * | 2022-04-08 | 2022-07-12 | 深圳鹏行智能研究有限公司 | Image acquisition method and device, legged robot, and storage medium |
CN114745509B (zh) * | 2022-04-08 | 2024-06-07 | 深圳鹏行智能研究有限公司 | Image acquisition method and device, legged robot, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN110493506A (zh) | 2019-11-22 |
CN110493506B (zh) | 2021-03-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020119504A1 (zh) | An image processing method and system | |
WO2020119505A1 (zh) | An image processing method and system | |
CN109951646B (zh) | Image fusion method and apparatus, electronic device, and computer-readable storage medium | |
US20240137658A1 (en) | Global tone mapping | |
US9661218B2 (en) | Using captured high and low resolution images | |
CN109712102B (zh) | Image fusion method and apparatus, and image acquisition device | |
US8179445B2 (en) | Providing improved high resolution image | |
EP3849170B1 (en) | Image processing method, electronic device, and computer-readable storage medium | |
EP3038356B1 (en) | Exposing pixel groups in producing digital images | |
CN110493531B (zh) | An image processing method and system | |
WO2017152402A1 (zh) | Image processing method and apparatus for terminal, and terminal | |
CN112118388B (zh) | Image processing method and apparatus, computer device, and storage medium | |
JP2002204389A (ja) | Exposure control method | |
EP3270587A1 (en) | Image processing device, image processing method, and program | |
CN103546730A (zh) | Multi-camera-based image sensitivity enhancement method | |
WO2019104047A1 (en) | Global tone mapping | |
CN111970432A (zh) | Image processing method and image processing apparatus | |
US20200228770A1 (en) | Lens rolloff assisted auto white balance | |
EP3270586A1 (en) | Image processing device, image processing method, and program | |
US20200228769A1 (en) | Lens rolloff assisted auto white balance | |
JP2012008845A (ja) | Image processing apparatus | |
US10937230B2 (en) | Image processing | |
JP6492452B2 (ja) | Control system, imaging apparatus, control method, and program | |
KR20210107955A (ko) | Color stain analysis method and electronic device using the method | |
Leznik et al. | Optimization of demanding scenarios in CMS and image quality criteria |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19896120; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 19896120; Country of ref document: EP; Kind code of ref document: A1 |
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03.08.2021) |