CN116758520A - Visual monitoring method, device, system, vehicle and readable storage medium - Google Patents


Info

Publication number
CN116758520A
Authority
CN
China
Prior art keywords
image
vehicle
target
monitoring system
infrared
Prior art date
Legal status
Pending
Application number
CN202310657941.1A
Other languages
Chinese (zh)
Inventor
田宇
唐庆龙
Current Assignee
Chongqing Changan Automobile Co Ltd
Original Assignee
Chongqing Changan Automobile Co Ltd
Priority date
Filing date
Publication date
Application filed by Chongqing Changan Automobile Co Ltd filed Critical Chongqing Changan Automobile Co Ltd
Priority to CN202310657941.1A
Publication of CN116758520A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources

Abstract

The application discloses a visual monitoring method, device, system, vehicle and readable storage medium, relating to the technical field of vehicles, with the aim of ensuring the monitoring effect of a visual monitoring system under different brightness levels of the vehicle interior environment. The method comprises: acquiring an environmental parameter of the vehicle interior environment, the environmental parameter indicating the brightness level of the vehicle interior environment; and determining, according to the environmental parameter, a target image to be sent to a vehicle monitoring system, the vehicle monitoring system being used to monitor the behavior of persons in the vehicle. The target image is one of a target infrared image, a target color image and a fused image, where the target infrared image and the target color image are obtained by image acquisition of the vehicle interior environment, and the fused image is obtained by fusing the target infrared image and the target color image.

Description

Visual monitoring method, device, system, vehicle and readable storage medium
Technical Field
The application relates to the technical field of vehicles, in particular to a visual monitoring method, a visual monitoring device, a visual monitoring system, a vehicle and a readable storage medium.
Background
With the increasing intelligence of vehicles, visual monitoring systems are being applied in automobiles more and more widely. A visual monitoring system can monitor whether the driver is fatigued or distracted and remind the driver in time when such a condition is detected, thereby improving driving safety. It can also monitor the driver's interaction gestures and execute the function corresponding to a target interaction gesture when that gesture is detected.
However, when the brightness of the vehicle interior environment is poor, a color image captured by an ordinary RGB (red, green, blue) camera renders detail poorly. An infrared image captured by an infrared camera presents image detail well at any brightness level, but because infrared light disturbs white-balance calibration, the infrared image retains only brightness information and discards color information; that is, the infrared camera can only output black-and-white grayscale images. How to ensure the monitoring effect of the visual monitoring system under different brightness levels of the vehicle interior environment is therefore a technical problem to be solved urgently.
Disclosure of Invention
One of the purposes of the present application is to provide a visual monitoring method, device, system, vehicle and readable storage medium, so as to ensure the monitoring effect of the visual monitoring system under different brightness levels of the vehicle interior environment.
According to a first aspect of the present application, there is provided a visual monitoring method comprising: acquiring an environmental parameter of the vehicle interior environment, the environmental parameter indicating the brightness level of the vehicle interior environment; and determining, according to the environmental parameter, a target image to be sent to a vehicle monitoring system. The target image is one of a target infrared image, a target color image and a fused image, where the target infrared image and the target color image are obtained by image acquisition of the vehicle interior environment, and the fused image is obtained by fusing the target infrared image and the target color image.
With this technical means, the target image to be sent to the vehicle monitoring system is determined according to the environmental parameter, and the vehicle monitoring system monitors the behavior of persons in the vehicle. The target image is one of a target infrared image, a target color image and a fused image, where the target infrared image and the target color image are obtained by image acquisition of the vehicle interior environment and the fused image is obtained by fusing the two. Because the environmental parameter indicates the brightness level of the vehicle interior, the color image can clearly show the driver's behavior when the brightness is good, the infrared image can clearly show the driver's behavior as a grayscale image when the brightness is extremely poor, and the fused image captures the driver's behavior clearly while preserving color detail. The target image can therefore clearly show the driver's behavior at different brightness levels, which ensures the monitoring effect of the visual monitoring system and improves driving safety.
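The fused image mentioned above combines the brightness information of the infrared image with the color information of the color image. The application does not prescribe a particular fusion algorithm in this section; the following is a minimal sketch of one common luminance-replacement approach, assuming OpenCV-style 8-bit images of equal resolution (the function name, blend ratio and color-space choice are illustrative assumptions):

```python
import cv2
import numpy as np

def fuse_images(ir_gray: np.ndarray, color_bgr: np.ndarray) -> np.ndarray:
    """Illustrative luminance-replacement fusion (not the patented algorithm).

    ir_gray:   8-bit single-channel target infrared image.
    color_bgr: 8-bit 3-channel target color image of the same resolution.
    Returns an 8-bit BGR image that keeps the color image's chrominance while
    taking most of its luminance from the infrared image.
    """
    ycrcb = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2YCrCb)
    # Blend the two luminance channels so some visible-light detail survives;
    # the 0.7/0.3 split is an arbitrary illustrative choice.
    fused_y = cv2.addWeighted(ir_gray, 0.7, ycrcb[:, :, 0], 0.3, 0)
    ycrcb[:, :, 0] = fused_y
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
```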
Further, the vehicle monitoring system includes an occupant monitoring system (OMS), and the environmental parameter includes a brightness value. Determining, according to the environmental parameter, the target image to be sent to the vehicle monitoring system includes: determining the target infrared image as the target image when the brightness value is less than or equal to a first threshold; determining the fused image as the target image when the brightness value is greater than the first threshold and less than or equal to a second threshold; and determining the target color image as the target image when the brightness value is greater than the second threshold.
With this technical means, when the brightness value is less than or equal to the first threshold, the brightness of the vehicle interior image is extremely low and the target infrared image is determined as the target image; since an infrared image retains high definition even in an extremely dark scene, the occupant monitoring system OMS can accurately determine occupant behavior from the high-definition image. When the brightness value is greater than the first threshold and less than or equal to the second threshold, the brightness of the interior image captured by the in-vehicle camera is moderate and the fused image is determined as the target image; because the fused image combines the brightness information of the infrared image with the color information of the color image, its brightness, definition and color fidelity are high, so the OMS can accurately determine occupant behavior. When the brightness value is greater than the second threshold, the brightness of the interior image is very high and the target color image is determined as the target image, which ensures image definition and color fidelity while avoiding the face overexposure that an infrared image could cause, so the OMS can again accurately determine occupant behavior. In this way, the monitoring effect of the visual monitoring system on vehicle occupants is ensured while the vehicle interior environment is at different brightness levels.
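A minimal sketch of this threshold logic is given below, using the example threshold values of 25 and 50 quoted later in the detailed description; the function and constant names are illustrative assumptions, not part of the application:

```python
FIRST_THRESHOLD = 25   # example value given later for the first threshold
SECOND_THRESHOLD = 50  # example value given later for the second threshold

def select_oms_image(brightness: float, ir_img, color_img, fused_img):
    """Pick the image to send to the occupant monitoring system (OMS)."""
    if brightness <= FIRST_THRESHOLD:      # extremely dark cabin
        return ir_img
    if brightness <= SECOND_THRESHOLD:     # moderately lit cabin
        return fused_img
    return color_img                       # well-lit cabin
```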
Further, the vehicle monitoring system includes a driver monitoring system (DMS), and the environmental parameters include a brightness value and an interference parameter, the interference parameter indicating whether the driver of the vehicle is wearing sunglasses. Determining the target image to be sent to the vehicle monitoring system according to the environmental parameters includes: when it is determined that the driver is not wearing sunglasses, determining the target infrared image as the target image if the brightness value is less than or equal to the first threshold; when it is determined that the driver is not wearing sunglasses, determining the fused image as the target image if the brightness value is greater than the first threshold and less than or equal to the second threshold; and when it is determined that the driver is not wearing sunglasses, determining the target color image as the target image if the brightness value is greater than the second threshold.
With this technical means, because visible light cannot pass through sunglasses, the target color image cannot show whether a driver wearing sunglasses has closed their eyes. By adding the interference parameter, the target image to be sent to the vehicle monitoring system is determined on the basis of the brightness value only after it has been confirmed that the driver is not wearing sunglasses, which prevents the driver monitoring system DMS from misjudging the driver's behavior when the driver is wearing sunglasses and improves driving safety.
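Extending the previous sketch with the interference parameter gives the DMS selection below. The branch taken when sunglasses are detected is an assumption; it falls back to the infrared image, consistent with the later remark that the infrared image still allows in-vehicle behavior to be monitored when the driver wears sunglasses:

```python
def select_dms_image(brightness: float, wearing_sunglasses: bool,
                     ir_img, color_img, fused_img):
    """Pick the image to send to the driver monitoring system (DMS)."""
    if wearing_sunglasses:
        # Assumption: visible light does not pass through sunglasses, so fall
        # back to the infrared image rather than the color or fused image.
        return ir_img
    return select_oms_image(brightness, ir_img, color_img, fused_img)
```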
Further, the method further comprises: receiving an initial infrared image and an initial color image sent by a camera, the camera comprising an imaging unit, a beam-splitting prism, a visible light image sensor and an infrared light image sensor, where the imaging unit directs light from the vehicle interior environment, through the beam-splitting prism, to the visible light image sensor and the infrared light image sensor respectively, the infrared light image sensor images the light it receives into the initial infrared image, and the visible light image sensor images the light it receives into the initial color image; performing format conversion on the initial infrared image to obtain the target infrared image; and performing format conversion on the initial color image to obtain the target color image.
With this technical means, the initial infrared image and initial color image captured by the camera can be converted into a format that the vehicle monitoring system can recognize. This avoids the situation in which the vehicle monitoring system cannot directly recognize the images captured by the camera and therefore cannot monitor the behavior of persons in the vehicle, and it improves the reliability of in-vehicle behavior monitoring.
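The sensor output formats and the formats expected by the monitoring system are not specified here. Purely for illustration, the sketch below assumes a 10-bit single-channel infrared frame and an NV12 color frame on the input side, and 8-bit grayscale and BGR images on the output side:

```python
import cv2
import numpy as np

def convert_ir(raw_ir_10bit: np.ndarray) -> np.ndarray:
    """Scale an assumed 10-bit infrared frame to 8-bit grayscale."""
    return (raw_ir_10bit >> 2).astype(np.uint8)

def convert_color(raw_nv12: np.ndarray) -> np.ndarray:
    """Convert an assumed NV12 frame to an 8-bit BGR image."""
    return cv2.cvtColor(raw_nv12, cv2.COLOR_YUV2BGR_NV12)
```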
Further, the camera further comprises an infrared light supplementing lamp, and the method further comprises: controlling the infrared light supplementing lamp to be on when the brightness value is less than the second threshold; and controlling the infrared light supplementing lamp to be off when the brightness value is greater than the second threshold.
With this technical means, infrared light supplementation is performed when the brightness of the vehicle interior image is poor, which raises the brightness of the image and thus improves the monitoring effect of the vehicle monitoring system; and because infrared light is invisible, the light source does not disturb the driver, which improves driving safety. When the brightness of the interior image is good, the infrared light supplementation is turned off, which avoids the face overexposure that keeping it on would cause and further improves the monitoring effect of the vehicle monitoring system.
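A sketch of this fill-light control is shown below, reusing the SECOND_THRESHOLD constant from the earlier sketch; the fill_light object stands in for a hypothetical actuator interface, and the behavior at exactly the second threshold is left unspecified, as in the text:

```python
def control_ir_fill_light(brightness: float, fill_light) -> None:
    """Turn the infrared supplementing lamp on below the second threshold
    and off above it."""
    if brightness < SECOND_THRESHOLD:
        fill_light.turn_on()    # hypothetical actuator call
    elif brightness > SECOND_THRESHOLD:
        fill_light.turn_off()   # hypothetical actuator call
```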
Further, the environmental parameter includes a brightness value, which indicates the brightness level of the vehicle interior image captured by a camera in the vehicle. With this technical means, the brightness of the image captured by the in-vehicle camera is used in place of the brightness of the vehicle interior itself, so that whatever camera is used, brightness errors between images captured by cameras of different performance are eliminated, which further ensures the accuracy of the vehicle monitoring system's visual monitoring of the vehicle interior.
Further, obtaining the brightness value includes: weighting the brightness of each pixel in a first image to obtain the brightness value, where the first image is either the target infrared image or the target color image, the weight of the brightness of a first pixel in the first image is inversely related to a target distance, the first pixel is any pixel in the first image, and the target distance is the distance between the first pixel and the center pixel of the first image.
With this technical means, weighting the brightness of each pixel in the first image, that is, combining the brightness of all of its pixels, allows the brightness of the first image to be reflected comprehensively. In addition, since persons in the vehicle are usually located in the central area of the image, and the weight of a pixel's brightness is inversely related to its distance from the center pixel, i.e., the weight of pixels in the central area is increased, the brightness value of the first image correlates more strongly with the effect of monitoring the behavior of persons in the vehicle.
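Only the inverse relation between weight and distance is specified; the concrete weighting function below is an assumption chosen for illustration, and the result is scaled to the 0-100 range used in Table 1 later in the description:

```python
import numpy as np

def weighted_brightness(gray: np.ndarray) -> float:
    """Distance-weighted mean brightness of an 8-bit single-channel image,
    where a pixel's weight falls off with its distance from the image center."""
    h, w = gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    dist = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2)
    weights = 1.0 / (1.0 + dist)               # inversely related to distance
    mean = np.sum(weights * gray) / np.sum(weights)
    return float(100.0 * mean / 255.0)         # scale to the 0-100 range
```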
Further, the method further comprises: controlling the head-unit screen of the vehicle to display a setting interface of the vehicle monitoring system, the setting interface comprising a first setting option and a second setting option, where the first setting option controls determining the target color image or the fused image as the target image, and the second setting option controls determining the target infrared image as the target image; and determining the target image to be sent to the vehicle monitoring system in response to a control instruction from a person inside the vehicle.
With this technical means, the flexibility of vehicle control can be improved.
In a second aspect, there is provided a visual monitoring apparatus comprising an acquisition unit and a determining unit. The acquisition unit is configured to acquire an environmental parameter of the vehicle interior environment, the environmental parameter indicating the brightness level of the vehicle interior environment. The determining unit is configured to determine, according to the environmental parameter, a target image to be sent to the vehicle monitoring system. The target image is one of a target infrared image, a target color image and a fused image, where the target infrared image and the target color image are obtained by image acquisition of the vehicle interior environment, and the fused image is obtained by fusing the target infrared image and the target color image.
Further, the vehicle monitoring system includes an occupant monitoring system, and the environmental parameter includes a brightness value. The determining unit is specifically configured to: determine the target infrared image as the target image when the brightness value is less than or equal to the first threshold; determine the fused image as the target image when the brightness value is greater than the first threshold and less than or equal to the second threshold; and determine the target color image as the target image when the brightness value is greater than the second threshold.
Further, the vehicle monitoring system includes a driver monitoring system DMS, and the environmental parameters include a brightness value and an interference parameter, the interference parameter indicating whether the driver of the vehicle is wearing sunglasses. The determining unit is specifically configured to, when it is determined that the driver is not wearing sunglasses: determine the target infrared image as the target image if the brightness value is less than or equal to the first threshold; determine the fused image as the target image if the brightness value is greater than the first threshold and less than or equal to the second threshold; and determine the target color image as the target image if the brightness value is greater than the second threshold.
Further, the apparatus further comprises a processing unit. The acquisition unit is further configured to receive the initial infrared image and the initial color image sent by the camera in the vehicle, the camera comprising an imaging unit, a beam-splitting prism, a visible light image sensor and an infrared light image sensor, where the imaging unit directs light from the vehicle interior environment, through the beam-splitting prism, to the visible light image sensor and the infrared light image sensor respectively, the infrared light image sensor images the light it receives into the initial infrared image, and the visible light image sensor images the light it receives into the initial color image. The processing unit is configured to perform format conversion on the initial infrared image to obtain the target infrared image, and to perform format conversion on the initial color image to obtain the target color image.
Further, the camera further comprises an infrared light supplementing lamp, and the processing unit is further configured to control the infrared light supplementing lamp to be on when the brightness value is less than the second threshold, and to control the infrared light supplementing lamp to be off when the brightness value is greater than the second threshold.
Further, in the apparatus, the environmental parameter includes a brightness value indicating the brightness level of the vehicle interior image captured by a camera inside the vehicle.
Further, the acquisition unit is specifically configured to weight the brightness of each pixel in the first image to obtain the brightness value, where the first image is either the target infrared image or the target color image, the weight of the brightness of a first pixel in the first image is inversely related to the target distance, the first pixel is any pixel in the first image, and the target distance is the distance between the first pixel and the center pixel of the first image.
Further, the processing unit is further configured to control the head-unit screen of the vehicle to display a setting interface of the vehicle monitoring system, the setting interface comprising a first setting option and a second setting option, where the first setting option controls determining the target color image or the fused image as the target image and the second setting option controls determining the target infrared image as the target image; and the determining unit is further configured to determine the target image to be sent to the vehicle monitoring system in response to a control instruction from a person inside the vehicle.
In a third aspect, there is provided a visual monitoring system comprising image processing means for performing the method as in the first aspect or any of the possible designs of the first aspect.
In a fourth aspect, there is provided a visual monitoring apparatus comprising: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the instructions to implement the functions of the first aspect or any possible design of the first aspect.
In a fifth aspect, there is provided a vehicle comprising a visual monitoring system as provided in the third aspect.
In a sixth aspect, a visual monitoring apparatus is provided that can implement the functions performed by the visual monitoring apparatus in the above aspects or possible designs, and the functions may be implemented by hardware. In one possible design, the visual monitoring apparatus may include a processor and a communication interface, and the processor may be configured to support the visual monitoring apparatus in carrying out the functions involved in the first aspect or any possible design of the first aspect.
In yet another possible design, the visual monitoring apparatus may further comprise a memory for storing the computer-executable instructions and data necessary for the visual monitoring apparatus. When the visual monitoring apparatus runs, the processor executes the computer-executable instructions stored in the memory, causing the visual monitoring apparatus to perform the visual monitoring method of the first aspect or any possible design of the first aspect.
In a seventh aspect, a computer-readable storage medium is provided, which may be a readable non-volatile storage medium, storing computer instructions or a program which, when run on a computer, cause the computer to perform the visual monitoring method of the first aspect or any possible design of the first aspect.
In an eighth aspect, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the visual monitoring method of the first aspect or any possible design of the first aspect.
The beneficial effects of the present application are as follows:
(1) The target image to be sent to the vehicle monitoring system, which monitors the behavior of persons in the vehicle, is determined according to the environmental parameter. The target image is one of a target infrared image, a target color image and a fused image, where the target infrared image and the target color image are obtained by image acquisition of the vehicle interior environment and the fused image is obtained by fusing the two. The environmental parameter indicates the brightness level of the vehicle interior: the color image clearly shows the driver's behavior when the brightness is good, the infrared image clearly shows the driver's behavior as a grayscale image when the brightness is extremely poor, and the fused image captures the driver's behavior clearly while preserving color detail. The target image can therefore clearly show the driver's behavior at different brightness levels, which ensures the monitoring effect of the visual monitoring system and improves driving safety.
(2) When the brightness value is less than or equal to the first threshold, the brightness of the interior image captured by the in-vehicle camera is extremely low and the target infrared image is determined as the target image; since an infrared image retains high definition even in an extremely dark scene, the occupant monitoring system OMS can accurately determine occupant behavior from the high-definition image. When the brightness value is greater than the first threshold and less than or equal to the second threshold, the brightness of the interior image captured by the in-vehicle camera is moderate and the fused image is determined as the target image; because the fused image combines the brightness information of the infrared image with the color information of the color image, its brightness, definition and color fidelity are high, so the OMS can accurately determine occupant behavior. When the brightness value is greater than the second threshold, the brightness of the interior image is very high and the target color image is determined as the target image, which ensures image definition and color fidelity while avoiding the face overexposure that an infrared image could cause, so the OMS can again accurately determine occupant behavior. In this way, the monitoring effect of the visual monitoring system on vehicle occupants is ensured while the vehicle interior environment is at different brightness levels.
(3) Visible light cannot pass through sunglasses, so the target color image cannot show whether a driver wearing sunglasses has closed their eyes. By adding the interference parameter, the target image to be sent to the vehicle monitoring system is determined on the basis of the brightness value only after it has been confirmed that the driver is not wearing sunglasses, which prevents the driver monitoring system DMS from misjudging the driver's behavior when the driver is wearing sunglasses and improves driving safety.
(4) The initial infrared image and initial color image captured by the camera can be converted into a format that the vehicle monitoring system can recognize, which avoids the situation in which the vehicle monitoring system cannot directly recognize the images captured by the camera and therefore cannot monitor the behavior of persons in the vehicle, and improves the reliability of in-vehicle behavior monitoring. At the same time, because only one camera, with a light-splitting function, is arranged in the vehicle, the initial infrared image and the initial color image can be captured at the same time; the camera is therefore suitable for different vehicle monitoring systems and can greatly save cost.
(5) Infrared light supplementation can be performed when the brightness of the vehicle interior image is poor, which raises the brightness of the image and improves the monitoring effect of the vehicle monitoring system; and because infrared light is invisible, the light source does not disturb the driver, which improves driving safety. When the brightness of the interior image is good, the infrared light supplementation is turned off, which avoids the face overexposure that keeping it on would cause and further improves the monitoring effect of the vehicle monitoring system.
(6) The brightness of the image captured by the in-vehicle camera is used in place of the brightness of the vehicle interior itself, so that whatever camera is used, brightness errors between images captured by cameras of different performance are eliminated, which further ensures the accuracy of the vehicle monitoring system's visual monitoring of the vehicle interior.
(7) Weighting the brightness of each pixel in the first image, that is, combining the brightness of all of its pixels, allows the brightness of the first image to be reflected comprehensively. In addition, since persons in the vehicle are usually located in the central area of the image, and the weight of a pixel's brightness is inversely related to its distance from the center pixel, i.e., the weight of pixels in the central area is increased, the brightness value of the first image correlates more strongly with the effect of monitoring the behavior of persons in the vehicle.
(8) The flexibility of vehicle control can be improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and, together with the description, serve to explain the principles of the application; they do not constitute an undue limitation on the application.
FIG. 1 is a schematic structural diagram of a visual monitoring system according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a visual monitoring system according to an embodiment of the present application;
FIG. 3 is a schematic diagram of another visual monitoring system according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a visual monitoring apparatus according to an embodiment of the present application;
FIG. 5 is a flow chart of a visual monitoring method according to an embodiment of the present application;
FIG. 6 is a flow chart of another visual monitoring method according to an embodiment of the present application;
FIG. 7 is a flow chart of another visual monitoring method according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of another visual monitoring apparatus according to an embodiment of the present application.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with aspects of embodiments of the application as detailed in the accompanying claims.
It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, and/or components.
It should be noted that the illustrations provided in the following embodiments merely illustrate the basic concept of the present application. The drawings show only the components related to the present application and are not drawn according to the number, shape and size of the components in an actual implementation; in an actual implementation the form, number and proportion of the components may be changed arbitrarily, and their layout may be more complicated.
With the increasing intelligence of vehicles, visual monitoring systems are being applied in automobiles more and more widely. A visual monitoring system can monitor whether the driver is fatigued or distracted and remind the driver in time when such a condition is detected, thereby improving driving safety. It can also monitor the driver's interaction gestures and execute the function corresponding to a target interaction gesture when that gesture is detected.
In one example, the visual monitoring system may include a driver monitoring system DMS camera and an occupant monitoring system OMS camera. The DMS camera is used to monitor the driver and may include, for example, an infrared camera and an infrared light supplementing lamp mounted above the steering wheel or on the left A-pillar of the vehicle. The OMS camera is used to monitor the passengers in the cabin and may include, for example, an RGB color camera mounted on the roof above the interior rearview mirror.
However, since the RGB color camera cannot be given brightness compensation by the infrared light supplementing lamp, the color image it captures renders detail poorly. An ordinary LED or soft fill light would interfere with the driver's line of sight, so the color image captured by the RGB color camera has no brightness compensation, and picture detail can hardly be distinguished when the brightness of the vehicle interior is poor.
An infrared image captured by the infrared camera presents image detail well at any brightness level, but because infrared light disturbs white-balance calibration, the infrared image retains only brightness information and discards color information; that is, the infrared camera can only output black-and-white grayscale images.
In yet another example, as shown in fig. 1, a visual monitoring system may include: an in-vehicle monitoring camera module and a driver state monitoring electronic control unit (electronic control unit, ECU).
And the in-vehicle monitoring camera module is used for shooting and outputting shooting images. For example, the in-vehicle monitoring camera module includes: a lens, an image sensor, and an image processor; light passes through the lens and enters the image sensor, and the image sensor is used for sensing the illumination intensity in the vehicle, converting the light signal into an unprocessed image and transmitting the unprocessed image to the image processor in an electric signal mode; the image processor outputs a color photographed image or a black-and-white photographed image by adjusting the unprocessed image according to the illumination intensity sensed by the image sensor and the command signal of the driver state monitoring ECU.
The driver state monitoring ECU is used for monitoring and identifying whether the driver and the passenger in the vehicle have shooting requirements or not and sending instruction signals to the in-vehicle monitoring camera module.
In addition, when driver monitoring and passenger monitoring are realized through a camera module composed of two cameras, the DMS camera has to be placed on top of the steering column or on the A-pillar, which compromises the body structure and affects vehicle safety to a certain extent. Therefore, how to ensure the monitoring effect of the visual monitoring system under different brightness levels of the vehicle interior environment is a technical problem to be solved urgently.
In view of this, an embodiment of the present application provides a visual monitoring method, including: acquiring an environmental parameter of the vehicle interior environment, where the environmental parameter includes a brightness value indicating the brightness level of the vehicle interior image captured by a camera in the vehicle; and determining, according to the environmental parameter, a target image to be sent to a vehicle monitoring system, the vehicle monitoring system being used to monitor the behavior of persons in the vehicle. The target image is one of a target infrared image, a target color image and a fused image, where the target infrared image and the target color image are obtained by image acquisition of the vehicle interior environment, and the fused image is obtained by fusing the target infrared image and the target color image.
The method provided by the embodiment of the application is described in detail below with reference to the attached drawings.
It should be noted that, the visual monitoring system described in the embodiment of the present application is for more clearly describing the technical solution of the embodiment of the present application, and does not constitute a limitation to the technical solution provided by the embodiment of the present application, and those skilled in the art can know that, with the evolution of the visual monitoring system and the appearance of other visual monitoring systems, the technical solution provided by the embodiment of the present application is equally applicable to similar technical problems.
The visual monitoring system provided by the embodiment of the application can be applied to vehicles. The vehicle may be any type of vehicle. For example, the vehicle may be a fuel vehicle, a hybrid vehicle, a new energy vehicle, etc., and the embodiment of the present application does not limit the specific technology, the specific number and the specific equipment configuration adopted by the vehicle.
Fig. 2 is a schematic diagram of a visual monitoring system 10 according to an embodiment of the present application, and as shown in fig. 2, the visual monitoring system 10 may include an image processing device 11, a camera 12, and a vehicle monitoring system 13.
The image processing device 11 and the camera 12 are connected to a vehicle monitoring system 13. For example, the image processing device 11, the camera 12 and the vehicle monitoring system 13 may be connected wirelessly or by a wired connection, which is not limited in the embodiment of the present application.
Wherein the image processing means 11 may be adapted to determine an environmental parameter of the vehicle interior environment and to determine a target image based on the environmental parameter of the vehicle interior environment and to send the target image to the vehicle monitoring system 13. For example, the image processing apparatus 11 may be any electronic device having a data processing function.
The camera 12 may be used to capture an image of the vehicle interior environment and transmit the image of the vehicle interior environment to the image processing device 11. The camera 12 may be a camera within a vehicle. For example, an in-cabin monitoring (Interior Monitoring System, IMS) camera or the like. The camera in the vehicle may be provided above the inside rearview mirror, or may be provided above the center console, etc., without limitation.
The vehicle monitoring system 13 may be used to monitor the behavior of personnel within the vehicle based on the target images. For example, the vehicle monitoring system 13 may be a server, a computer, or the like.
Fig. 3 is a schematic diagram of a visual monitoring system 20 according to another embodiment of the present application. As shown in fig. 3, the camera 12 further includes an imaging unit 110, a beam-splitting prism 111 (for example, a cemented beam-splitting prism), an infrared light supplementing lamp 112, a visible light image sensor 113, and an infrared light image sensor 114. The vehicle monitoring system 13 further includes a driver monitoring system 131 and an occupant monitoring system 132.
The imaging unit 110 is configured to focus an image of the vehicle interior environment onto the beam-splitting prism 111. The beam-splitting prism 111 divides the captured vehicle interior image into a visible light image and an infrared light image, projecting the visible light image toward the visible light image sensor 113 and the infrared light image toward the infrared light image sensor 114. The infrared light supplementing lamp 112 provides infrared illumination for the scene in the vehicle. The driver monitoring system 131 determines the driver's behavior from the visible light images and/or the infrared light images, and the occupant monitoring system 132 determines occupant behavior from the visible light images and/or the infrared light images.
The beam-splitting prism can fully transmit infrared light and reflect more than a preset proportion of visible light. The preset proportion can be set as required; for example, it may be 90%.
It should be noted that fig. 2 and fig. 3 are only exemplary frame diagrams, and names of the respective modules included in fig. 2 and fig. 3 are not limited, and other modules may be included in addition to the functional modules shown in fig. 2, which is not limited by the embodiment of the present application.
In particular, the image processing apparatus in fig. 2 may employ the constituent structure shown in fig. 4 or include the components shown in fig. 4.
Fig. 4 is a schematic structural diagram of a visual monitoring apparatus 200 according to an embodiment of the present application, where the visual monitoring apparatus 200 may be an image processing apparatus in a visual monitoring system, or the visual monitoring apparatus 200 may be a chip or a system on a chip in the image processing apparatus. As shown in fig. 4, the visual monitoring apparatus 200 includes a processor 201, a communication interface 202, and a communication line 203.
Further, the visual monitoring apparatus 200 may further comprise a memory 204. The processor 201, the memory 204, and the communication interface 202 may be connected by a communication line 203.
The processor 201 may be a CPU, a general-purpose processor, a network processor (NP), a digital signal processor (DSP), a microprocessor, a microcontroller, a programmable logic device (PLD), or any combination thereof. The processor 201 may also be another device with processing functions, such as a circuit, a device, or a software module, without limitation.
Communication interface 202 is used to communicate with other devices or other communication networks. The communication interface 202 may be a module, a circuit, a communication interface, or any device capable of enabling communication.
Communication line 203 for communicating information between the components included in visual monitoring apparatus 200.
Memory 204 for storing instructions executable by processor 201. Wherein the instructions may be computer programs.
The memory 204 may be, but is not limited to, a read-only memory (ROM) or other type of static storage device capable of storing static information and/or instructions, a random access memory (random access memory, RAM) or other type of dynamic storage device capable of storing information and/or instructions, an EEPROM, a CD-ROM (compact disc read-only memory) or other optical disk storage, an optical disk storage (including compact disk, laser disk, optical disk, digital versatile disk, blu-ray disk, etc.), a magnetic disk storage medium or other magnetic storage device, etc.
It should be noted that the memory 204 may exist separately from the processor 201 or may be integrated with the processor 201. Memory 204 may be used to store instructions or program code or some data, etc. The memory 204 may be located inside the visual monitoring apparatus 200 or outside the visual monitoring apparatus 200, without limitation. The processor 201 is configured to execute instructions stored in the memory 204 to implement the visual monitoring method according to the following embodiment of the present application.
In one example, processor 201 may include one or more CPUs, such as CPU0 and CPU1 in fig. 4.
As an alternative implementation, the visual monitoring apparatus 200 comprises a plurality of processors, e.g. in addition to the processor 201 in fig. 4, a processor 205 may be included.
It should be noted that the constituent structures shown in fig. 4 do not constitute limitations of the respective apparatuses in fig. 2, and that the respective image processing apparatuses in fig. 2 may include more or less components than those shown in fig. 4, or may combine some components, or may be arranged differently, in addition to those shown in fig. 4.
In the embodiment of the application, the chip system can be composed of chips, and can also comprise chips and other discrete devices.
Further, the actions, terms, and the like involved in the embodiments of the present application may be referred to between the embodiments, without limitation. The message names exchanged between the devices or the parameter names in the messages in the embodiments of the present application are merely examples; other names may be used in specific implementations, without limitation.
The following describes a visual monitoring method according to an embodiment of the present application with reference to the visual monitoring system shown in fig. 2.
The embodiment of the present application is described by taking an image processing apparatus as an example, and as shown in fig. 5, the method includes the following steps S301 to S302:
S301, the image processing device acquires environmental parameters of the vehicle interior environment.
The environmental parameter indicates the brightness level of the vehicle interior environment. For example, the brightness level of the vehicle interior environment image may be: good (also called a good lighting environment), moderate (also called a general lighting environment, such as cloudy or overcast weather), poor (also called a poor lighting environment, such as a night scene with city lighting), or very poor (also called a very poor lighting environment, such as a rural night scene without road lighting).
As a possible implementation, the vehicle is provided with a photosensitive element, and the image processing apparatus may acquire a luminance value of the vehicle interior environment based on the photosensitive element and determine the luminance value as the environment parameter.
As another possible implementation, the image processing apparatus may acquire a vehicle interior environment picture from a camera in the vehicle, and determine an environmental parameter of the vehicle interior environment from a brightness value of the vehicle interior environment picture.
It can be understood that, by determining the brightness value of the vehicle interior picture as the environmental parameter of the vehicle interior environment, the brightness of the image captured by the in-vehicle camera is used in place of the brightness of the vehicle interior itself, so that whatever camera is used, brightness errors between images captured by cameras of different performance are eliminated, which further ensures the accuracy of the vehicle monitoring system's visual monitoring of the vehicle interior.
In connection with the visual monitoring system shown in fig. 2, the image processing apparatus 11 may acquire a vehicle interior environment picture from the camera 12 and determine an environmental parameter of the vehicle interior environment from a brightness value of the vehicle interior environment picture.
As yet another possible implementation, the image processing device may acquire a color picture of the vehicle interior environment from an image sensor within the vehicle, and/or an infrared picture of the vehicle interior environment, and determine an environmental parameter of the vehicle interior environment from the color picture of the vehicle interior environment, and/or the infrared picture of the vehicle interior environment.
In connection with the visual monitoring system shown in fig. 3, the image processing device 11 may acquire a color picture of the vehicle interior environment from the visible light image sensor 113 and/or an infrared picture of the vehicle interior environment from the infrared light image sensor 114, and determine environmental parameters of the vehicle interior environment from the color picture of the vehicle interior environment and/or the infrared picture of the vehicle interior environment.
Note that, the specific description of acquiring the environmental parameters of the vehicle interior environment may be referred to the description of the subsequent sections, without limitation.
In practical applications, the correspondence between the brightness value and the brightness level of the vehicle interior environment image may be as shown in table 1 below.
Table 1  Correspondence between brightness values and brightness levels of the vehicle interior environment image
Brightness value    Brightness level of the vehicle interior environment image
(75, 100]           Good
(50, 75]            Moderate
(25, 50]            Poor
[0, 25]             Very poor
It should be noted that the data in Table 1 are only exemplary. In the embodiment of the present application, the correspondence between the brightness value and the brightness level of the vehicle interior environment image may also be set in other ways, without limitation.
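For reference, a direct transcription of the exemplary ranges of Table 1 into code (the function name is illustrative and the input is assumed to lie in the 0-100 range):

```python
def brightness_level(value: float) -> str:
    """Map an exemplary 0-100 brightness value to the levels of Table 1."""
    if value <= 25:
        return "very poor"
    if value <= 50:
        return "poor"
    if value <= 75:
        return "moderate"
    return "good"
```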
S302, the image processing device determines a target image for sending to the vehicle monitoring system according to the environment parameters.
The vehicle monitoring system is used for monitoring personnel behaviors in the vehicle; the target image is one of a target infrared image, a target color image and a fusion image, wherein the target infrared image and the target color image are obtained by image acquisition of the internal environment of the vehicle, and the fusion image is obtained by fusion of the target infrared image and the target color image.
Personnel behaviors may include fatigue driving, distraction driving, interactive gesture actions, and the like.
As one possible implementation, the image processing apparatus may select one image from among the target infrared image, the target color image, and the fusion image, as the target image, according to a brightness value of the vehicle interior environment, and transmit the target image to the vehicle monitoring system.
As yet another possible implementation, the environment parameter further includes an interference parameter, and the image processing apparatus may select one image from among the target infrared image, the target color image, and the fusion image as the target image according to the interference parameter and a brightness value of the vehicle interior environment, and transmit the target image to the vehicle monitoring system.
It should be noted that the interference parameter indicates whether the driver of the vehicle is wearing sunglasses.
In practical applications, the image processing apparatus may also determine whether the driver of the vehicle is wearing sunglasses through a head detection model. For example, the head detection model may be a YOLOv5-s model.
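The text only names YOLOv5-s as an example head detection model. The sketch below assumes a custom-trained YOLOv5-s checkpoint that includes a 'sunglasses' class, loaded through the public torch.hub interface of the ultralytics/yolov5 repository; the checkpoint path and class name are hypothetical:

```python
import torch

# Hypothetical custom checkpoint trained with a "sunglasses" class.
model = torch.hub.load("ultralytics/yolov5", "custom", path="head_sunglasses.pt")

def driver_wears_sunglasses(frame) -> bool:
    """Return True if a 'sunglasses' detection is found in the frame."""
    results = model(frame)                    # run inference on an RGB/BGR frame
    detections = results.pandas().xyxy[0]     # one row per detection
    return bool((detections["name"] == "sunglasses").any())
```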
It can be appreciated that visible light cannot penetrate sunglasses, so a color image cannot show whether a driver wearing sunglasses has closed their eyes. By adding the interference parameter, whether the driver is wearing sunglasses can be taken into account on top of the environmental parameter when determining the target image to be sent to the vehicle monitoring system; because the candidate target images include the target infrared image, the behavior of persons in the vehicle can still be monitored when the driver wears sunglasses, which improves driving safety.
Based on the technical scheme provided by the application, the target image to be sent to the vehicle monitoring system is determined according to the environmental parameter, and the vehicle monitoring system monitors the behavior of persons in the vehicle. The target image is one of a target infrared image, a target color image and a fused image, where the target infrared image and the target color image are obtained by image acquisition of the vehicle interior environment and the fused image is obtained by fusing the two. Because the environmental parameter includes a brightness value indicating the brightness level of the vehicle interior image captured by the in-vehicle camera, the color image can clearly show the driver's behavior when the brightness is good, the infrared image can clearly show the driver's behavior as a grayscale image when the brightness is extremely poor, and the fused image captures the driver's behavior clearly while preserving color detail. The target image can therefore clearly show the driver's behavior at different brightness levels, which ensures the monitoring effect of the visual monitoring system and improves driving safety.
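Putting the preceding pieces together, one processing cycle of the image processing apparatus might look as follows. This is a composite sketch reusing the illustrative helpers defined earlier (convert_ir, convert_color, fuse_images, weighted_brightness, control_ir_fill_light, select_oms_image); measuring brightness on the infrared frame and the send interface are assumptions:

```python
def process_frame(raw_ir, raw_color, monitoring_system, fill_light):
    """One illustrative cycle: convert, measure brightness, select the target
    image and forward it to the vehicle monitoring system."""
    ir_img = convert_ir(raw_ir)               # target infrared image
    color_img = convert_color(raw_color)      # target color image
    fused_img = fuse_images(ir_img, color_img)

    brightness = weighted_brightness(ir_img)  # assumption: measured on the IR frame
    control_ir_fill_light(brightness, fill_light)

    target = select_oms_image(brightness, ir_img, color_img, fused_img)
    monitoring_system.send(target)            # hypothetical transport interface
```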
In some embodiments, as shown in fig. 6, the vehicle detection system includes an occupant monitoring system OMS, and in order to determine a target image for transmission to the vehicle monitoring system, the visual monitoring method of the present application may further include the following S401-S403.
S401, the image processing apparatus determines the target infrared image as the target image in a case where the brightness value is less than or equal to the first threshold value.
Wherein the first threshold may be set as desired. For example, 25.
As a possible implementation, if the image in the current occupant monitoring system OMS is the target color image or the fused image, the image processing device may switch the target color image or the fused image to the target infrared image in a case where the brightness value is less than or equal to the first threshold value. If the image in the current occupant monitoring system OMS is the target infrared image, the image processing apparatus may maintain the image in the occupant monitoring system OMS as the target infrared image.
For example, the image processing device may switch the target color image or the fused image to the target infrared image through an image signal processor (ISP) algorithm.
S402, the image processing device determines the fusion image as a target image when the brightness value is larger than a first threshold value and smaller than or equal to a second threshold value.
Wherein the second threshold may be set as desired. For example, 50.
As a possible implementation manner, if the image in the current occupant monitoring system OMS is the target color image or the target infrared image, the image processing device may switch the target color image or the target infrared image to the fusion image in a case where the brightness value is greater than the first threshold value and less than or equal to the second threshold value. If the current image in the occupant monitoring system OMS is the fused image, the image processing apparatus may maintain the image in the occupant monitoring system OMS as the fused image.
S403, the image processing apparatus determines the target color image as the target image when the brightness value is greater than the second threshold value.
As a possible implementation manner, in a case where the brightness value is greater than the second threshold value, if the image in the current occupant monitoring system OMS is the fusion image or the target infrared image, the image processing device may switch the fusion image or the target infrared image to the target color image. If the current image in the occupant monitoring system OMS is the target color image, the image processing apparatus may maintain the image in the occupant monitoring system OMS as the target color image.
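As a concrete illustration of S401-S403, a minimal sketch of the brightness-based selection is shown below; the threshold values 25 and 50 are taken from the examples above, and the function and parameter names are placeholders rather than anything defined in this application:

```python
# Sketch of S401-S403: choose the image to send to the occupant monitoring
# system OMS from the brightness value. Thresholds follow the examples in the
# text (25 and 50) and may be set as desired.
FIRST_THRESHOLD = 25
SECOND_THRESHOLD = 50

def select_oms_image(brightness, ir_image, color_image, fused_image):
    """Return the target image for the occupant monitoring system."""
    if brightness <= FIRST_THRESHOLD:        # S401: very dark cabin
        return ir_image
    if brightness <= SECOND_THRESHOLD:       # S402: intermediate brightness
        return fused_image
    return color_image                       # S403: bright cabin
```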
In summary, in the case where the vehicle detection system includes the occupant monitoring system OMS, the correspondence between the brightness value and the target image may be as described in table 2 below.
Table 2 correspondence table of brightness values and target images
Brightness value | Target image in OMS
Less than or equal to the first threshold | Target infrared image
Greater than the first threshold and less than or equal to the second threshold | Fusion image
Greater than the second threshold | Target color image
It should be noted that the data of Table 2 are only exemplary. In the embodiment of the application, in the case that the vehicle detection system includes the occupant monitoring system OMS, the correspondence between the brightness value and the target image may also be set in other manners, which is not limited.
According to the above technical means, when the brightness value is less than or equal to the first threshold value, the brightness of the vehicle interior image is extremely low, and the target infrared image is determined as the target image; because the infrared image retains high definition even in scenes with extremely low brightness, the occupant monitoring system OMS can accurately determine the occupant behavior in the vehicle from a high-definition image. When the brightness value is greater than the first threshold value and less than or equal to the second threshold value, the brightness of the vehicle interior image captured by the in-vehicle camera is moderate, and the fusion image is determined as the target image; because the fusion image combines the brightness information of the infrared image with the color information of the color image, it offers high brightness, definition and color fidelity, so that the occupant monitoring system OMS can accurately determine the occupant behavior in the vehicle. When the brightness value is greater than the second threshold value, the brightness of the vehicle interior image is very high, and the target color image is determined as the target image; this preserves the definition and color fidelity of the image and avoids the face overexposure that an infrared image can cause, so that the occupant monitoring system OMS can accurately determine the occupant behavior in the vehicle. In this way, the monitoring effect of the visual monitoring system on the vehicle occupants can be ensured across different brightness levels of the vehicle interior environment.
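The application does not spell out the fusion algorithm itself. One common way to combine the brightness information of the infrared image with the color information of the color image, shown here purely as an assumption and not as the method of this application, is to take the luma channel from the infrared frame and the chroma channels from the color frame in YUV space:

```python
# Assumed fusion scheme (not specified by this application): luma from the
# infrared image, chroma from the color image, recombined in YUV space.
import cv2

def fuse_ir_and_color(ir_gray, color_bgr):
    """ir_gray: single-channel IR frame; color_bgr: 3-channel color frame.
    Both frames are assumed to be registered and of the same size."""
    yuv = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2YUV)
    yuv[:, :, 0] = ir_gray                     # replace luma with IR brightness
    return cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR)
```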
In a possible embodiment, as shown in fig. 7, the vehicle detection system includes a driver monitoring system DMS, and in order to determine the target image for transmission to the vehicle monitoring system, the visual monitoring method of the present application may further include the following S501-S504.
S501, the image processing apparatus determines whether the driver is wearing sunglasses.
As a possible implementation, the image processing device may obtain the interference parameter from the environmental parameters and determine, according to the interference parameter, whether the driver is wearing sunglasses.
The interference parameter is used to indicate whether the driver is wearing sunglasses, and includes a first interference parameter and a second interference parameter. The first interference parameter indicates that the driver is wearing sunglasses, and the second interference parameter indicates that the driver is not wearing sunglasses.
The image processing device determines that the driver is wearing sunglasses when the interference parameter is the first interference parameter, and determines that the driver is not wearing sunglasses when the interference parameter is the second interference parameter.
S502, in a case where it is determined that the driver is not wearing sunglasses, the image processing device determines the target infrared image as the target image if the brightness value is less than or equal to the first threshold value.
As a possible implementation manner, if it is determined that the driver is not wearing sunglasses and the brightness value is less than or equal to the first threshold value, the image processing device may switch the target color image or the fusion image to the target infrared image if the image in the current driver monitoring system DMS is the target color image or the fusion image. If the image in the current driver monitoring system DMS is the target infrared image, the image processing apparatus may maintain the image in the driver monitoring system DMS as the target infrared image.
S503, in a case where it is determined that the driver is not wearing sunglasses, the image processing device determines the fusion image as the target image if the brightness value is greater than the first threshold value and less than or equal to the second threshold value.
As a possible implementation manner, if it is determined that the driver is not wearing sunglasses and the brightness value is greater than the first threshold value and less than or equal to the second threshold value, the image processing device may switch the target color image or the target infrared image to the fusion image if the image in the current driver monitoring system DMS is the target color image or the target infrared image. If the image in the current driver monitoring system DMS is the fusion image, the image processing apparatus may maintain the image in the driver monitoring system DMS as the fusion image.
S504, in a case where it is determined that the driver is not wearing sunglasses, the image processing device determines the target color image as the target image if the brightness value is greater than the second threshold value.
As a possible implementation manner, if it is determined that the driver is not wearing sunglasses and the brightness value is greater than the second threshold value, the image processing device may switch the fusion image or the target infrared image to the target color image if the image in the current driver monitoring system DMS is the fusion image or the target infrared image. If the image in the current driver monitoring system DMS is the target color image, the image processing apparatus may maintain the image in the driver monitoring system DMS as the target color image.
In still other embodiments, the image processing device may determine the target infrared image as the target image in the event that it is determined that the driver is wearing sunglasses.
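Following the same illustrative style as the earlier sketch, the DMS branch of S501-S504 could be sketched as below; the sunglasses flag stands in for the first/second interference parameter, and the threshold values are again only the example values from the text:

```python
# Sketch of S501-S504: the interference parameter (whether the driver wears
# sunglasses) is checked before the brightness thresholds of S401-S403.
FIRST_THRESHOLD = 25     # example value from the text
SECOND_THRESHOLD = 50    # example value from the text

def select_dms_image(wears_sunglasses, brightness, ir_image, color_image, fused_image):
    """Return the target image for the driver monitoring system."""
    if wears_sunglasses:                 # visible light cannot pass through sunglasses
        return ir_image
    if brightness <= FIRST_THRESHOLD:    # S502: very dark cabin
        return ir_image
    if brightness <= SECOND_THRESHOLD:   # S503: intermediate brightness
        return fused_image
    return color_image                   # S504: bright cabin
```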
In summary, in the case where the vehicle detection system includes the driver monitor system DMS, the correspondence relationship between the brightness value and the target image can be as described in the following table 3.
Table 3 correspondence table of interference parameters, brightness values and target images
Driver wearing sunglasses | Brightness value | Target image in DMS
No | Less than or equal to the first threshold | Target infrared image
No | Greater than the first threshold and less than or equal to the second threshold | Fusion image
No | Greater than the second threshold | Target color image
Yes | Any brightness value | Target infrared image
It should be noted that the data of Table 3 are only exemplary. In the embodiment of the present application, in the case where the vehicle detection system includes the driver monitoring system DMS, the correspondence between the brightness value and the target image may also be set in other manners, without limitation.
According to the above technical means, visible light cannot penetrate sunglasses, so the target color image cannot show whether a driver wearing sunglasses has closed his or her eyes. By adding the interference parameter, it can first be confirmed whether the driver is wearing sunglasses before the brightness value is used to determine the target image to be sent to the vehicle monitoring system. This avoids the situation in which the driver monitoring system DMS fails to accurately identify the driver's behavior while the driver is wearing sunglasses, and improves driving safety.
In one possible embodiment, in order to determine the target infrared image and the target color image, the visual monitoring method provided by the embodiment of the present application may further include the following steps S601-S602.
S601, the image processing device receives an initial infrared image and an initial color image sent by the camera.
The camera comprises a camera unit, a beam splitting prism, a visible light image sensor and an infrared light image sensor; the camera shooting unit is used for transmitting visible light of the internal environment of the vehicle to the visible light image sensor and the infrared light image sensor respectively through the beam splitting prism; the infrared light image sensor is for imaging the received visible light into an initial infrared image, and the visible light image sensor is for imaging the received visible light into an initial color image.
As a possible implementation, the image processing device may receive the initial infrared image and the initial color image sent by the camera through the control bus.
In practical applications, the camera may send the initial infrared image and the initial color image to the image processing device based on a preset frequency. Correspondingly, the image processing device receives the initial infrared image and the initial color image sent by the camera.
The preset frequency can be set as required; for example, the camera may send the images once every 1 second, once every 0.1 second, or the like.
S602, the image processing device performs format conversion on the initial infrared image to obtain a target infrared image, and performs format conversion on the initial color image to obtain a target color image.
As one possible implementation manner, the image processing device may obtain, from the vehicle monitoring system, a format that the vehicle monitoring system can identify, convert the format of the initial infrared image into that format through a format converter to obtain the target infrared image, and convert the format of the initial color image into that format to obtain the target color image.
The image format may be an RGB format, a luminance-chrominance (YUV) format, or the like.
For example, the format that the vehicle monitoring system can recognize may be a YUV format, and the format of the initial infrared image and the format of the initial color image may be an RGB format. The image processing device can switch the format of the initial infrared image from an RGB format to a YUV format to obtain a target infrared image; and switching the format of the initial color image from the RGB format to the YUV format to obtain a target color image.
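Assuming the raw frames arrive as RGB arrays and the monitoring system expects YUV, the conversion of S602 could be sketched with OpenCV as follows; the frame sizes and variable names are placeholders standing in for the real camera output:

```python
# Sketch of S602: convert the initial RGB frames into the YUV format that the
# vehicle monitoring system is assumed to accept.
import cv2
import numpy as np

def to_target_format(rgb_frame):
    """Convert one RGB frame to the YUV format assumed for the monitoring system."""
    return cv2.cvtColor(rgb_frame, cv2.COLOR_RGB2YUV)

# Placeholder frames standing in for the camera output.
initial_ir_rgb = np.zeros((720, 1280, 3), dtype=np.uint8)
initial_color_rgb = np.zeros((720, 1280, 3), dtype=np.uint8)
target_ir = to_target_format(initial_ir_rgb)        # target infrared image
target_color = to_target_format(initial_color_rgb)  # target color image
```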
According to the above technical means, the initial infrared image and the initial color image captured by the camera can be converted into a format that the vehicle monitoring system can identify. This avoids the situation in which the vehicle monitoring system cannot directly identify the images captured by the camera and therefore cannot monitor the behavior of persons in the vehicle, and improves the reliability of monitoring the behavior of persons in the vehicle.
In a possible embodiment, the camera further includes an infrared light supplement lamp, and the visual monitoring method provided by the embodiment of the application may further include the following steps S701-S702.
S701, the image processing device controls the infrared light supplementing lamp to be in an on state in a case where the brightness value is less than the second threshold value.
As a possible implementation manner, if the infrared light supplementing lamp is already on, the image processing device may keep its on-off state unchanged; if the infrared light supplementing lamp is off, the image processing device controls its switch to turn from the off state to the on state, so that the infrared light supplementing lamp is in the on state.
S702, the image processing device controls the infrared light supplementing lamp to be in a closed state under the condition that the brightness value is larger than a second threshold value.
As a possible implementation manner, if the infrared light supplementing lamp is on, the image processing device may control its switch to turn from the on state to the off state; if the infrared light supplementing lamp is already off, the image processing device may keep its on-off state unchanged, so that the infrared light supplementing lamp is in the off state.
According to the above technical means, infrared supplementary lighting can be provided when the brightness of the vehicle interior image is poor, which improves the brightness of the image and therefore the monitoring effect of the vehicle monitoring system; and because infrared light is invisible, the light source does not disturb the driver, which improves driving safety. When the brightness of the vehicle interior image is good, the infrared supplementary lighting is turned off, which avoids the face overexposure that keeping the infrared light supplementing lamp on could cause, and further improves the monitoring effect of the vehicle monitoring system.
In one possible embodiment, in order to obtain the brightness value, the visual monitoring method provided by the embodiment of the present application may further include S801 described below.
S801, the image processing device performs weighting processing on brightness of each pixel in the first image to obtain a brightness value.
The first image is any one of a target infrared image and a target color image, and the weight of the brightness of a first pixel in the first image is inversely related to the target distance; the first pixel is any one pixel in the first image, and the target distance is the distance between the first pixel and the central pixel of the first image.
As a possible implementation manner, the image processing apparatus determines brightness of each pixel in the first image and a corresponding weight value, and performs summation processing on products of the brightness of each pixel in the first image and the corresponding weight value to obtain the brightness value of the first image.
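The application does not give the exact weighting function; a minimal sketch under the assumption that the weight decays with Euclidean distance from the image centre, and is normalised so the products sum to a weighted mean, is shown below. The inverse-distance form is an assumption; the text only requires the weight to decrease as the distance to the centre pixel grows:

```python
# Sketch of S801: centre-weighted brightness value. The 1/(1+distance) weight
# is an assumption; the application only requires the weight of a pixel to be
# inversely related to its distance from the centre pixel.
import numpy as np

def brightness_value(gray_image):
    """gray_image: 2-D array of per-pixel brightness (e.g. the Y channel)."""
    h, w = gray_image.shape
    ys, xs = np.indices((h, w))
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    dist = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2)
    weights = 1.0 / (1.0 + dist)     # larger weight near the centre of the frame
    weights /= weights.sum()         # normalise so the result stays in [0, 255]
    return float((gray_image * weights).sum())
```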
According to the technical means, the brightness of each pixel in the first image is weighted, that is, the brightness of each pixel in the first image is combined, so that the brightness of the first image can be comprehensively reflected. In addition, since the person in the vehicle is usually located in the central area of the image, and the weight of the brightness of the first pixel in the present application is inversely related to the target distance, the target distance is the distance between the first pixel and the central pixel of the first image, that is, the weight of the central area pixel of the image is increased, the brightness value of the first image may have a greater correlation with the effect of monitoring the behavior of the person in the vehicle.
In one possible embodiment, the vehicle monitoring system further includes a manual control mode in which a person in the vehicle can manually select an image as a target image to be transmitted to the vehicle monitoring system. The visual monitoring method provided by the embodiment of the application can further comprise the following steps S901-S902.
S901, the image processing device controls a vehicle machine screen of the vehicle to display a setting interface of the vehicle monitoring system, wherein the setting interface includes a first setting option and a second setting option.
Wherein the first setting option is used for controlling the determination of the target color image or the fusion image as the target image, and the second setting option is used for controlling the determination of the target infrared image as the target image.
As a possible implementation manner, the image processing device may control a vehicle machine screen of the vehicle to display a setting interface of the vehicle monitoring system when the vehicle monitoring system is in a manual control mode.
In practical applications, the setting interface further includes a third setting option (which may also be referred to as an automatic control option) for controlling the image processing device to determine a target image for transmission to the vehicle monitoring system according to the environmental parameter.
S902, the image processing apparatus determines a target image for transmission to the vehicle monitoring system in response to a control instruction of a person inside the vehicle.
The control instruction may refer to an instruction generated in response to a control operation of a person inside the vehicle. For example, the control instruction may be an instruction input by a person in the vehicle through an input device (such as a physical button) of the vehicle screen, or may be a control instruction input by an operator through a touch screen of the vehicle screen.
The control instructions may include a first control instruction and a second control instruction. The first control instruction is used for controlling the target color image or the fusion image to be used as the target image sent to the vehicle monitoring system, and the second control instruction is used for controlling the target infrared image to be used as the target image sent to the vehicle monitoring system.
As one possible implementation, the image processing device determines the target image for transmission to the vehicle monitoring system as a target color image or a fused image in response to a first control instruction of a person inside the vehicle.
For example, the image processing apparatus determines that the target image for transmission to the vehicle monitoring system is a fused image if the brightness value is greater than a first threshold value and less than or equal to a second threshold value in the case of receiving a first control instruction of a person inside the vehicle. If the brightness value is greater than the second threshold value, the target image for transmission to the vehicle monitoring system is determined to be a target color image.
As yet another possible implementation, the image processing apparatus may further determine, in response to a second control instruction of a person inside the vehicle, the target image for transmission to the vehicle monitoring system as the target infrared image.
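A minimal sketch of how the manual-mode dispatch of S902 might look is given below; the string identifiers for the two control instructions, the threshold value and the behavior when the brightness value is at or below the first threshold are assumptions made for illustration only:

```python
# Sketch of S902: dispatch on the control instruction chosen from the setting
# interface. Instruction identifiers are placeholders, not defined by the text.
SECOND_THRESHOLD = 50    # example value from the text

def handle_control_instruction(instruction, brightness, ir_image, color_image, fused_image):
    """instruction: 'second' selects the infrared image, 'first' selects color/fused."""
    if instruction == 'second':
        return ir_image              # second control instruction: infrared image
    # First control instruction: pick between the fused and the color image with
    # the same brightness rule as S402/S403 (behavior below the first threshold
    # is not specified by the text and is assumed here).
    return fused_image if brightness <= SECOND_THRESHOLD else color_image
```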
According to the technical means, the flexibility of vehicle control can be improved.
The above embodiments of the present application may be combined without contradiction.
The embodiment of the application can divide the functional modules or functional units of the visual monitoring device or the image processing device according to the method example, for example, each functional module or functional unit can be divided corresponding to each function, and two or more functions can be integrated in one processing module. The integrated modules may be implemented in hardware, or in software functional modules or functional units. The division of the modules or units in the embodiment of the present application is schematic, which is merely a logic function division, and other division manners may be implemented in practice.
In the case of dividing the respective functional modules by the respective functions, fig. 8 shows a schematic structural diagram of a visual monitoring apparatus 800, which may be an image processing apparatus or a chip applied to the image processing apparatus, and the visual monitoring apparatus 800 may be used to perform the functions of the image processing apparatus as described in the above embodiments. The visual monitoring apparatus 800 shown in fig. 8 may include: an acquisition unit 801, a determination unit 802; an acquisition unit 801 for acquiring environmental parameters of an internal environment of the vehicle; the environmental parameter is used for indicating the brightness degree of the vehicle interior environmental image; a determining unit 802 for determining a target image for transmission to the vehicle monitoring system according to the environmental parameter; the target image is one of a target infrared image, a target color image and a fusion image, wherein the target infrared image and the target color image are obtained by image acquisition of the internal environment of the vehicle, and the fusion image is obtained by fusion of the target infrared image and the target color image.
Further, the vehicle detection system includes an occupant monitoring system OMS, the environmental parameter including a brightness value; the determining unit 802 is specifically configured to: determining the target infrared image as a target image in the case that the brightness value is less than or equal to the first threshold value; determining the fused image as a target image in the case where the brightness value is greater than the first threshold value and less than or equal to the second threshold value; in the case where the brightness value is greater than the second threshold value, the target color image is determined as the target image.
Further, the vehicle detection system comprises a driver monitoring system DMS, and the environmental parameters comprise a brightness value and an interference parameter, wherein the interference parameter is used for indicating whether the driver of the vehicle is wearing sunglasses; the determining unit 802 is specifically configured to: determine the target infrared image as the target image if the brightness value is less than or equal to the first threshold value in a case where it is determined that the driver is not wearing sunglasses; determine the fusion image as the target image if the brightness value is greater than the first threshold value and less than or equal to the second threshold value in a case where it is determined that the driver is not wearing sunglasses; and determine the target color image as the target image if the brightness value is greater than the second threshold value in a case where it is determined that the driver is not wearing sunglasses.
Further, the apparatus 800 further includes: a processing unit 803; the acquiring unit 801 is further configured to receive an initial infrared image and an initial color image sent by a camera in the vehicle; the camera comprises a camera unit, a beam splitting prism, a visible light image sensor and an infrared light image sensor; the camera shooting unit is used for transmitting visible light of the internal environment of the vehicle to the visible light image sensor and the infrared light image sensor respectively through the beam splitting prism; the infrared light image sensor is used for imaging the received visible light into an initial infrared image, and the visible light image sensor is used for imaging the received visible light into an initial color image; a processing unit 803, configured to perform format conversion on the initial infrared image to obtain a target infrared image; the processing unit 803 is further configured to perform format conversion on the initial color image to obtain a target color image.
Further, the camera further includes an infrared light supplement lamp, and the processing unit 803 is further configured to control the infrared light supplement lamp to be in an on state when the brightness value is less than the second threshold value; the processing unit 803 is further configured to control the infrared light compensating lamp to be in an off state if the brightness value is greater than the second threshold value.
Further, in the apparatus 800, the environmental parameter includes a brightness value, and the brightness value is used for indicating the brightness of the vehicle interior environment image captured by a camera in the vehicle.
Further, the acquiring unit 801 is specifically configured to: weighting the brightness of each pixel in the first image to obtain a brightness value; the first image is any one of a target infrared image and a target color image, and the weight of the brightness of a first pixel in the first image is inversely related to the target distance; the first pixel is any one pixel in the first image, and the target distance is the distance between the first pixel and the central pixel of the first image.
Further, the processing unit 803 is further configured to control a vehicle machine screen of the vehicle to display a setting interface of the vehicle monitoring system, where the setting interface includes a first setting option and a second setting option; the first setting option is used for controlling the determination of the target color image or the fusion image as the target image, and the second setting option is used for controlling the determination of the target infrared image as the target image; the determining unit 802 is further configured to determine a target image for transmission to the vehicle monitoring system in response to a control instruction of a person inside the vehicle.
The embodiment of the application also provides a computer readable storage medium. All or part of the flow in the above method embodiments may be implemented by a computer program instructing related hardware, where the program may be stored in the computer readable storage medium, and when executed, the program may include the flow of the above method embodiments. The computer readable storage medium may be an internal storage unit of the visual monitoring apparatus or the image processing apparatus (including the data transmitting end and/or the data receiving end) of any of the foregoing embodiments, for example, a hard disk or a memory of the visual monitoring apparatus. The computer readable storage medium may also be an external storage device of the visual monitoring apparatus, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the visual monitoring apparatus. Further, the computer readable storage medium may include both an internal storage unit and an external storage device of the visual monitoring apparatus. The computer readable storage medium is used for storing the computer program and other programs and data required by the visual monitoring device, and may also be used to temporarily store data that has been output or is to be output.
The embodiment of the application also provides a vehicle, which comprises the visual monitoring system, the image processing device or the visual monitoring device related to the embodiment of the method.
Further, the actions, terms, and the like involved in the embodiments of the present application may refer to each other and are not limited. The message names of interactions between the devices or the parameter names in the messages in the embodiments of the present application are merely examples, and other names may be used in specific implementations, without limitation.
It should be noted that the terms "first" and "second" and the like in the description, the claims and the drawings of the present application are used for distinguishing between different objects and not for describing a particular sequential order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
It should be understood that, in the present application, "at least one (item)" means one or more, "a plurality" means two or more, and "at least two (items)" means two or more. "And/or" describes an association relationship of associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: only A exists, only B exists, or both A and B exist, where A and B may be singular or plural. The character "/" generally indicates that the associated objects are in an "or" relationship. "At least one of the following items" or similar expressions refer to any combination of these items, including any combination of a single item or a plurality of items. For example, at least one of a, b or c may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b and c may be single or plural.
From the foregoing description of the embodiments, it will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of functional modules is illustrated, and in practical application, the above-described functional allocation may be implemented by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to implement all or part of the functions described above.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application may be essentially or a part contributing to the prior art or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a device (may be a single-chip microcomputer, a chip or the like) or a processor (processor) to perform all or part of the steps of the method described in the embodiments of the present application. And the aforementioned storage medium includes: a usb disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk, etc.
The foregoing is merely illustrative of specific embodiments of the present application, and the scope of the present application is not limited thereto, but any changes or substitutions within the technical scope of the present application should be covered by the scope of the present application. Therefore, the protection scope of the application is subject to the protection scope of the claims.

Claims (12)

1. A method of visual monitoring, comprising:
acquiring environmental parameters of the internal environment of the vehicle; the environmental parameter is used for indicating the brightness degree of the vehicle interior environment;
determining a target image for sending to a vehicle monitoring system according to the environmental parameter; the target image is one of a target infrared image, a target color image and a fusion image, wherein the target infrared image and the target color image are obtained by image acquisition of the vehicle internal environment, and the fusion image is obtained by fusion of the target infrared image and the target color image.
2. The visual monitoring method of claim 1, wherein the vehicle detection system comprises an occupant monitoring system OMS, the environmental parameter comprising a brightness value; the determining a target image for sending to a vehicle monitoring system according to the environment parameters comprises the following steps:
Determining the target infrared image as the target image in the case that the brightness value is less than or equal to a first threshold value;
determining the fused image as the target image if the brightness value is greater than the first threshold and less than or equal to a second threshold;
in the case where the brightness value is greater than the second threshold value, the target color image is determined as the target image.
3. The visual monitoring method of claim 1, wherein the vehicle detection system comprises a driver monitoring system DMS, and the environmental parameters comprise a brightness value and an interference parameter, the interference parameter being used to indicate whether a driver of the vehicle is wearing sunglasses;
the determining a target image for sending to a vehicle monitoring system according to the environment parameters comprises the following steps:
in a case where it is determined that the driver is not wearing sunglasses, if the brightness value is less than or equal to a first threshold value, determining the target infrared image as the target image;
in a case where it is determined that the driver is not wearing sunglasses, if the brightness value is greater than the first threshold value and less than or equal to a second threshold value, determining the fusion image as the target image;
in a case where it is determined that the driver is not wearing sunglasses, if the brightness value is greater than the second threshold value, determining the target color image as the target image.
4. A visual monitoring method according to any one of claims 1-3, characterized in that the method further comprises:
receiving an initial infrared image and an initial color image sent by a camera in the vehicle; the camera comprises a camera unit, a beam splitting prism, a visible light image sensor and an infrared light image sensor; the camera shooting unit is used for respectively transmitting visible light of the vehicle internal environment to the visible light image sensor and the infrared light image sensor through the beam splitting prism; the infrared light image sensor is used for imaging received visible light into the initial infrared image, and the visible light image sensor is used for imaging received visible light into the initial color image;
performing format conversion on the initial infrared image to obtain the target infrared image;
and performing format conversion on the initial color image to obtain the target color image.
5. The visual monitoring method of claim 4, wherein the camera further comprises an infrared light supplement lamp, the method further comprising:
Controlling the infrared light supplementing lamp to be in an on state under the condition that the brightness value is smaller than a second threshold value;
and controlling the infrared light supplementing lamp to be in a closed state under the condition that the brightness value is larger than the second threshold value.
6. A visual monitoring method according to any one of claims 1-3, characterized in that the environmental parameter comprises a brightness value, the brightness value being used for indicating the brightness of the vehicle interior environment image acquired by a camera in the vehicle.
7. The visual monitoring method of claim 6, wherein obtaining the brightness value comprises:
the brightness of each pixel in the first image is weighted to obtain the brightness value; the first image is any one of the target infrared image and the target color image, and the weight of the brightness of a first pixel in the first image is inversely related to the target distance; the first pixel is any one pixel in the first image, and the target distance is the distance between the first pixel and the central pixel of the first image.
8. A visual monitoring method according to any one of claims 1-3, characterized in that the method further comprises:
Controlling a vehicle machine screen of the vehicle to display a setting interface of the vehicle monitoring system, wherein the setting interface comprises a first setting option and a second setting option; the first setting option is used for controlling the determination of the target color image or the fusion image as the target image, and the second setting option is used for controlling the determination of the target infrared image as the target image;
a target image for transmission to the vehicle monitoring system is determined in response to a control instruction of a person inside the vehicle.
9. A visual monitoring device, the device comprising: an acquisition unit, a determination unit;
the acquisition unit is used for acquiring environmental parameters of the internal environment of the vehicle; the environmental parameters include a brightness value, and the brightness value is used for indicating the brightness of the vehicle interior environment image captured by a camera in the vehicle;
the determining unit is used for determining a target image which is used for being sent to the vehicle monitoring system according to the environment parameter; the vehicle monitoring system is used for monitoring the personnel behaviors in the vehicle; the target image is one of a target infrared image, a target color image and a fusion image, wherein the target infrared image and the target color image are obtained by image acquisition of the vehicle internal environment, and the fusion image is obtained by fusion of the target infrared image and the target color image.
10. A visual monitoring device, comprising: a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method of any one of claims 1 to 8.
11. A vehicle comprising the visual monitoring apparatus of claim 10; the visual monitoring device is configured to perform the method of any of claims 1-8.
12. A computer readable storage medium, characterized in that, when computer-executable instructions stored in the computer readable storage medium are executed by a processor of an electronic device, the electronic device is capable of performing the method of any one of claims 1 to 8.