WO2022230168A1 - Passenger status determination device and passenger status determination method - Google Patents

Passenger status determination device and passenger status determination method

Info

Publication number
WO2022230168A1
WO2022230168A1 (PCT/JP2021/017174)
Authority
WO
WIPO (PCT)
Prior art keywords
occupant
vehicle
image
determination
state
Prior art date
Application number
PCT/JP2021/017174
Other languages
French (fr)
Japanese (ja)
Inventor
洸暉 安部
Original Assignee
Mitsubishi Electric Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mitsubishi Electric Corporation
Priority to DE112021007131.9T priority Critical patent/DE112021007131T8/en
Priority to JP2023516996A priority patent/JP7330418B2/en
Priority to PCT/JP2021/017174 priority patent/WO2022230168A1/en
Publication of WO2022230168A1 publication Critical patent/WO2022230168A1/en

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/59 - Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/98 - Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
    • G06V10/993 - Evaluation of the quality of the acquired pattern
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30196 - Human being; Person
    • G06T2207/30201 - Face
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30248 - Vehicle exterior or interior
    • G06T2207/30268 - Vehicle interior

Definitions

  • The present disclosure relates to an occupant state determination device and an occupant state determination method for determining the state of a vehicle occupant.
  • Driver monitoring systems (DMS), which monitor the driver's condition by analyzing video of the driver captured by a camera installed in the vehicle, are being put to practical use.
  • A conventional DMS determines the driver's state for each video frame, accumulates the determination results, and makes a final determination of the driver's state based on the results accumulated over a certain period (see, for example, Patent Document 1 below). This improves the reliability of the final determination of the driver's condition.
  • In such a DMS, when the driver's video is captured poorly, the accumulated determination results are cleared to prevent erroneous determination. With that approach, however, in situations where shooting failures occur frequently, the opportunities for judging the driver's state decrease, and the purpose of monitoring the driver's condition cannot be sufficiently achieved.
  • Situations in which shooting failures occur frequently include, for example, scenes in which sunlight filtering through trees enters the vehicle and makes the brightness distribution of the driver's image change irregularly, and scenes with continuous curves in which the driver's face is frequently blocked by a hand operating the steering wheel or by the horn pad of the rotating steering wheel.
  • The present disclosure has been made to solve the above problems. Its object is to prevent the opportunities for determining an occupant's state from decreasing in an occupant state determination device that judges the occupant's state from in-vehicle video, even when video shooting failures occur frequently.
  • The occupant state determination device according to the present disclosure includes: an in-vehicle video acquisition unit that acquires in-vehicle video, i.e., video captured inside a vehicle; a defective-video determination unit that determines, for each frame, whether the in-vehicle video is suitable for detecting the face of a vehicle occupant and judges video unsuitable for occupant face detection to be defective video; an occupant state determination unit that determines the occupant's state based on each frame of the in-vehicle video, accumulates the determination results, and makes a final determination of the occupant's state based on the results accumulated over a certain period; and a control unit that does not let the occupant state determination unit determine the occupant's state based on video judged defective, or accumulate determination results based on such video, and that erases the determination results accumulated up to that point when the proportion of frames judged defective within the certain period exceeds a predetermined threshold.
  • According to the occupant state determination device of the present disclosure, even if a shooting failure occurs in the in-vehicle video, the accumulated determination results are not erased unless the failure is continuous, so determination of the occupant's state can be continued. Therefore, even if shooting failures occur somewhat frequently, a decrease in the opportunities for judging the occupant's state can be prevented.
  • FIG. 1 is a diagram showing the configuration of an occupant state determination device according to Embodiment 1. FIG. 2 is a diagram showing a configuration example of a defective-video determination unit.
  • FIG. 3 is a flowchart showing the operation of the occupant state determination device according to Embodiment 1. FIGS. 4 and 5 are diagrams showing hardware configuration examples of the occupant state determination device.
  • FIG. 1 is a diagram showing the configuration of an occupant state determination device 10 according to Embodiment 1.
  • The occupant state determination device 10 is mounted on a vehicle.
  • The occupant state determination device 10 does not necessarily have to be permanently installed in the vehicle; it may be realized on a portable device that can be brought into the vehicle, such as a mobile phone, a smartphone, or a portable navigation device (PND).
  • Part of the functions of the occupant state determination device 10 may be implemented on a server installed outside the vehicle that can communicate with the occupant state determination device 10.
  • The occupant state determination device 10 is connected to a camera 1 that captures video inside the vehicle in which the device is mounted (hereinafter, "in-vehicle video"), and determines the state of the vehicle's occupants (including the driver) based on the in-vehicle video captured by the camera 1. For example, if the occupant state determination device 10 determines only the driver's state, at least the driver's seat should appear in the in-vehicle video. If the in-vehicle video shows not only the driver's seat but also the passenger seat and the rear seats, the occupant state determination device 10 may also determine the state of occupants other than the driver, that is, occupants in the passenger seat and the rear seats.
  • The occupant state determination device 10 includes an in-vehicle video acquisition unit 11, a defective-video determination unit 12, an occupant state determination unit 13, and a control unit 14.
  • The in-vehicle video acquisition unit 11 acquires the in-vehicle video captured by the camera 1.
  • The defective-video determination unit 12 determines, for each frame, whether the in-vehicle video acquired by the in-vehicle video acquisition unit 11 is a defective image. Specifically, the defective-video determination unit 12 determines for each frame whether the in-vehicle video is suitable for occupant face detection, and judges video that is not suitable for occupant face detection to be a defective image.
  • A specific example of a method for determining whether the in-vehicle video is suitable for occupant face detection will be described later.
  • The occupant state determination unit 13 determines the state of the vehicle's occupants based on the in-vehicle video acquired by the in-vehicle video acquisition unit 11. More specifically, the occupant state determination unit 13 determines the occupant's state based on each frame of the in-vehicle video, accumulates the determination results, and makes a final determination of the occupant's state based on the results accumulated over a certain period. For example, the occupant state determination unit 13 takes as the final result the determination result with the highest ratio among the results obtained in that period, thereby improving the reliability of the final determination.
  • Examples of the occupant state determination include: inattentiveness determination, which determines whether the driver is looking aside while driving;
  • dozing determination, which determines whether the driver is dozing off;
  • poor-posture determination, which determines whether the occupant is seated in a normal posture; and
  • stiffness determination, which determines whether the occupant's body has gone rigid due to a seizure.
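The per-frame accumulation and majority-vote final determination described above can be sketched as follows. This is an illustrative reconstruction, not code from the patent; the class name, the state labels, and the window length are assumptions.

```python
from collections import Counter, deque

class OccupantStateAccumulator:
    """Accumulates per-frame state results and reports the majority result."""

    def __init__(self, window_frames):
        # Determination results for the most recent fixed period.
        self.results = deque(maxlen=window_frames)

    def add(self, per_frame_result):
        # e.g. "normal", "inattentive", "dozing", "bad_posture", "stiff"
        self.results.append(per_frame_result)

    def final_determination(self):
        # The final result is the determination with the highest ratio
        # among the results accumulated over the period.
        if not self.results:
            return None
        return Counter(self.results).most_common(1)[0][0]

acc = OccupantStateAccumulator(window_frames=5)
for r in ["normal", "dozing", "dozing", "normal", "dozing"]:
    acc.add(r)
print(acc.final_determination())  # -> dozing
```

Taking the most frequent result over the window is what makes a single misjudged frame unlikely to change the final determination.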
  • The determination results of the occupant state determination unit 13 are output to, for example, a vehicle alarm device or an automated driving device and used for various processes.
  • For example, a vehicle alarm device can output an alarm when the driver is inattentive or dozing while driving,
  • and an automated driving device can move the vehicle to a safe location when the driver's posture collapses or the driver's body goes stiff.
  • The control unit 14 controls the operation of the occupant state determination unit 13 as follows, based on the result of determining whether the in-vehicle video is a defective image. First, the control unit 14 does not let the occupant state determination unit 13 determine the occupant's state based on video judged defective (hereinafter sometimes simply "defective video"), thereby preventing determination results based on defective video from being accumulated. In addition, when a defective frame occurs, the control unit 14 does not immediately erase (clear) the determination results accumulated up to that point; it erases them only when the proportion of defective frames among the frames within a certain period exceeds a predetermined threshold.
  • Hereinafter, a continuous or high-frequency shooting failure in which the proportion of defective frames within a certain period exceeds the threshold is referred to as a "continuous shooting failure".
  • With this configuration, since the occupant state determination unit 13 does not accumulate determination results based on defective video, erroneous determination of the occupant's state caused by defective video is prevented. Moreover, even if a defective frame occurs, the accumulated determination results are not deleted unless a continuous shooting failure occurs, so determination of the occupant's state can continue. Therefore, even if shooting failures occur somewhat frequently, a decrease in the opportunities for judging the occupant's state can be prevented.
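The control policy just described — never accumulate results from defective frames, and clear the accumulation only when defective frames dominate a recent window — might be sketched like this. The class name, method names, and the threshold value are illustrative assumptions, not taken from the patent.

```python
from collections import deque

class DeterminationController:
    """Skips defective frames; clears accumulation on continuous failure."""

    def __init__(self, window_frames, clear_ratio):
        self.defect_flags = deque(maxlen=window_frames)
        self.clear_ratio = clear_ratio
        self.accumulated = []  # per-frame occupant-state results

    def on_frame(self, is_defective, state_result=None):
        self.defect_flags.append(is_defective)
        if not is_defective and state_result is not None:
            # Normal frame: accumulate the per-frame determination.
            self.accumulated.append(state_result)
        # Erase everything only when defects dominate the recent window.
        ratio = sum(self.defect_flags) / len(self.defect_flags)
        if ratio > self.clear_ratio:
            self.accumulated.clear()

ctrl = DeterminationController(window_frames=4, clear_ratio=0.75)
ctrl.on_frame(False, "normal")
ctrl.on_frame(True)  # one bad frame: accumulated results are kept
print(len(ctrl.accumulated))  # -> 1
```

A single bad frame leaves the accumulation intact; only a run of bad frames crosses the ratio threshold and triggers the clear.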
  • In the present embodiment, the defective-video determination unit 12 extracts a target region for occupant face detection (a so-called ROI, Region of Interest) from the in-vehicle video, and determines whether the in-vehicle video is suitable for occupant face detection based on the region's amount of whiteout (overexposure), luminance variance, edge strength, and amount of occluding objects appearing in it, as well as the number of faces detected as the face of a single occupant in the in-vehicle video.
  • Hereinafter, the target region for occupant face detection in the in-vehicle video is referred to as the "face detection target area".
  • FIG. 2 is a diagram showing a configuration example of the defective-video determination unit 12.
  • The defective-video determination unit 12 in FIG. 2 includes a face detection target area setting unit 121, a whiteout amount calculation unit 122, a luminance variance calculation unit 123, an edge strength calculation unit 124, an occlusion detection unit 125, a multiple face detection unit 126, a shooting failure determination unit 127, a continuous shooting failure determination unit 128, and a steering wheel steering angle detection unit 129.
  • The face detection target area setting unit 121 sets the face detection target area for the latest frame of the in-vehicle video.
  • The face detection target area may be set by any method. For example, it may be set by actually detecting the position of the occupant's face in the latest frame of the in-vehicle video, or by estimating the position of the occupant's face in the latest frame from the face positions detected in past frames of the in-vehicle video.
  • The whiteout amount calculation unit 122 calculates the amount of whiteout (overexposure) in the face detection target area, and determines that the in-vehicle video is not suitable for face detection if the calculated value exceeds a predetermined threshold.
  • The luminance variance calculation unit 123 calculates the luminance variance of the face detection target area, and determines that the in-vehicle video is not suitable for face detection if the calculated value is below a predetermined threshold.
  • The edge strength calculation unit 124 calculates the integrated edge strength of the face detection target area using, for example, a Laplacian filter, and determines that the in-vehicle video is not suitable for face detection if the calculated value is below a predetermined threshold.
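As a rough illustration of the three pixel-level checks above (whiteout amount, luminance variance, and Laplacian edge strength), the following NumPy sketch applies them to a grayscale face detection target area. The function name and all threshold values are invented for illustration; the patent leaves the thresholds as design parameters.

```python
import numpy as np

def roi_is_suitable(roi, white_thresh=0.3, var_thresh=100.0, edge_thresh=5.0):
    """Return True if a grayscale ROI looks usable for face detection."""
    roi = roi.astype(np.float64)
    # Whiteout: fraction of near-saturated pixels (overexposure).
    whiteout = np.mean(roi >= 250)
    # Luminance variance: very low variance means a flat, featureless ROI.
    variance = roi.var()
    # Edge strength: mean absolute response of a 4-neighbor Laplacian.
    lap = (-4 * roi
           + np.roll(roi, 1, axis=0) + np.roll(roi, -1, axis=0)
           + np.roll(roi, 1, axis=1) + np.roll(roi, -1, axis=1))
    edge_strength = np.abs(lap).mean()
    return bool(whiteout <= white_thresh
                and variance >= var_thresh
                and edge_strength >= edge_thresh)

# A uniform gray patch has zero variance and no edges: judged unsuitable.
flat = np.full((32, 32), 128, dtype=np.uint8)
print(roi_is_suitable(flat))  # -> False
```

Note the `np.roll`-based Laplacian wraps around the ROI border; a production implementation would more likely use a proper convolution with border handling.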
  • The occlusion detection unit 125 calculates the area of the region in which the luminance variance of the face detection target area is at or below a certain value, and if the ratio of that area to the face detection target area exceeds a predetermined threshold (for example, 75%), it determines that the occupant's face is occluded and that the in-vehicle video is not suitable for face detection.
  • The defective-video determination unit 12 in FIG. 2 is provided with a steering wheel steering angle detection unit 129 that detects the steering angle of the steering wheel. From the steering angle, the occlusion detection unit 125 can obtain the region of the face detection target area shielded by the horn pad, and if the ratio of that region to the face detection target area exceeds a predetermined threshold, it determines that the in-vehicle video is not suitable for face detection.
  • The multiple face detection unit 126 counts the number of faces detected for each occupant in the in-vehicle video, and determines that the in-vehicle video is not suitable for face detection if multiple faces are detected as the face of a single occupant. Any method may be used to count the number of faces of each occupant.
  • The multiple face detection unit 126 of the present embodiment counts the number of faces of the occupant in each seat by counting the number of face detection target areas set in the portion of the in-vehicle video corresponding to that seat.
  • The shooting failure determination unit 127 determines whether the latest frame of the in-vehicle video is a defective image by taking the OR of the determination results of the whiteout amount calculation unit 122, the luminance variance calculation unit 123, the edge strength calculation unit 124, the occlusion detection unit 125, and the multiple face detection unit 126. That is, the shooting failure determination unit 127 judges the in-vehicle video to be a defective image if one or more of those units determine that it is unsuitable for face detection.
  • The continuous shooting failure determination unit 128 determines whether a continuous shooting failure has occurred based on the determination results of the shooting failure determination unit 127 over a certain past period. Specifically, the continuous shooting failure determination unit 128 accumulates the determination results of the shooting failure determination unit 127 for a certain period, and determines that a continuous shooting failure has occurred if the proportion of defective frames among the frames within that period exceeds a predetermined threshold.
  • The fixed period and the threshold may be set to arbitrary values. For example, it may be determined that a continuous shooting failure has occurred if 75% or more of the frames of in-vehicle video in the preceding 1.5 seconds are defective. The ratio may also be set to 100%, in which case a continuous shooting failure is determined when defective frames continue throughout the fixed period.
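The windowed ratio test with the example figures above (a 1.5-second window, 75% defective frames) could be sketched as follows. The frame rate, the behavior while the window is still filling, and the class name are assumptions for illustration.

```python
from collections import deque

class ContinuousFailureDetector:
    """Declares continuous failure when defects dominate a sliding window."""

    def __init__(self, fps=30, period_s=1.5, ratio=0.75):
        # At 30 fps, a 1.5-second period corresponds to 45 frames.
        self.window = deque(maxlen=int(fps * period_s))
        self.ratio = ratio

    def update(self, frame_is_defective):
        self.window.append(frame_is_defective)
        if len(self.window) < self.window.maxlen:
            return False  # not enough history accumulated yet
        return sum(self.window) / len(self.window) >= self.ratio

# Small window for demonstration: 4 frames, 75% threshold.
det = ContinuousFailureDetector(fps=4, period_s=1.0, ratio=0.75)
flags = [True, True, True, False, True, True, True]
states = [det.update(f) for f in flags]
print(states)  # -> [False, False, False, True, True, True, True]
```

With `ratio=1.0` this degenerates to "every frame in the period is defective", matching the 100% variant mentioned above.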
  • A difference may be provided between the threshold at which the continuous shooting failure determination unit 128 detects the start of a continuous shooting failure and the threshold at which it detects the end of one.
  • For example, by setting the threshold for detecting the start of a continuous shooting failure higher than the threshold for detecting its end, a hysteresis characteristic is given to the determination of the presence or absence of a continuous shooting failure, which prevents the output of the continuous shooting failure determination unit 128 from becoming unstable.
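The hysteresis described above — a higher threshold for entering the continuous-failure state than for leaving it — might look like this in code. The two threshold values and the class name are illustrative assumptions.

```python
class HysteresisFailureState:
    """Two-threshold (hysteresis) continuous-failure state machine."""

    def __init__(self, start_ratio=0.75, end_ratio=0.5):
        # Entering requires a higher defect ratio than staying in.
        assert start_ratio > end_ratio
        self.start_ratio = start_ratio
        self.end_ratio = end_ratio
        self.failing = False

    def update(self, defect_ratio):
        if not self.failing and defect_ratio >= self.start_ratio:
            self.failing = True   # start of continuous shooting failure
        elif self.failing and defect_ratio < self.end_ratio:
            self.failing = False  # end: ratio must drop well below start
        return self.failing

h = HysteresisFailureState()
# A ratio hovering around 0.6 no longer toggles the output on and off.
print([h.update(r) for r in (0.6, 0.8, 0.6, 0.4)])  # -> [False, True, True, False]
```

With a single threshold at 0.75, a ratio oscillating between 0.7 and 0.8 would flip the output every frame; the gap between the two thresholds absorbs that jitter.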
  • In this way, the shooting failure determination unit 127 determines for each frame whether an instantaneous shooting failure has occurred, and the continuous shooting failure determination unit 128 determines whether a continuous shooting failure has occurred.
  • The control unit 14 controls the operation of the occupant state determination unit 13 based on the determination results of the shooting failure determination unit 127 and the continuous shooting failure determination unit 128. That is, the control unit 14 does not let the occupant state determination unit 13 determine the occupant's state based on video that the shooting failure determination unit 127 judges to be defective, and, when the continuous shooting failure determination unit 128 determines that a continuous shooting failure has occurred, erases the occupant-state determination results accumulated in the occupant state determination unit 13.
  • The defective-video determination unit 12 need not use all of the whiteout amount, the luminance variance, the edge strength of the target area, the amount of occluding objects appearing in it, and the number of faces detected as the face of a single occupant in the in-vehicle video. It suffices for the defective-video determination unit 12 to include, in addition to the face detection target area setting unit 121, one or more of the whiteout amount calculation unit 122, the luminance variance calculation unit 123, the edge strength calculation unit 124, the occlusion detection unit 125, and the multiple face detection unit 126.
  • In the above description, the shooting failure determination unit 127 determines whether the latest frame of the in-vehicle video is a defective image (i.e., whether it is suitable for face detection) from the determination results of the whiteout amount calculation unit 122, the luminance variance calculation unit 123, the edge strength calculation unit 124, the occlusion detection unit 125, and the multiple face detection unit 126, but the determination method is not limited to this. For example, the shooting failure determination unit 127 may calculate, based on one or more of those determination results, the reliability that face detection would have if the occupant's face were detected using the latest frame of the in-vehicle video, and judge in-vehicle video whose reliability is smaller than a predetermined threshold to be unsuitable for occupant face detection.
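The reliability-based alternative mentioned above could be sketched as a weighted combination of the individual metric scores, thresholded into a single pass/fail decision. The scoring scale, the weights, and the threshold are all invented for illustration; the patent does not specify how the reliability is computed.

```python
def face_detection_reliability(scores, weights=None):
    """Weighted mean of per-check scores, each in [0, 1] (1 = no problem)."""
    if weights is None:
        weights = {k: 1.0 for k in scores}
    total = sum(weights.values())
    return sum(weights[k] * scores[k] for k in scores) / total

# Per-check scores for one frame; occlusion looks severe here.
scores = {"whiteout": 1.0, "variance": 0.9, "edge": 0.8,
          "occlusion": 0.2, "faces": 1.0}
rel = face_detection_reliability(scores)
is_defective = rel < 0.8  # unsuitable when reliability is below threshold
print(round(rel, 2), is_defective)  # -> 0.78 True
```

Unlike the hard OR of pass/fail checks, a soft score lets several mildly degraded metrics jointly push a frame below the threshold even when no single check fails outright.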
  • In the above description, the presence or absence of a continuous shooting failure is determined by the defective-video determination unit 12, but this determination may instead be performed by the control unit 14. That is, the continuous shooting failure determination unit 128 may be included in the control unit 14 rather than in the defective-video determination unit 12.
  • FIG. 3 is a flowchart showing the operation of the occupant state determination device 10 according to Embodiment 1. The operation of the occupant state determination device 10 will be described below with reference to the flowchart of FIG. 3.
  • First, the in-vehicle video acquisition unit 11 acquires the latest frame of the in-vehicle video from the camera 1 (step ST1). Subsequently, the defective-video determination unit 12 determines whether the in-vehicle video acquired by the in-vehicle video acquisition unit 11 is a defective image (step ST2).
  • If the in-vehicle video is determined not to be a defective image (NO in step ST3), the occupant state determination unit 13 determines the state of the vehicle's occupants based on the in-vehicle video (step ST4) and accumulates the determination result (step ST5). If the in-vehicle video is determined to be a defective image (YES in step ST3), the control unit 14 causes the occupant state determination unit 13 to skip the processing of steps ST4 and ST5.
  • Next, the defective-video determination unit 12 determines whether a continuous shooting failure of the in-vehicle video has occurred (step ST6). If it is determined that no continuous shooting failure has occurred (NO in step ST7), the occupant state determination unit 13 makes a final determination of the occupant's state based on the determination results accumulated over a certain period (step ST8). If it is determined that a continuous shooting failure has occurred (YES in step ST7), the control unit 14 causes the occupant state determination unit 13 to skip step ST8 and erases the occupant-state determination results accumulated up to that point (step ST9).
  • The occupant state determination device 10 repeatedly executes the above processing at the frame period of the in-vehicle video captured by the camera 1.
  • The flow of FIG. 3 shows an example in which the control unit 14 causes the occupant state determination unit 13 to skip both steps ST4 and ST5 when the in-vehicle video is determined to be a defective image.
  • Alternatively, the control unit 14 may cause the occupant state determination unit 13 to skip only step ST5. In other words, the occupant state determination unit 13 may determine the occupant's state based on the defective image, as long as the control unit 14 keeps that result from being accumulated.
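Putting the pieces together, the per-frame loop of FIG. 3 (steps ST1 to ST9) might be sketched as follows. `camera`, the two judging callbacks, and all parameter values are placeholders, not names from the patent.

```python
from collections import Counter, deque

def run_monitoring_loop(camera, judge_defective, judge_state,
                        window_frames=45, clear_ratio=0.75):
    """Yield a final occupant-state determination after each usable frame."""
    defect_flags = deque(maxlen=window_frames)
    accumulated = []
    for frame in camera:                            # ST1: acquire frame
        bad = judge_defective(frame)                # ST2/ST3: defective?
        defect_flags.append(bad)
        if not bad:
            accumulated.append(judge_state(frame))  # ST4/ST5: judge, store
        ratio = sum(defect_flags) / len(defect_flags)
        if ratio > clear_ratio:                     # ST6/ST7: continuous?
            accumulated.clear()                     # ST9: erase results
        elif accumulated:
            # ST8: final determination by majority over the accumulation.
            yield Counter(accumulated).most_common(1)[0][0]

# Toy demo: integer "frames", negative values count as defective.
out = list(run_monitoring_loop([9, 9, 9, 1],
                               judge_defective=lambda f: f < 0,
                               judge_state=lambda f: "dozing" if f > 5 else "normal"))
print(out)  # -> ['dozing', 'dozing', 'dozing', 'dozing']
```

A run of defective frames long enough to cross `clear_ratio` empties the accumulation and suppresses the final determination until usable frames return, which is exactly the branch ST7 to ST9 in the flowchart.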
  • FIGS. 4 and 5 are diagrams each showing an example of the hardware configuration of the occupant state determination device 10.
  • Each function of the components of the occupant state determination device 10 shown in FIG. 1 is realized by, for example, a processing circuit 50 shown in FIG. 4. That is, the occupant state determination device 10 includes the processing circuit 50 for: acquiring in-vehicle video, i.e., video captured inside the vehicle; determining for each frame whether the in-vehicle video is suitable for occupant face detection and judging video unsuitable for occupant face detection to be a defective image; determining the occupant's state based on each frame of the in-vehicle video, accumulating the determination results, and making a final determination of the occupant's state based on the results accumulated over a certain period; refraining from determining the occupant's state, or accumulating determination results, based on video judged defective; and erasing the determination results accumulated up to that point when the proportion of defective frames within the certain period exceeds a predetermined threshold.
  • The processing circuit 50 may be dedicated hardware, or may be a processor (also called a CPU (Central Processing Unit), processing device, arithmetic device, microprocessor, microcomputer, or DSP (Digital Signal Processor)) that executes a program stored in memory.
  • The processing circuit 50 may be, for example, a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a combination of these.
  • FIG. 5 shows an example of the hardware configuration of the occupant state determination device 10 when the processing circuit 50 is configured using a processor 51 that executes programs.
  • In this case, the functions of the components of the occupant state determination device 10 are realized by software or the like (software, firmware, or a combination of software and firmware).
  • The software or the like is written as a program and stored in the memory 52.
  • The processor 51 realizes the function of each unit by reading and executing the program stored in the memory 52. That is, the occupant state determination device 10 includes the memory 52 for storing a program which, when executed by the processor 51, results in the execution of: processing for acquiring in-vehicle video, i.e., video captured inside the vehicle; processing for determining for each frame whether the in-vehicle video is suitable for detecting the faces of the vehicle's occupants and judging video unsuitable for occupant face detection to be a defective image; processing for determining the occupant's state based on each frame of the in-vehicle video, accumulating the determination results, and making a final determination of the occupant's state based on the results accumulated over a certain period; processing for refraining from determining the occupant's state, or accumulating determination results, based on video judged defective; and processing for erasing the determination results accumulated up to that point when the proportion of defective frames within the certain period exceeds a predetermined threshold. In other words, this program can be said to cause a computer to execute the procedures and methods of operation of the components of the occupant state determination device 10.
  • The memory 52 may be, for example, a non-volatile or volatile semiconductor memory, an HDD (Hard Disk Drive), a magnetic disk, a flexible disk, an optical disc, a compact disc, a mini disc, a DVD (Digital Versatile Disc), a drive device for any of these, or any storage medium that may be used in the future.
  • The configuration is not limited to the above; some components of the occupant state determination device 10 may be realized by dedicated hardware while other components are realized by software or the like.
  • For example, the functions of some components may be realized by the processing circuit 50 as dedicated hardware, while for other components the processing circuit 50 as the processor 51 realizes the functions by reading and executing programs stored in the memory 52.
  • As described above, the occupant state determination device 10 can realize each of the functions described above by hardware, software, or the like, or a combination thereof.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

In this passenger status determination device (10), a vehicle interior image acquisition unit (11) acquires a vehicle interior image in which the vehicle interior is captured. A faulty-image determination unit (12) determines, for each frame, whether the vehicle interior image is suited for detecting the face of a passenger, and judges a vehicle interior image determined not to be suited for detecting the face of a passenger to be a faulty image. A passenger status determination unit (13) determines the status of the passenger on the basis of the vehicle interior image of each frame, accumulates the results of that determination, and makes a final determination of the status of the passenger on the basis of the determination results accumulated over a given period. A control unit (14) causes the passenger status determination unit (13) not to determine the status of the passenger based on vehicle interior images judged to be faulty, or not to accumulate the results of such determinations, and furthermore, when the proportion of vehicle interior images judged faulty within the given period exceeds a predetermined threshold value, causes the passenger status determination unit (13) to delete the determination results accumulated up to that point.

Description

乗員状態判定装置および乗員状態判定方法Occupant state determination device and occupant state determination method
 本開示は、車両の乗員の状態を判定する乗員状態判定装置および乗員状態判定方法に関するものである。 The present disclosure relates to an occupant state determination device and an occupant state determination method for determining the state of a vehicle occupant.
 車両に設置されたカメラで撮影した運転者の映像を解析して、運転者の状態を監視するドライバーモニタリングシステム(DMS)の実用化が進んでいる。従来のDMSは、運転者の状態を映像のフレームごとに判定してその判定結果を累積し、一定期間に累積された判定結果に基づいて、運転者の状態の最終的な判定を行う(例えば下記の特許文献1)。それにより、運転者の状態の最終的な判定結果の信頼性が向上する。  The driver monitoring system (DMS), which monitors the condition of the driver by analyzing the video of the driver captured by the camera installed in the vehicle, is being put to practical use. A conventional DMS determines the state of the driver for each video frame, accumulates the determination results, and makes a final determination of the driver's state based on the determination results accumulated over a certain period of time (for example, Patent document 1) below. This improves the reliability of the final determination result of the driver's condition.
JP 2008-226163 A
In a DMS as described above, when defective imaging of the driver occurs, the accumulated determination results are cleared (erased) to prevent erroneous determination of the driver's state caused by the defective imaging. However, with this method, in situations where imaging defects occur frequently, the opportunities to determine the driver's state decrease, and the purpose of monitoring the driver's state cannot be sufficiently achieved. Situations where imaging defects occur frequently include, for example, scenes in which sunlight filtering through trees enters the vehicle and the luminance distribution of the driver's image changes irregularly, and scenes with continuous curves in which the driver's face is frequently occluded by the hand operating the steering wheel or by the horn pad of the rotating steering wheel.
The present disclosure has been made to solve the above problems, and its object is, in an occupant state determination device that determines the state of an occupant from in-vehicle video, to suppress a decrease in the opportunities to determine the occupant's state when imaging defects occur frequently.
An occupant state determination device according to the present disclosure includes: an in-vehicle video acquisition unit that acquires in-vehicle video, which is video captured inside a vehicle; a defective image determination unit that determines, frame by frame, whether the in-vehicle video is suited to detecting the face of a vehicle occupant, and judges in-vehicle video found unsuited to occupant face detection to be a defective image; an occupant state determination unit that determines the occupant's state based on the in-vehicle video of each frame, accumulates the determination results, and makes a final determination of the occupant's state based on the occupant-state determination results accumulated over a certain period; and a control unit that does not let the occupant state determination unit perform the determination of the occupant's state based on in-vehicle video judged to be a defective image, or the accumulation of occupant-state determination results based on such video, and further causes the occupant state determination unit to erase the occupant-state determination results accumulated up to that point when the proportion of in-vehicle video judged to be defective within the period exceeds a predetermined threshold.
According to the occupant state determination device of the present disclosure, even if a defective in-vehicle image occurs, the accumulated determination results are not erased unless the imaging failure is continuous, and determination of the occupant's state can continue. Therefore, even if imaging defects occur frequently to some extent, a decrease in the opportunities to determine the occupant's state is suppressed.
The objects, features, aspects, and advantages of the present disclosure will become more apparent from the following detailed description and the accompanying drawings.
FIG. 1 is a diagram showing the configuration of an occupant state determination device according to Embodiment 1.
FIG. 2 is a diagram showing a configuration example of a defective image determination unit.
FIG. 3 is a flowchart showing the operation of the occupant state determination device according to Embodiment 1.
FIG. 4 is a diagram showing an example hardware configuration of the occupant state determination device.
FIG. 5 is a diagram showing an example hardware configuration of the occupant state determination device.
<Embodiment 1>
FIG. 1 is a diagram showing the configuration of an occupant state determination device 10 according to Embodiment 1. In the present embodiment, the occupant state determination device 10 is assumed to be mounted on a vehicle. However, the occupant state determination device 10 need not be permanently installed in the vehicle; it may be realized, for example, on a portable device that can be brought into the vehicle, such as a mobile phone, smartphone, or portable navigation device (PND). Part of the functions of the occupant state determination device 10 may also be realized on a server that is installed outside the vehicle and can communicate with the occupant state determination device 10.
The occupant state determination device 10 is connected to a camera 1 that captures video of the interior of the vehicle in which the occupant state determination device 10 is mounted (hereinafter, "in-vehicle video"), and determines the state of the vehicle's occupants (including the driver) based on the in-vehicle video captured by the camera 1. For example, if the occupant state determination device 10 determines only the driver's state, the in-vehicle video need only show at least the driver's seat. If the in-vehicle video includes not only the driver's seat but also the front passenger seat and rear seats, the occupant state determination device 10 may also determine the states of occupants other than the driver, that is, occupants of the front passenger seat and rear seats.
As shown in FIG. 1, the occupant state determination device 10 includes an in-vehicle video acquisition unit 11, a defective image determination unit 12, an occupant state determination unit 13, and a control unit 14.
The in-vehicle video acquisition unit 11 acquires the in-vehicle video captured by the camera 1. The defective image determination unit 12 determines, frame by frame, whether the in-vehicle video acquired by the in-vehicle video acquisition unit 11 is a defective image. Specifically, the defective image determination unit 12 determines for each frame whether the in-vehicle video is suited to detecting the face of a vehicle occupant, and judges the in-vehicle video of any frame found unsuited to occupant face detection to be a defective image. A specific example of how to determine whether the in-vehicle video is suited to occupant face detection is described later.
The occupant state determination unit 13 determines the state of the vehicle's occupants based on the in-vehicle video acquired by the in-vehicle video acquisition unit 11. More specifically, the occupant state determination unit 13 determines the occupant's state based on the in-vehicle video of each frame, accumulates the determination results, and makes a final determination of the occupant's state based on the results accumulated over a certain period. For example, the occupant state determination unit 13 may take, as the final result, the determination result with the highest share among those obtained during the period, which improves the reliability of the final determination.
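For illustration only (not part of the patent disclosure), the accumulate-then-vote scheme described above can be sketched as follows. The class name, window size, and state labels are hypothetical; the final determination here is the most frequent result in a sliding window, as the paragraph above suggests.

```python
from collections import Counter, deque

class OccupantStateAccumulator:
    """Accumulates per-frame occupant-state determinations over a sliding
    window and reports the most frequent result as the final determination."""

    def __init__(self, window_size):
        # deque(maxlen=...) keeps only the most recent `window_size` results
        self.results = deque(maxlen=window_size)

    def add(self, per_frame_result):
        self.results.append(per_frame_result)

    def clear(self):
        # Called when continuous imaging failure is detected
        self.results.clear()

    def final_determination(self):
        # Return the state with the highest share among accumulated results
        if not self.results:
            return None
        return Counter(self.results).most_common(1)[0][0]

acc = OccupantStateAccumulator(window_size=5)
for state in ["normal", "drowsy", "normal", "normal", "drowsy"]:
    acc.add(state)
print(acc.final_determination())  # -> "normal" (3 of 5 frames)
```

Because the final result is a majority over many frames, a single misjudged frame does not flip the output, which is the reliability benefit the paragraph describes.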
There is no restriction on the types of occupant-state determination performed by the occupant state determination unit 13. Examples include inattention determination (whether the driver is looking away from the road), drowsiness determination (whether the driver is dozing off), posture-collapse determination (whether the occupant is seated in a normal posture), and rigidity determination (whether the occupant's body has stiffened, for example due to a seizure).
The determination results of the occupant state determination unit 13 are output, for example, to the vehicle's alarm device or automated driving device and used in various processes. For example, the alarm device can output a warning when inattentive or drowsy driving occurs, and the automated driving device can move the vehicle to a safe place when the driver's posture collapses or the driver's body stiffens.
The control unit 14 controls the operation of the occupant state determination unit 13 as follows, based on the determination of whether the in-vehicle video is a defective image. First, the control unit 14 does not let the occupant state determination unit 13 determine the occupant's state from in-vehicle video judged to be a defective image (hereinafter sometimes simply called a "defective image"), thereby preventing determination results based on defective images from being accumulated. Second, when a defective image occurs, the control unit 14 does not immediately erase (clear) the determination results accumulated up to that point; it erases them only when defective images occur continuously or frequently enough that the proportion of defective images among the frames within a certain period exceeds a predetermined threshold. Hereinafter, continuous or high-frequency imaging failure in which the proportion of defective images among the frames within a certain period exceeds the threshold is called "continuous imaging failure".
According to the occupant state determination device 10 of the present embodiment, the occupant state determination unit 13 does not accumulate determination results based on defective images, so erroneous determination of the occupant's state caused by defective images is prevented. Moreover, even when a defective image occurs, the occupant state determination unit 13 does not erase the accumulated determination results unless continuous imaging failure has occurred, so determination of the occupant's state can continue until continuous imaging failure occurs. Therefore, even if imaging defects occur frequently to some extent, a decrease in the opportunities to determine the occupant's state is suppressed.
The defective image determination unit 12 is now described in detail. In the present embodiment, the defective image determination unit 12 extracts a target region for occupant face detection in the in-vehicle video (a so-called ROI, Region of Interest), and determines whether the in-vehicle video is suited to occupant face detection based on the amount of whiteout (blown highlights), the luminance variance, the edge strength, and the amount of occluding objects in the extracted target region, as well as the number of faces detected as the face of a single occupant in the in-vehicle video. Hereinafter, the target region for occupant face detection in the in-vehicle video is called the "face detection target region".
FIG. 2 is a diagram showing a configuration example of the defective image determination unit 12. The defective image determination unit 12 of FIG. 2 includes a face detection target region setting unit 121, a whiteout amount calculation unit 122, a luminance variance calculation unit 123, an edge strength calculation unit 124, an occluding object detection unit 125, a multiple face detection unit 126, an imaging failure determination unit 127, a continuous imaging failure determination unit 128, and a steering wheel angle detection unit 129.
The face detection target region setting unit 121 sets the face detection target region in the latest frame of the in-vehicle video. Any setting method may be used: for example, the region may be set by actually detecting the position of the occupant's face in the latest frame of the in-vehicle video, or by estimating the position of the occupant's face in the latest frame from the position detected in the immediately preceding frame or frames.
If the whole or part of the occupant's face in the in-vehicle video becomes extremely bright, for example because direct sunlight strikes the face, whiteout occurs in the face detection target region and facial features can no longer be distinguished, so such in-vehicle video is not suited to face detection. The whiteout amount calculation unit 122 calculates the amount of whiteout in the face detection target region, and determines that the in-vehicle video is not suited to face detection if the proportion of the blown-out portion in the face detection target region exceeds a predetermined threshold.
Conversely, when sunlight enters the camera 1 from behind the occupant, the AE (Auto Exposure) control of the camera 1 shortens the exposure time, which can crush the face detection target region into black so that facial features cannot be detected; such in-vehicle video is likewise not suited to face detection. The luminance variance calculation unit 123 calculates the luminance variance of the face detection target region, and determines that the in-vehicle video is not suited to face detection if the value falls below a predetermined threshold.
Normally captured in-vehicle video has strong edges; if the edges of the in-vehicle video become weak due to some abnormality, facial features cannot be detected, so such in-vehicle video is not suited to face detection. The edge strength calculation unit 124 calculates the integrated edge strength of the face detection target region, for example using a Laplacian filter, and determines that the in-vehicle video is not suited to face detection if the value falls below a predetermined threshold.
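For illustration only (not part of the patent disclosure), the three per-frame quality checks above (whiteout ratio, luminance variance, edge strength) can be sketched on a grayscale ROI represented as a list of pixel rows. All thresholds and the white level are hypothetical values chosen for the sketch, not values specified by the disclosure.

```python
def whiteout_ratio(roi, white_level=250):
    """Fraction of ROI pixels at or above the blown-out luminance level."""
    pixels = [p for row in roi for p in row]
    return sum(p >= white_level for p in pixels) / len(pixels)

def luminance_variance(roi):
    """Variance of ROI luminance; near zero for crushed-black regions."""
    pixels = [p for row in roi for p in row]
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) ** 2 for p in pixels) / len(pixels)

def edge_strength(roi):
    """Sum of absolute 4-neighbour Laplacian responses over interior pixels."""
    h, w = len(roi), len(roi[0])
    total = 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (roi[y - 1][x] + roi[y + 1][x] + roi[y][x - 1]
                   + roi[y][x + 1] - 4 * roi[y][x])
            total += abs(lap)
    return total

def unsuited_for_face_detection(roi, whiteout_th=0.5, var_th=100.0, edge_th=50):
    """True if any single quality check fails (hypothetical thresholds)."""
    return (whiteout_ratio(roi) > whiteout_th
            or luminance_variance(roi) < var_th
            or edge_strength(roi) < edge_th)
```

A uniformly gray ROI fails the variance and edge checks, while a high-contrast ROI passes all three, matching the intuition in the paragraphs above.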
When, for example, the vehicle's steering wheel lies between the occupant and the camera 1, and the occupant's face is occluded by a hand operating the steering wheel or by the horn pad of the rotating steering wheel, facial features cannot be detected in the face detection target region, so in-vehicle video captured in such a state is not suited to face detection. The occluding object detection unit 125 calculates the area of the portion of the face detection target region in which the luminance variance is at or below a certain value, and if the proportion of that area in the face detection target region exceeds a predetermined threshold (for example, 75%), it concludes that the occupant's face is occluded and determines that the in-vehicle video is not suited to face detection.
Whether the occupant's face is occluded by the horn pad of the steering wheel, in other words whether the horn pad lies between the camera 1 and the occupant, can also be judged from the steering wheel angle obtained, for example, from the vehicle's CAN (Controller Area Network) information. The defective image determination unit 12 of FIG. 2 therefore includes a steering wheel angle detection unit 129 that detects the steering wheel angle, and the occluding object detection unit 125 calculates, based on the steering wheel angle, the area of the face detection target region occluded by the horn pad; if the proportion of that area in the face detection target region exceeds a predetermined threshold, it determines that the in-vehicle video is not suited to face detection.
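For illustration only (not part of the patent disclosure), the variance-based occlusion check can be sketched by tiling the ROI into small blocks and measuring what fraction of them are nearly flat; a flat occluder such as a hand or horn pad produces large low-variance areas. The block size, variance floor, and 75% cutoff below are hypothetical (the 75% figure echoes the example in the section above).

```python
def occlusion_ratio(roi, block=2, var_floor=5.0):
    """Fraction of ROI area whose local luminance variance falls below
    var_floor, used as a proxy for a flat occluder (hand, horn pad)."""
    h, w = len(roi), len(roi[0])
    low, total = 0, 0
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            vals = [roi[y + dy][x + dx]
                    for dy in range(block) for dx in range(block)]
            mean = sum(vals) / len(vals)
            var = sum((v - mean) ** 2 for v in vals) / len(vals)
            total += 1
            low += var < var_floor
    return low / total

def face_occluded(roi, area_threshold=0.75):
    # Occluded when low-variance blocks cover more than ~75% of the ROI
    return occlusion_ratio(roi) > area_threshold
```

The steering-angle route mentioned above would instead compute the horn pad's projected overlap with the ROI geometrically; that requires camera and wheel geometry and is omitted here.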
The face of a rear-seat occupant may be erroneously detected as the driver's face, for example when the rear-seat occupant leans toward the driver's seat. If two faces are thereby detected as the driver's face, the driver's facial features cannot be detected correctly, so such in-vehicle video is not suited to face detection. The multiple face detection unit 126 counts the number of faces detected for each occupant in the in-vehicle video, and determines that the in-vehicle video is not suited to face detection if multiple faces are detected as the face of a single occupant. Any counting method may be used; the multiple face detection unit 126 of the present embodiment counts the number of faces per seat by counting the number of face detection target regions set in the portion of the in-vehicle video corresponding to each seat.
The imaging failure determination unit 127 determines whether the latest frame of the in-vehicle video is a defective image by taking the logical OR of the determination results of the whiteout amount calculation unit 122, the luminance variance calculation unit 123, the edge strength calculation unit 124, the occluding object detection unit 125, and the multiple face detection unit 126. That is, if one or more of these units has determined that the in-vehicle video is not suited to face detection, the imaging failure determination unit 127 judges that in-vehicle video to be a defective image.
The continuous imaging failure determination unit 128 determines whether continuous imaging failure has occurred based on the determination results of the imaging failure determination unit 127 over a certain past period. Specifically, the continuous imaging failure determination unit 128 accumulates the determination results of the imaging failure determination unit 127 over the period, and determines that continuous imaging failure has occurred when the proportion of defective images among the frames of in-vehicle video within the period exceeds a predetermined threshold.
The period and the threshold may each be set to any value: for example, continuous imaging failure may be judged to have occurred when 75% or more of the in-vehicle video frames of the preceding 1.5 seconds are defective images. The proportion may also be 100%, in which case continuous imaging failure is judged to have occurred when defective images continue for the entire period.
A difference may also be provided between the threshold with which the continuous imaging failure determination unit 128 detects the start of continuous imaging failure and the threshold with which it detects its end. For example, by setting the threshold for detecting the start of continuous imaging failure higher than the threshold for detecting its end, the determination of continuous imaging failure acquires a hysteresis characteristic, which prevents the output of the continuous imaging failure determination unit 128 from becoming unstable.
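For illustration only (not part of the patent disclosure), the hysteresis described above can be sketched with two thresholds: the failure state is entered only when the bad-frame ratio in the window exceeds the higher threshold, and left only when it falls below the lower one. The window length and both thresholds are hypothetical.

```python
from collections import deque

class ContinuousFailureDetector:
    """Flags continuous imaging failure with hysteresis: a higher threshold
    to enter the failure state than to leave it, stabilising the output."""

    def __init__(self, window, enter_th=0.75, exit_th=0.5):
        assert enter_th > exit_th  # required for hysteresis
        self.history = deque(maxlen=window)  # recent bad-frame flags
        self.enter_th, self.exit_th = enter_th, exit_th
        self.failing = False

    def update(self, is_bad_frame):
        self.history.append(is_bad_frame)
        ratio = sum(self.history) / len(self.history)
        if not self.failing and ratio > self.enter_th:
            self.failing = True          # start of continuous failure
        elif self.failing and ratio < self.exit_th:
            self.failing = False         # end of continuous failure
        return self.failing

det = ContinuousFailureDetector(window=4)
flags = [det.update(b) for b in
         [True, True, True, True, False, True, False, False]]
# the flag stays raised through isolated good frames and only drops
# once the bad-frame ratio falls below exit_th
print(flags)
```

Without the gap between the two thresholds, a ratio hovering near a single threshold would make the output chatter on and off every few frames.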
Thus, in the defective image determination unit 12, the imaging failure determination unit 127 determines the presence or absence of instantaneous imaging failure for each frame, and the continuous imaging failure determination unit 128 determines the presence or absence of continuous imaging failure.
The control unit 14 controls the operation of the occupant state determination unit 13 based on the determination results of the imaging failure determination unit 127 and the continuous imaging failure determination unit 128. That is, the control unit 14 does not let the occupant state determination unit 13 determine the occupant's state from in-vehicle video judged by the imaging failure determination unit 127 to be a defective image, and causes the occupant state determination unit 13 to erase the accumulated occupant-state determination results when the continuous imaging failure determination unit 128 determines that continuous imaging failure has occurred.
The present embodiment shows an example in which the defective image determination unit 12 considers all of the whiteout amount, the luminance variance, the edge strength, and the amount of occluding objects in the target region, as well as the number of faces detected as the face of a single occupant, in determining whether the in-vehicle video is suited to occupant face detection; however, it suffices to consider at least one of them. That is, the defective image determination unit 12 need only include one or more of the face detection target region setting unit 121, the whiteout amount calculation unit 122, the luminance variance calculation unit 123, the edge strength calculation unit 124, the occluding object detection unit 125, and the multiple face detection unit 126.
In the present embodiment, the imaging failure determination unit 127 determines whether the latest frame of the in-vehicle video is a defective image (whether it is suited to face detection) by taking the logical OR of the determination results of the whiteout amount calculation unit 122, the luminance variance calculation unit 123, the edge strength calculation unit 124, the occluding object detection unit 125, and the multiple face detection unit 126; however, the determination method is not limited to this. For example, the imaging failure determination unit 127 may calculate, based on one or more of these determination results, the reliability of face detection that would be obtained if occupant face detection were performed on the latest frame of the in-vehicle video (the face detection confidence), and judge in-vehicle video whose confidence falls below a predetermined threshold to be unsuited to occupant face detection.
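For illustration only (not part of the patent disclosure), one way to realise the confidence-based alternative above is to normalise each quality check to a score in [0, 1] and combine the scores multiplicatively, so any single poor property pulls the overall confidence down. The score names, the product rule, and the 0.5 threshold are all hypothetical choices for this sketch; the disclosure does not specify how the confidence is computed.

```python
def face_detection_confidence(checks):
    """checks: dict of named quality scores in [0, 1], where 1.0 means the
    property is ideal for face detection. Returns the overall confidence
    as the product of the clipped scores."""
    conf = 1.0
    for score in checks.values():
        conf *= max(0.0, min(1.0, score))
    return conf

def is_defective(checks, threshold=0.5):
    # Frame is treated as a defective image when confidence is too low
    return face_detection_confidence(checks) < threshold

good = {"exposure": 0.9, "contrast": 0.95, "sharpness": 0.9, "visibility": 1.0}
print(is_defective(good))  # confidence 0.7695 -> False (not defective)
```

Compared with a hard OR of pass/fail checks, a graded confidence lets several borderline properties jointly disqualify a frame even when no single check fails outright.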
In the present embodiment, the determination of whether continuous imaging failure has occurred is performed by the defective image determination unit 12, but it may instead be performed by the control unit 14. That is, the continuous imaging failure determination unit 128 may be included in the control unit 14 rather than in the defective image determination unit 12.
FIG. 3 is a flowchart showing the operation of the occupant state determination device 10 according to Embodiment 1. The operation of the occupant state determination device 10 is described below with reference to the flowchart of FIG. 3.
When the occupant state determination device 10 starts, the in-vehicle video acquisition unit 11 acquires the latest frame of the in-vehicle video from the camera 1 (step ST1). The defective image determination unit 12 then determines whether the in-vehicle video acquired by the in-vehicle video acquisition unit 11 is a defective image (step ST2).
If the in-vehicle video is determined not to be a defective image (NO in step ST3), the occupant state determination unit 13 determines the state of the vehicle's occupants based on that in-vehicle video (step ST4) and accumulates the determination result (step ST5). If the in-vehicle video is determined to be a defective image (YES in step ST3), the control unit 14 causes the occupant state determination unit 13 to skip steps ST4 and ST5.
Next, the defective image determination unit 12 determines whether continuous imaging failure of the in-vehicle video has occurred (step ST6). If continuous imaging failure is determined not to have occurred (NO in step ST7), the occupant state determination unit 13 makes a final determination of the occupant's state based on the determination results accumulated over the certain period (step ST8). If continuous imaging failure is determined to have occurred (YES in step ST7), the control unit 14 causes the occupant state determination unit 13 to skip step ST8 and to erase the determination results accumulated up to that point (step ST9).
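For illustration only (not part of the patent disclosure), the per-frame flow of steps ST1 through ST9 can be sketched end to end. Each input element stands for one acquired frame as a pair (is_bad, per_frame_state); the window length, ratio threshold, and majority-vote final determination are hypothetical choices consistent with the examples given earlier.

```python
from collections import deque

def run_flow(frames, window=4, bad_ratio_th=0.75):
    """Sketch of the FIG. 3 loop. Returns one final determination per
    frame (None when no final determination is made)."""
    history = deque(maxlen=window)   # ST2: recent defective-image flags
    accumulated = []                 # ST5: accumulated per-frame results
    finals = []
    for is_bad, state in frames:     # ST1: one acquired frame per element
        history.append(is_bad)
        if not is_bad:               # ST3 -> ST4/ST5: skip bad frames
            accumulated.append(state)
        if sum(history) / len(history) > bad_ratio_th:   # ST6/ST7
            accumulated.clear()      # ST9: continuous failure -> erase
            finals.append(None)
        elif accumulated:            # ST8: majority of accumulated results
            finals.append(max(set(accumulated), key=accumulated.count))
        else:
            finals.append(None)
    return finals
```

Note how a run of bad frames leaves the accumulated results (and the final determination) intact until the bad-frame ratio in the window actually crosses the threshold; only then are the results erased.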
The occupant state determination device 10 repeats the above processing at the frame period of the in-vehicle video captured by the camera 1.
The flow of FIG. 3 shows an example in which the control unit 14 causes the occupant state determination unit 13 to skip both steps ST4 and ST5 when the in-vehicle video is determined to be a defective image. However, since it suffices to prevent determination results based on defective images from being accumulated, the control unit 14 may cause the occupant state determination unit 13 to skip only step ST5. That is, the occupant state determination unit 13 may perform the determination of the occupant's state based on the defective image, as long as the control unit 14 does not let that determination result be accumulated.
 FIGS. 4 and 5 are diagrams each showing an example of the hardware configuration of the occupant state determination device 10. Each function of the components of the occupant state determination device 10 shown in FIG. 1 is realized by, for example, a processing circuit 50 shown in FIG. 4. That is, the occupant state determination device 10 includes the processing circuit 50 for: acquiring an in-vehicle image, which is an image captured of the inside of a vehicle; determining, for each frame, whether the in-vehicle image is suitable for detecting the face of an occupant of the vehicle, and determining an in-vehicle image judged unsuitable for the occupant's face detection to be a defective image; determining the state of the occupant on the basis of the in-vehicle image of each frame, accumulating the determination results, and making a final determination of the occupant's state on the basis of the determination results accumulated over a certain period; preventing the determination of the occupant's state based on an in-vehicle image judged to be a defective image, or the accumulation of such determination results; and erasing the determination results of the occupant's state accumulated up to that point when the proportion of in-vehicle images judged to be defective within the certain period exceeds a predetermined threshold. The processing circuit 50 may be dedicated hardware, or may be configured using a processor that executes a program stored in a memory (also referred to as a central processing unit (CPU), processing device, arithmetic device, microprocessor, microcomputer, or DSP (Digital Signal Processor)).
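The accumulate-and-erase behaviour realized by the processing circuit 50 can be illustrated with the following sketch. The class name, the window length, the threshold value, and the majority-vote final determination are all assumptions made for illustration; the specification fixes only the behaviour (accumulate per-frame results, and erase them when the bad-frame ratio in the period exceeds a threshold), not these particulars.

```python
class OccupantStateAccumulator:
    """Sketch of the accumulate/erase logic: per-frame results are collected
    over a fixed window of frames; at the end of the window, the accumulated
    results are erased without a final determination if too many frames in
    the window were judged defective."""

    def __init__(self, window=30, bad_ratio_threshold=0.5):
        self.window = window
        self.bad_ratio_threshold = bad_ratio_threshold
        self.results = []   # accumulated per-frame judgements (good frames)
        self.total = 0      # frames seen in the current window
        self.bad = 0        # defective frames in the current window

    def add_frame(self, result, is_bad):
        """Feed one frame's judgement; returns the final determination at the
        end of a window, or None otherwise."""
        self.total += 1
        if is_bad:
            self.bad += 1           # defective frame: result is not accumulated
        else:
            self.results.append(result)
        if self.total < self.window:
            return None             # window not yet complete
        final = None
        if self.bad / self.total > self.bad_ratio_threshold:
            pass                    # too many bad frames: discard everything
        elif self.results:
            # final determination, here a simple majority vote (assumption)
            final = max(set(self.results), key=self.results.count)
        self.results.clear()        # erase / reset for the next window
        self.total = self.bad = 0
        return final
```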
 If the processing circuit 50 is dedicated hardware, the processing circuit 50 corresponds to, for example, a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a combination of these. The functions of the components of the occupant state determination device 10 may each be realized by separate processing circuits, or may be realized collectively by a single processing circuit.
 FIG. 5 shows an example of the hardware configuration of the occupant state determination device 10 in the case where the processing circuit 50 is configured using a processor 51 that executes programs. In this case, the functions of the components of the occupant state determination device 10 are realized by software or the like (software, firmware, or a combination of software and firmware). The software or the like is written as a program and stored in a memory 52. The processor 51 realizes the function of each unit by reading and executing the program stored in the memory 52. That is, the occupant state determination device 10 includes the memory 52 for storing a program that, when executed by the processor 51, results in the execution of: a process of acquiring an in-vehicle image, which is an image captured of the inside of a vehicle; a process of determining, for each frame, whether the in-vehicle image is suitable for detecting the face of an occupant of the vehicle, and determining an in-vehicle image judged unsuitable for the occupant's face detection to be a defective image; a process of determining the state of the occupant on the basis of the in-vehicle image of each frame, accumulating the determination results, and making a final determination of the occupant's state on the basis of the determination results accumulated over a certain period; and a process of preventing the determination of the occupant's state based on an in-vehicle image judged to be a defective image, or the accumulation of such determination results, and further erasing the accumulated determination results of the occupant's state when the proportion of in-vehicle images judged to be defective within the certain period exceeds a predetermined threshold. In other words, this program can be said to cause a computer to execute the procedures and methods of operation of the components of the occupant state determination device 10.
 Here, the memory 52 may be, for example, a nonvolatile or volatile semiconductor memory such as a RAM (Random Access Memory), a ROM (Read Only Memory), a flash memory, an EPROM (Erasable Programmable Read Only Memory), or an EEPROM (Electrically Erasable Programmable Read Only Memory); an HDD (Hard Disk Drive); a magnetic disk, flexible disk, optical disc, compact disc, MiniDisc, or DVD (Digital Versatile Disc) and their drive devices; or any storage medium to be used in the future.
 The above has described configurations in which the functions of the components of the occupant state determination device 10 are realized by either hardware or software or the like. However, the configuration is not limited to this; some components of the occupant state determination device 10 may be realized by dedicated hardware, while other components are realized by software or the like. For example, the functions of some components can be realized by the processing circuit 50 as dedicated hardware, while the functions of other components can be realized by the processing circuit 50 as the processor 51 reading and executing a program stored in the memory 52.
 As described above, the occupant state determination device 10 can realize each of the functions described above by hardware, software or the like, or a combination thereof.
 Note that the embodiment can be modified or omitted as appropriate.
 The above description is illustrative in all aspects, and it is understood that countless variations not illustrated can be envisaged.
 1 camera, 10 occupant state determination device, 11 in-vehicle image acquisition unit, 12 defective image determination unit, 13 occupant state determination unit, 14 control unit, 121 face detection target area setting unit, 122 overexposure amount calculation unit, 123 luminance variance calculation unit, 124 edge strength calculation unit, 125 shielding object detection unit, 126 multiple-face detection unit, 127 defective imaging determination unit, 128 continuous defective imaging determination unit, 129 steering wheel steering angle detection unit, 50 processing circuit, 51 processor, 52 memory.

Claims (6)

  1.  An occupant state determination device comprising:
     an in-vehicle image acquisition unit that acquires an in-vehicle image, which is an image captured of the inside of a vehicle;
     a defective image determination unit that determines, for each frame, whether the in-vehicle image is suitable for detecting a face of an occupant of the vehicle, and determines the in-vehicle image judged unsuitable for the occupant's face detection to be a defective image;
     an occupant state determination unit that determines a state of the occupant on the basis of the in-vehicle image of each frame, accumulates determination results thereof, and makes a final determination of the state of the occupant on the basis of the determination results of the state of the occupant accumulated over a certain period; and
     a control unit that does not cause the occupant state determination unit to perform the determination of the state of the occupant based on the in-vehicle image judged to be the defective image, or the accumulation of the determination results of the state of the occupant based on the in-vehicle image judged to be the defective image, and that further causes the determination results of the state of the occupant accumulated up to that point to be erased when a proportion of the in-vehicle images judged to be the defective images within the certain period exceeds a predetermined threshold.
  2.  The occupant state determination device according to claim 1, wherein the defective image determination unit determines whether the in-vehicle image is suitable for the occupant's face detection on the basis of at least one of: an overexposure amount, a luminance variance amount, an edge strength, and an amount of reflection of a shielding object in a target region for the occupant's face detection in the in-vehicle image; and a number of faces detected as the occupant's face from the in-vehicle image.
  3.  The occupant state determination device according to claim 1, wherein the defective image determination unit calculates a face detection reliability, which is a reliability of the occupant's face detection when the in-vehicle image is used, and determines the in-vehicle image for which the face detection reliability falls below a predetermined threshold to be unsuitable for the occupant's face detection.
  4.  The occupant state determination device according to claim 3, wherein the defective image determination unit calculates the face detection reliability on the basis of at least one of: an overexposure amount, a luminance variance amount, an edge strength, and an amount of reflection of a shielding object in a target region for the occupant's face detection in the in-vehicle image; and a number of faces detected as the occupant's face from the in-vehicle image.
  5.  The occupant state determination device according to claim 1, wherein the determination of the state of the occupant performed by the occupant state determination unit is any one of an inattention determination, a drowsiness determination, a posture-collapse determination, and a stiffness determination.
  6.  An occupant state determination method comprising:
     acquiring, by an in-vehicle image acquisition unit of an occupant state determination device, an in-vehicle image, which is an image captured of the inside of a vehicle;
     determining, by a defective image determination unit of the occupant state determination device, for each frame, whether the in-vehicle image is suitable for detecting a face of an occupant of the vehicle, and determining the in-vehicle image judged unsuitable for the occupant's face detection to be a defective image;
     determining, by an occupant state determination unit of the occupant state determination device, a state of the occupant on the basis of the in-vehicle image of each frame, accumulating determination results thereof, and making a final determination of the state of the occupant on the basis of the determination results of the state of the occupant accumulated over a certain period; and
     by a control unit of the occupant state determination device, not causing the occupant state determination unit to perform the determination of the state of the occupant based on the in-vehicle image judged to be the defective image, or the accumulation of the determination results of the state of the occupant based on the in-vehicle image judged to be the defective image, and further causing the determination results of the state of the occupant accumulated up to that point to be erased when a proportion of the in-vehicle images judged to be the defective images within the certain period exceeds a predetermined threshold.
PCT/JP2021/017174 2021-04-30 2021-04-30 Passenger status determination device and passenger status determination method WO2022230168A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
DE112021007131.9T DE112021007131T8 (en) 2021-04-30 2021-04-30 Occupant condition assessment device and occupant condition assessment method
JP2023516996A JP7330418B2 (en) 2021-04-30 2021-04-30 Occupant state determination device and occupant state determination method
PCT/JP2021/017174 WO2022230168A1 (en) 2021-04-30 2021-04-30 Passenger status determination device and passenger status determination method


Publications (1)

Publication Number Publication Date
WO2022230168A1 2022-11-03

Family

ID=83848170


Country Status (3)

Country Link
JP (1) JP7330418B2 (en)
DE (1) DE112021007131T8 (en)
WO (1) WO2022230168A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013054717A (en) * 2011-09-02 2013-03-21 Hyundai Motor Co Ltd Driver's condition monitoring device using infrared sensor and method thereof
JP2019079285A (en) * 2017-10-25 2019-05-23 いすゞ自動車株式会社 Safe driving promotion system and safe driving promotion method
JP2020181281A (en) * 2019-04-24 2020-11-05 株式会社デンソーアイティーラボラトリ Line-of-sight direction estimation device, method of calibrating line-of-sight direction estimation device, and program
JP2020194227A (en) * 2019-05-24 2020-12-03 日本電産モビリティ株式会社 Face hiding determination device, face hiding determination method, face hiding determination program, and occupant monitoring system
JP2021043526A (en) * 2019-09-06 2021-03-18 大日本印刷株式会社 Image processing apparatus and image search method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4795281B2 (en) 2007-03-15 2011-10-19 本田技研工業株式会社 Vehicle safety device


Also Published As

Publication number Publication date
DE112021007131T8 (en) 2024-01-25
JP7330418B2 (en) 2023-08-21
DE112021007131T5 (en) 2024-01-18
JPWO2022230168A1 (en) 2022-11-03


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21939326; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2023516996; Country of ref document: JP; Kind code of ref document: A)
WWE Wipo information: entry into national phase (Ref document number: 112021007131; Country of ref document: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 21939326; Country of ref document: EP; Kind code of ref document: A1)