WO2022234662A1 - Drowsiness estimation device and drowsiness estimation system - Google Patents
Drowsiness estimation device and drowsiness estimation system
- Publication number
- WO2022234662A1 (PCT/JP2021/017544)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B21/00—Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
- G08B21/02—Alarms for ensuring the safety of persons
- G08B21/06—Alarms for ensuring the safety of persons indicating a condition of sleep, e.g. anti-dozing alarms
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/08—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
- B60W2040/0818—Inactivity or incapacity of driver
- B60W2040/0827—Inactivity or incapacity of driver due to sleepiness
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2420/00—Indexing codes relating to the type of sensors based on the principle of their operation
- B60W2420/40—Photo, light or radio wave sensitive means, e.g. infrared sensors
- B60W2420/403—Image sensing, e.g. optical camera
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
- G06V20/597—Recognising the driver's state or behaviour, e.g. attention or drowsiness
Definitions
- the present disclosure relates to a drowsiness estimation device and a drowsiness estimation system for estimating drowsiness of passengers in a vehicle.
- In order to prevent drowsy driving, a technology has been developed that estimates the drowsiness of an occupant by detecting elements of the occupant's face, such as the eyes and mouth (hereinafter referred to as facial elements), and calculating feature amounts such as the degree of eye opening and the degree of mouth opening.
- When detecting the facial elements of the occupant, there is a possibility that some facial elements cannot be detected due to various factors, such as the occupant wearing sunglasses or a mask.
- In a conventional technique, the shading of the scenery reflected in the lenses of sunglasses, or the shape of a mask, is extracted as a feature quantity to estimate the state of the occupant's face (see, for example, Patent Document 1).
- The present disclosure has been made to solve the above-described problems, and its object is to provide a drowsiness estimation device capable of ensuring the reliability of the result of estimating the drowsiness of an occupant even when some facial elements of the occupant are not detected.
- A drowsiness estimation device according to the present disclosure includes an image acquisition unit that acquires a captured image from an imaging device that images an occupant in a vehicle, a face detection unit that performs face detection processing to detect a plurality of facial elements of the occupant from the captured image, and a drowsiness estimation unit that estimates the drowsiness of the occupant detected from the captured image. When some facial elements of the occupant are not detected by a second face detection process, the drowsiness estimation unit estimates the drowsiness of the occupant by using first face information obtained from those facial elements as detected by an earlier first face detection process, together with second face information obtained from the other facial elements that the second face detection process did detect.
- A drowsiness estimation system according to the present disclosure includes an imaging device that is mounted on a vehicle and images an occupant in the vehicle, an image acquisition unit that acquires a captured image from the imaging device, a face detection unit, and a drowsiness estimation unit that estimates the drowsiness of the occupant detected from the captured image.
- The face detection unit performs a first face detection process and, after it, a second face detection process. First face information obtained from some facial elements detected by the first face detection process, and second face information obtained from other facial elements detected by the second face detection process, are used to estimate the drowsiness of the occupant.
- According to the present disclosure, the reliability of the drowsiness estimation result for the occupant can be ensured even when some facial elements of the occupant are not detected.
- FIG. 1 is a block diagram showing a configuration example of a drowsiness estimation system according to Embodiment 1;
- FIG. 4 is an explanatory diagram showing an imaging range of the imaging device according to Embodiment 1;
- FIG. 4 is an explanatory diagram showing an example of feature point detection by the drowsiness estimation device according to Embodiment 1;
- FIG. 10 is an explanatory diagram showing an example of a face element identification result of the drowsiness estimation device according to Embodiment 1;
- 4 is a flowchart showing an operation example of the drowsiness estimation device according to Embodiment 1;
- 1 is a diagram showing a hardware configuration example of a drowsiness estimation device according to Embodiment 1;
- FIG. 9 is a flow chart showing an operation example of the drowsiness estimation device according to Embodiment 2;
- FIG. 11 is a configuration diagram of a learning device for a drowsiness estimation device according to Embodiment 3;
- FIG. 12 is an explanatory diagram showing a learning example of the neural network of the learning device according to Embodiment 3;
- FIG. 12 is a flow chart showing an example of a learned model generation process of the learning device according to Embodiment 3;
- FIG. 12 is an explanatory diagram showing the degree of contribution to inference of the amount of change in face information according to Embodiment 3;
- FIG. 11 is a configuration diagram showing a configuration example of a drowsiness estimation system according to Embodiment 3;
- 10 is a flowchart showing an operation example of the drowsiness estimation device according to Embodiment 3;
- FIG. 1 is a block diagram showing a configuration example of a drowsiness estimation system 100 according to Embodiment 1.
- the drowsiness estimation system 100 includes a drowsiness estimation device 10 and an imaging device 20, and the drowsiness estimation device 10 and the imaging device 20 are mounted on a vehicle.
- the drowsiness estimation device 10 is connected to a vehicle-side control device 200 that controls in-vehicle devices such as air conditioners, audio devices, navigation devices, and notification units, and the engine, etc., in the vehicle in which the drowsiness estimation device 10 is mounted.
- FIG. 2 is an explanatory diagram showing the imaging range of the imaging device 20 according to the first embodiment.
- FIG. 2 shows a top view of the inside of a vehicle equipped with the drowsiness estimation device 10.
- the imaging device 20 is composed of, for example, a wide-angle camera, an infrared camera, etc., and images the inside of the vehicle. Further, the imaging device 20 may be a TOF (Time-of-flight) camera capable of capturing an image reflecting the distance between the imaging device 20 and the subject.
- The imaging device 20 captures images of the inside of the vehicle at a rate of, for example, 30 to 60 fps (frames per second), and outputs each captured image (hereinafter referred to as the captured image) to the image acquisition unit 1 of the drowsiness estimation device 10.
- area A indicates the imaging range of the imaging device 20 .
- One or a plurality of imaging devices 20 are arranged on an overhead console, an instrument panel, a steering column, a room mirror, or the like so that at least a driver 503 in a driver's seat 501 is included in an imaging range.
- the imaging device 20 may be arranged so that the imaging range includes the passenger in the front passenger seat 502 or the rear seat, that is, the fellow passenger 504 . That is, in addition to the driver's seat 501, the imaging range of the imaging device 20 may include, for example, at least one of the front passenger seat 502, the left rear seat, the center rear seat, and the right rear seat.
- the driver 503 and fellow passenger 504 may be referred to as "passengers".
- The drowsiness estimation device 10 includes an image acquisition unit 1 that acquires a captured image from the imaging device 20, a face detection unit 11 that detects facial elements of the occupant from the captured image, and a drowsiness estimation unit 12 that estimates the drowsiness of the occupant detected from the captured image.
- the image acquisition unit 1 of the drowsiness estimation device 10 is connected to the imaging device 20 and acquires captured images from the imaging device 20 . Then, the image acquisition section 1 outputs the acquired captured image to the face detection section 11 . Further, as shown in FIG. 1, the image acquisition section 1 is connected to the vehicle information acquisition section 2 described below. For example, when the image acquisition unit 1 acquires a signal indicating that the engine of the vehicle has started from the vehicle information acquisition unit 2, acquisition of the captured image is started. On the other hand, when the image acquisition unit 1 acquires, for example, a signal indicating that the engine of the vehicle has stopped from the vehicle information acquisition unit 2, the acquisition of the captured image ends.
- the drowsiness estimation device 10 has a vehicle information acquisition unit 2 connected to the vehicle-side control device 200 .
- The vehicle information acquisition unit 2 acquires signals relating to the starting, stopping, and so on of the vehicle from the vehicle-side control device 200. Then, using the signal acquired from the vehicle-side control device 200, the vehicle information acquisition unit 2 outputs to the image acquisition unit 1 a signal to start or to end acquisition of the captured image.
- For example, when the vehicle information acquisition unit 2 receives from the vehicle-side control device 200 information indicating that the doors have been unlocked, a door has been opened, the ignition has been turned on, an occupant sensor has been turned on, the shift lever has been moved to the drive position, the vehicle speed has exceeded 0 km/h, the navigation device has started route guidance, or the vehicle has left home, it outputs to the image acquisition unit 1 a signal to start acquisition of captured images.
- Conversely, when the vehicle information acquisition unit 2 receives information indicating that the ignition has been turned off, the occupant sensor has been turned off, the shift lever has been moved to the parking position, the navigation device has finished route guidance, or the vehicle has arrived home, it outputs to the image acquisition unit 1 a signal to end acquisition of captured images.
- The face detection unit 11 includes a feature point detection unit 3 that detects feature points of the occupant's facial elements in the captured image, a feature amount calculation unit 4 that calculates feature amounts related to the occupant's facial elements, and a face element determination unit 5 that determines whether some of the occupant's plurality of facial elements have not been detected.
- FIG. 3 is an explanatory diagram showing an example of feature point detection by the drowsiness estimation device 10 according to the first embodiment.
- FIG. 3 shows a captured image 61 in which the driver 503 is captured.
- the feature point detection unit 3 analyzes the captured image and detects a face area in which the face of the passenger exists in the captured image.
- the feature point detection unit 3 extracts an area where the face of the passenger exists, for example, from the contrast ratio in the captured image.
- the feature point detection unit 3 sets an area such as a rectangle so as to include the area where the passenger's face exists, and detects it as a face area 71, as in the example of FIG. 3B. Furthermore, the feature point detection unit 3 acquires the positional information of the face region and outputs it to the storage unit (not shown) of the drowsiness estimation device 10 .
- The position information of the face region means, for example, when the face region is rectangular, the coordinates of each vertex of the face region with respect to a specific point in the captured image (for example, point O shown in FIG. 3A), as well as the width, height, size, and the like of the face region.
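As an illustrative sketch (the class and field names are hypothetical, not taken from the patent), the face-region position information described above, vertex coordinates relative to a reference point plus width, height, and size, could be represented as follows:

```python
from dataclasses import dataclass


@dataclass
class FaceRegion:
    """Rectangular face region; coordinates are relative to a fixed
    reference point in the captured image (e.g. point O)."""
    x: int       # horizontal offset of the top-left vertex
    y: int       # vertical offset of the top-left vertex
    width: int
    height: int

    @property
    def vertices(self):
        """Coordinates of the four vertices of the rectangle."""
        return [(self.x, self.y),
                (self.x + self.width, self.y),
                (self.x, self.y + self.height),
                (self.x + self.width, self.y + self.height)]

    @property
    def size(self):
        """Area of the region in pixels."""
        return self.width * self.height
```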
- the feature point detection unit 3 extracts feature points of the occupant's face elements (hereinafter referred to as face elements) included in the face area.
- the facial elements of the occupant are, for example, the occupant's left eye 81, right eye 82, nose 83, mouth 84, etc. shown in FIG. 3B.
- the facial elements of the occupant may include not only the parts of the occupant's face, but also the shape of the occupant's head, the outline of the face, and the like. That is, the occupant's face element may be any element that constitutes the occupant's body.
- the feature point detection processing by the feature point detection unit 3 can use various known algorithms, and detailed description of these algorithms will be omitted. For example, when detecting the feature points of the facial elements of the occupant, the feature point detection unit 3 detects one or more feature points of each of the facial elements (eg, left eye, right eye, nose, mouth, etc.). Execute the process to detect.
- For the left eye and the right eye, for example, the feature point detection unit 3 acquires the position information in the captured image of characteristic parts of those facial elements, that is, feature points such as the pupil, the outer corner of the eye, the inner corner of the eye, the upper eyelid, and the lower eyelid. Hereinafter, for convenience of explanation, the left eye and the right eye may be collectively referred to as the eyes.
- For the nose, the feature point detection unit 3 acquires position information in the captured image of feature points such as the root, tip, bridge, and alae of the nose; for the mouth, it acquires position information of feature points such as the upper lip, the lower lip, and the corners of the mouth. The position information of the feature points acquired for each facial element is then output to the feature amount calculation unit 4 and the face element determination unit 5.
- The position information of the feature points of each facial element acquired by the feature point detection unit 3 is, for example, information indicating coordinates with a specific position in the captured image (for example, point O shown in FIG. 3A) as the origin, or coordinates with the position of the face region as the origin.
- the feature amount calculation unit 4 of the face detection unit 11 calculates the feature amount related to the facial elements of the occupant from the detected feature points.
- The feature amount is information that indicates the state of the occupant, such as the degree of eye opening and the degree of mouth opening, and is used in the drowsiness estimation processing described later.
- the feature amount may be information obtained over a plurality of captured images, such as the frequency of blinking, the duration of blinking, the presence or absence of yawning, and the speed of nodding.
- Various well-known algorithms can be used for the feature amount calculation processing by the feature amount calculation unit 4 as well.
- the feature amount calculated by the feature amount calculation unit 4 may also be recorded in the storage unit of the drowsiness estimation device 10 .
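The patent does not specify formulas for these feature amounts. The sketch below assumes a common approach in which the eye-opening degree is the ratio of eyelid separation to eye width, and the blinking frequency is counted from a per-frame series of opening degrees; all function names, the closure threshold, and the frame rate are illustrative assumptions:

```python
import math


def distance(p, q):
    """Euclidean distance between two (x, y) feature points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])


def eye_opening_degree(upper_lid, lower_lid, inner_corner, outer_corner):
    """Ratio of eyelid separation to eye width; smaller values
    indicate a more closed eye."""
    return distance(upper_lid, lower_lid) / distance(inner_corner, outer_corner)


def blink_frequency(opening_series, threshold=0.15, fps=30):
    """Count eye closures (opening degree falling below the threshold)
    over a sequence of frames and convert to blinks per minute."""
    blinks = 0
    closed = False
    for degree in opening_series:
        if degree < threshold and not closed:
            blinks += 1        # a new closure begins
            closed = True
        elif degree >= threshold:
            closed = False     # the eye has reopened
    seconds = len(opening_series) / fps
    return blinks * 60.0 / seconds
```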
- Hereinafter, the information detected by the face detection processing and used in the occupant drowsiness estimation processing, that is, the position information of the feature points of the facial elements and the feature amounts calculated from the facial elements, are collectively referred to as face information.
- When detecting the facial elements of an occupant from a captured image, detection may be difficult due to various factors.
- For example, when the occupant wears sunglasses, the eyes, which are facial elements of the occupant, are hidden by the sunglasses, making them difficult for the face detection unit 11 to detect. Likewise, when the occupant wears a mask, the nose and mouth are hidden behind the mask.
- When the occupant covers a part of the face with a hand, the facial elements hidden by the hand cannot be detected.
- In addition, when the captured image becomes dark or bright due to backlight or the like, the contrast ratio between a facial element and its background becomes small, which may make it difficult for the face detection unit 11 to detect the facial element.
- Therefore, when some of the plurality of facial elements are not detected by the face detection processing, the drowsiness estimation unit 12 estimates the drowsiness of the occupant by also using face information detected in the past.
- the facial element determination unit 5 acquires the position information of the feature points of the facial elements from the feature point detection unit 3 of the face detection unit 11, and identifies the facial elements detected from the captured image.
- For example, when the position information of the feature points acquired from the feature point detection unit 3 indicates that the occupant's left eye exists in the captured image, the face element determination unit 5 identifies the left eye as detected. Conversely, when the position information indicates that the occupant's left eye does not exist in the captured image, the face element determination unit 5 identifies the left eye as not having been detected by the face detection unit 11.
- The face element determination unit 5 may also identify whether a facial element of the occupant has been detected based on the position within the face region at which its feature points exist. In this case, for the eyes, if the feature points of the eye, that is, the pupil, the outer corner of the eye, the inner corner of the eye, the upper eyelid, the lower eyelid, and so on, are detected at plausible positions in the face region, the eye may be judged as detected. Similarly, for other facial elements such as the nose, mouth, and contour, if a feature point of the facial element is detected at a position where that element may exist in the face region, the element may be identified as detected by the face detection unit 11.
- Alternatively, the face element determination unit 5 may acquire the calculation results of the feature amounts from the feature amount calculation unit 4 of the face detection unit 11 and identify the facial elements detected from the captured image. In this case, for example, when the degree of mouth opening is acquired from the feature amount calculation unit 4, the face element determination unit 5 may judge that the mouth was detected, and when it is not acquired, judge that the mouth was not detected. In this way, if the feature amount corresponding to a facial element is acquired, the face element determination unit 5 judges that the facial element was detected from the captured image; otherwise, it judges that the facial element was not detected.
- The feature amount corresponding to a facial element is, for example, the degree of eye opening for the eyes, the degree of mouth opening for the mouth, and the orientation of the face for the contour.
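The feature-amount-based identification just described can be sketched as a simple lookup, judging an element as detected exactly when its corresponding feature amount was calculated (the element and feature names are hypothetical labels, not from the patent):

```python
# Feature amount whose presence implies that the corresponding facial
# element was detected (eye opening -> eyes, mouth opening -> mouth,
# face orientation -> contour).
REQUIRED_FEATURE = {
    "eyes": "eye_opening_degree",
    "mouth": "mouth_opening_degree",
    "contour": "face_orientation",
}


def detected_elements(feature_amounts):
    """Return the set of facial elements judged as detected in a frame,
    based on which feature amounts were actually calculated."""
    return {elem for elem, feat in REQUIRED_FEATURE.items()
            if feature_amounts.get(feat) is not None}
```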
- the facial element identification processing by the facial element determination unit 5 is not limited to the above example.
- For example, the face element determination unit 5 may identify the facial elements detected from the captured image using an image processing technique such as template matching. In that case, the captured image is compared with an image for similarity determination, such as an image of the facial element stored in advance in a storage unit or the like, allowing the face element determination unit 5 to identify the facial elements detected from the captured image.
- Further, when it is detected that the occupant is wearing clothing or an accessory that covers part of the face, the face element determination unit 5 may judge that the facial elements that may be hidden by it were not detected. For example, if it is detected that the occupant is wearing a mask, the nose and mouth may be identified as not detected; if it is detected that the occupant is wearing sunglasses, the eyes may be identified as not detected.
- FIG. 4 is an explanatory diagram showing an example of a face element identification result of the drowsiness estimation apparatus 10 according to the first embodiment.
- FIG. 4 shows a recording example of the result of identifying, for each frame from which a captured image was acquired (frames N1 to N5), whether the occupant's left eye, right eye, nose, mouth, and contour were detected by the face detection processing.
- Facial elements judged by the face element determination unit 5 as detected by the face detection processing are indicated as OK, and facial elements judged as not detected are indicated as NG. From frame N1 to frame N5, time advances in the order N1, N2, N3, N4, N5, and the interval between frames is arbitrary.
- FIG. 4 shows an example in which the occupant's left eye, right eye, nose, mouth, and contour are all detected by the face detection processing of the face detection unit 11 in frames N1 and N2, while the left eye and right eye are not detected in frames N3 to N5. In such a case, the left and right eyes cannot be used for the occupant's drowsiness estimation processing in frames N3 to N5. That is, for a captured image in which some facial elements were not detected, the undetected facial elements cannot be used for the drowsiness estimation processing.
- The face element determination unit 5 uses the results of the facial element identification processing to judge whether some of the occupant's plurality of facial elements were not detected in a specific frame. For example, when a preset facial element among the plurality of facial elements is not detected by the face detection processing, the face element determination unit 5 judges that some of the plurality of facial elements were not detected. Conversely, when all preset facial elements are detected, it judges that all of the plurality of facial elements were detected.
- Alternatively, when at least one of a plurality of preset facial elements is not detected by the face detection processing, the face element determination unit 5 may judge that some of the plurality of facial elements were not detected; when all of the preset facial elements are detected, it judges that all of the plurality of facial elements were detected.
- Here, the preset facial elements are one or more facial elements used in the drowsiness estimation processing, such as the left eye, the right eye, or the mouth. As described above, "all facial elements" includes not only every facial element possessed by the occupant but also a plurality of facial elements selected from them. The face element determination unit 5 then outputs to the drowsiness estimation unit 12 the judgment result indicating whether some of the plurality of facial elements were not detected in the specific frame.
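The per-frame judgment against preset facial elements, as in the FIG. 4 recording example, can be sketched as follows; the element names and the particular preset are illustrative assumptions:

```python
# Preset facial elements used for drowsiness estimation (an assumption
# for illustration; the patent only gives examples such as the eyes
# and mouth).
PRESET = {"left_eye", "right_eye", "mouth"}


def judge_frames(frames):
    """For each frame, judge whether some preset facial elements were
    not detected (True = partial non-detection, as in frames N3 to N5
    of FIG. 4)."""
    return {name: not PRESET <= set(detected)
            for name, detected in frames.items()}
```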
- Next, as an example of estimating the drowsiness of the occupant detected from the captured image, the drowsiness estimation unit 12 will be described as calculating a drowsiness level.
- The drowsiness level is an index value that grades the occupant's drowsiness in stages from the awake state to a strongly drowsy state. For example, the drowsiness level is 1 when the occupant is awake and 9 when the occupant is strongly drowsy, and the level rises from 1 toward 9 as the occupant becomes drowsier.
- the drowsiness estimation unit 12 uses the face information acquired from the face detection unit 11 to calculate the drowsiness level of the passenger. For example, when the drowsiness estimation unit 12 acquires the degree of eye openness, which is a feature amount, as face information, the drowsiness level of the occupant is calculated to increase as the degree of eye openness decreases. This is because the occupant's eye-opening degree tends to decrease as the drowsiness increases. In this case, for example, the drowsiness estimation unit 12 may set a plurality of thresholds for the degree of eye openness, determine whether the degree of eye openness is less than each threshold, and calculate the drowsiness level.
- the drowsiness estimation unit 12 acquires the degree of opening, which is a feature amount, as face information, the drowsiness level is calculated higher as the degree of opening increases. This is because if the degree of opening is large, the occupant may have felt drowsy and yawned.
- A plurality of thresholds may likewise be provided for the degree of mouth opening, and the drowsiness estimation unit 12 may calculate the drowsiness level by determining whether the degree of opening exceeds each threshold.
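A minimal sketch of the threshold-based level calculation described above; the threshold values and the resulting 1-to-5 scale are illustrative assumptions (the patent's drowsiness level ranges from 1 to 9), with a lower eye-opening degree, or a higher mouth-opening degree, yielding a higher level:

```python
def level_from_eye_opening(eye_opening, thresholds=(0.30, 0.25, 0.20, 0.15)):
    """Map eye-opening degree to a drowsiness level: the more
    thresholds the degree falls below, the higher the level."""
    level = 1
    for t in thresholds:
        if eye_opening < t:
            level += 1
    return level


def level_from_mouth_opening(mouth_opening, thresholds=(0.2, 0.4, 0.6, 0.8)):
    """Map mouth-opening degree to a drowsiness level: a wider opening
    (possible yawning) yields a higher level."""
    level = 1
    for t in thresholds:
        if mouth_opening > t:
            level += 1
    return level
```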
- When the drowsiness estimation unit 12 acquires the frequency of blinking, which is a feature amount, as face information, it calculates the occupant's drowsiness level to be higher as the frequency of blinking decreases, because the occupant's blinking frequency tends to decrease as drowsiness increases.
- A plurality of thresholds may likewise be provided for the frequency of blinking, and the drowsiness estimation unit 12 may calculate the drowsiness level by determining whether the frequency is less than each threshold.
- When the drowsiness estimation unit 12 acquires a plurality of pieces of face information, it calculates the drowsiness level comprehensively using them. For example, when it acquires the degree of eye opening and the degree of mouth opening as face information from the face detection unit 11, it calculates the drowsiness level using both.
- For example, the drowsiness estimation unit 12 may calculate one drowsiness level from the degree of eye opening and another from the degree of mouth opening, and take their average as the overall drowsiness level.
- Alternatively, it may compare the drowsiness level calculated from the degree of eye opening with the one calculated from the degree of mouth opening and adopt the higher of the two as the overall drowsiness level.
- Conversely, it may adopt the lower of the two drowsiness levels as the overall drowsiness level.
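The three combination strategies above (average, higher, lower) can be sketched as a single helper; the function name and strategy labels are illustrative:

```python
def combine_levels(levels, strategy="average"):
    """Combine per-feature drowsiness levels into one overall level
    using one of the strategies described above."""
    if strategy == "average":
        return sum(levels) / len(levels)
    if strategy == "max":   # adopt the higher (more cautious) estimate
        return max(levels)
    if strategy == "min":   # adopt the lower estimate
        return min(levels)
    raise ValueError(f"unknown strategy: {strategy}")
```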
- the drowsiness estimation unit 12 acquires a plurality of pieces of face information from the face detection unit 11, it outputs the calculated overall drowsiness level to the vehicle-side control device 200 or the like as the drowsiness estimation result.
- the drowsiness estimation process by the drowsiness estimation unit 12 is not limited to the above example, and various known algorithms can be used.
- the drowsiness estimation unit 12 uses the face information acquired from the face detection unit 11 without calculating the drowsiness level, and estimates which drowsiness state the occupant is in among the drowsiness states classified in stages.
- When the drowsiness estimation unit 12 acquires the position information of the feature points as face information from the face detection unit 11, the drowsiness estimation unit 12 may calculate the feature amounts from the position information of the feature points, similarly to the feature amount calculation unit 4, and perform the drowsiness estimation process described above using those feature amounts.
- The drowsiness estimation unit 12 performs different drowsiness estimation processes depending on the result of the determination by the face element determination unit 5 as to whether or not some of the plurality of face elements were detected in the face detection process.
- If some face elements among the plurality of face elements are not detected in the face detection process in a specific frame, the drowsiness estimating unit 12 according to the present embodiment performs the drowsiness estimation process of the occupant using the face information obtained by the face detection process performed before the specific frame.
- Hereinafter, the face detection process performed before the specific frame will be referred to as the first face detection process, and the face detection process in the specific frame will be referred to as the second face detection process. That is, the second face detection process is a process performed after the first face detection process.
- One or more pieces of face information obtained by the first face detection process will be referred to as first face information, and one or more pieces of face information obtained by the second face detection process will be referred to as second face information.
- the interval between the first face detection process and the second face detection process is arbitrary.
- When no distinction is needed, the first face information and the second face information may each be referred to simply as "face information".
- The drowsiness estimation unit 12 calculates the drowsiness level of the occupant from the first face information related to the part of the face elements obtained by the first face detection process and the second face information related to the other face elements obtained by the second face detection process.
- Here, the other face elements are one or more face elements different from the part of the plurality of face elements.
- The drowsiness estimation process by the drowsiness estimation unit 12 will be described by taking as an example a case in which the occupant wears sunglasses, so that the occupant's mouth is detected while the occupant's eyes are not detected in the second face detection process.
- In this case, the face element determination unit 5 determines that some of the plurality of face elements were not detected in the second face detection process, and outputs the determination result to the drowsiness estimation unit 12.
- The drowsiness estimation unit 12 then estimates the drowsiness of the occupant using the degree of eye opening from the first face information related to the part of the face elements (the eyes) obtained by the first face detection process and the degree of mouth opening from the second face information related to the other face elements (the mouth) obtained by the second face detection process.
- Conversely, when the occupant's mouth is not detected in the second face detection process, the face element determination unit 5 likewise outputs to the drowsiness estimation unit 12 a determination result indicating that some face elements among the plurality of face elements were not detected in the second face detection process.
- In that case, the drowsiness estimating unit 12 estimates the drowsiness of the occupant using the degree of mouth opening from the first face information related to the part of the face elements (the mouth) obtained by the first face detection process and the degree of eye opening from the second face information related to the other face elements (the eyes) obtained by the second face detection process. Note that the drowsiness estimation process itself is the same as the process described above, so a detailed description thereof is omitted.
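The fallback described in these examples, using current-frame information for detected face elements and previously stored information for undetected ones, might be sketched as follows. The dictionary layout, element names, and function name are illustrative assumptions, not part of the embodiment.

```python
# Sketch of the first/second face-detection fallback.
# The per-element face-information records are illustrative assumptions.

def select_face_info(second_info, stored_first_info, required_elements):
    """For each required face element, use the second (current-frame)
    face information if the element was detected; otherwise fall back
    to the first face information recorded from an earlier frame.
    An element never detected in either process maps to None."""
    selected = {}
    for element in required_elements:
        if element in second_info:            # detected in the specific frame
            selected[element] = second_info[element]
        else:                                 # e.g. eyes hidden by sunglasses
            selected[element] = stored_first_info.get(element)
    return selected

# Sunglasses example: eyes missing in the second process, mouth present.
first = {"eyes": {"eye_opening": 0.6}, "mouth": {"mouth_opening": 0.1}}
second = {"mouth": {"mouth_opening": 0.4}}
info = select_face_info(second, first, ["eyes", "mouth"])
print(info["eyes"])   # falls back to the first face information
print(info["mouth"])  # taken from the second face information
```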
- the drowsiness estimation result by the drowsiness estimation unit 12 may be output to the vehicle-side control device 200 .
- For example, the vehicle-side control device 200 may control the air conditioner so as to relieve the occupant's drowsiness, or may control a notification unit so as to issue a warning to the occupant, in accordance with the drowsiness estimation result.
- FIG. 5 is a flow chart showing an operation example of the drowsiness estimation device 10 according to the first embodiment.
- the drowsiness estimation device 10 starts operating, for example, when the vehicle information acquisition unit 2 acquires a signal indicating that the vehicle engine has started from the vehicle-side control device 200 .
- Although the flowchart of FIG. 5 does not show a process for terminating the operation of the drowsiness estimation device 10, the drowsiness estimation device 10 terminates its operation when, for example, the vehicle information acquisition unit 2 acquires from the vehicle-side control device 200 a signal indicating that the vehicle engine has stopped.
- First, the image acquisition unit 1 of the drowsiness estimation device 10 acquires a captured image from the imaging device 20 (ST101). Then, the face detection unit 11 of the drowsiness estimation device 10 performs the first face detection process using the captured image (ST102). The first face detection process is a process in which the feature point detection unit 3 and the feature amount calculation unit 4 of the face detection unit 11 detect the feature points of the facial elements of the occupant in the captured image and calculate the feature amounts. Then, the face element determination unit 5 determines whether or not the facial elements of the occupant have been detected by the first face detection process in the processes of ST103 and ST105, which will be described below.
- First, the face element determination unit 5 determines whether or not some of the multiple facial elements of the occupant have been detected by the first face detection process (ST103). For example, the face element determination unit 5 acquires the position information of the feature points from the feature point detection unit 3, and identifies whether or not a facial element has been detected based on whether or not a feature point was detected at a position where that facial element can exist. Note that, in the example of FIG. 5, the part of the face elements detected by the first face detection process is shown as the face element A1. Here, the face element A1 is one or more set face elements among the plurality of face elements of the occupant.
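The determination in ST103, where a face element counts as detected when a feature point is found at a position where that element can exist, could be sketched like this. The rectangular region, the point format, and all names are assumptions for illustration only.

```python
# Illustrative presence check for a face element based on feature points.
# The region bounds and the normalized coordinate system are assumptions.

def element_detected(points, region):
    """Return True if at least one feature point (x, y) lies inside
    the rectangular region (x_min, y_min, x_max, y_max) where the
    face element can exist."""
    x_min, y_min, x_max, y_max = region
    return any(x_min <= x <= x_max and y_min <= y <= y_max for x, y in points)

# Hypothetical eye region in a normalized face image.
EYE_REGION = (0.2, 0.2, 0.8, 0.45)
print(element_detected([(0.35, 0.3), (0.6, 0.31)], EYE_REGION))  # True
print(element_detected([(0.5, 0.8)], EYE_REGION))                # False
```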
- If the face element determination unit 5 identifies that the face element A1 has been detected by the first face detection process (ST103; YES), it records in the storage unit the identification result indicating that the face element A1 has been detected by the first face detection process, together with the first face information of the face element A1 (ST104).
- the first face information of the face element A1 recorded in the storage unit is, for example, position information of feature points possessed by the face element A1, feature amounts related to the face element A1, and the like.
- If the face element determination unit 5 determines that the face element A1 has not been detected by the first face detection process (ST103; NO), the operation of the drowsiness estimation device 10 proceeds to the process of ST105.
- the facial element determination unit 5 determines whether or not other facial elements have been detected among the multiple facial elements of the occupant in the first face detection process (ST105).
- Other facial elements are one or a plurality of set facial elements that are different from some facial elements, that is, the facial element A1.
- another face element detected by the first face detection process is indicated as face element B1.
- Here, the part of the face elements is referred to as the face element A1, and the other face elements different from the face element A1 are referred to as the face element B1. The relationship between the face element A1 and the face element B1 is such that when one of them corresponds to the part of the face elements, the other corresponds to the other face elements.
- If the face element determination unit 5 identifies that the face element B1 has been detected in the first face detection process (ST105; YES), it records in the storage unit the identification result indicating that the face element B1 has been detected in the first face detection process, together with the first face information of the face element B1 (ST106).
- the first face information of the face element B1 recorded in the storage unit is, for example, position information of feature points possessed by the face element B1, feature amounts related to the face element B1, and the like.
- The face information of the face elements A1 and B1 may be recorded in the storage unit together with information indicating the time at which the first face detection process was performed, for example as time-series data.
- the drowsiness estimation unit 12 of the drowsiness estimation device 10 may perform a drowsiness level calculation process using the first face information obtained from the face elements A1 and B1.
- the operation of the drowsiness estimation device 10 proceeds to the process of ST107.
- the image acquisition unit 1 acquires a captured image from the imaging device 20 (ST107). Then, the face detection unit 11 of the drowsiness estimation device 10 uses the captured image to perform a second face detection process (ST108).
- The second face detection process is a process performed after the first face detection process. Like the first face detection process, it is a process of detecting the feature points of the facial elements of the occupant in the captured image and calculating the feature amounts.
- Next, in the processes of ST109, ST110, and ST114, which will be described below, the face element determination unit 5 identifies which of the occupant's face elements were detected by the second face detection process. A detailed description of processing similar to that of ST101 to ST106 described above is omitted.
- the facial element determination unit 5 determines whether or not some of the facial elements of the occupant have been detected (ST109).
- some face elements detected by the second face detection process are shown as face elements A2.
- the facial element A2 is the same facial element as the facial element A1.
- If the face element determination unit 5 determines that the face element A2 has been detected by the second face detection process (ST109; YES), the operation of the drowsiness estimation device 10 proceeds to the process of ST110. In this case, as in the process of ST104, the face element determination unit 5 may record in the storage unit the result indicating that the face element A2 has been detected by the second face detection process, together with the second face information related to the face element A2.
- the facial element determination unit 5 determines whether or not other set facial elements are detected among the multiple facial elements of the occupant (ST110).
- another face element detected by the second face detection process is indicated as face element B2.
- the facial element B2 is the same facial element as the facial element B1.
- As with the face elements A1 and B1, the relationship between the face element A2 and the face element B2 is such that when one of them corresponds to the part of the face elements, the other corresponds to the other face elements.
- If the face element determination unit 5 determines that the face element B2 has been detected in the second face detection process (ST110; YES), it determines that both the face element A2 and the face element B2, that is, all of the plurality of face elements, have been detected in the second face detection process, and outputs the determination result to the drowsiness estimation unit 12.
- When the drowsiness estimation unit 12 acquires a determination result indicating that all of the plurality of face elements have been detected in the second face detection process, it performs the drowsiness estimation process of the occupant using the second face information obtained by the second face detection process (ST111). That is, the drowsiness estimation unit 12 calculates the drowsiness level of the occupant using the second face information obtained from the face element A2 and the face element B2. The occupant's drowsiness level calculated by the drowsiness estimation unit 12 may be output to the vehicle-side control device 200. Then, the operation of the drowsiness estimation device 10 proceeds to the process of ST101.
- If the face element determination unit 5 determines that the face element B2 was not detected in the second face detection process (ST110; NO), it determines that the face element A2 was detected but the face element B2 was not, that is, that some face elements among the plurality of face elements were not detected in the second face detection process, and outputs the determination result to the drowsiness estimation unit 12.
- When the drowsiness estimation unit 12 acquires a determination result indicating that some face elements among the plurality of face elements were not detected in the second face detection process, it checks whether or not the first face information of the undetected face element was obtained by the first face detection process.
- Here, the face element not detected in the second face detection process is the face element B2. That is, the drowsiness estimation unit 12 confirms whether or not the first face information of the face element B1 was obtained by the first face detection process, that is, whether or not the first face information of the face element B1 is recorded in the storage unit (ST112).
- When the drowsiness estimation unit 12 confirms that the first face information of the face element B1 is recorded in the storage unit (ST112; YES), it performs the drowsiness estimation process of the occupant using the first face information obtained by the first face detection process together with the second face information obtained by the second face detection process. That is, the drowsiness estimation unit 12 performs the drowsiness estimation process using the first face information of the face element B1 obtained by the first face detection process and the second face information of the face element A2 obtained by the second face detection process (ST113).
- In this way, the drowsiness estimation unit 12 estimates the drowsiness of the occupant using the face information related to the part of the face elements obtained by the first face detection process together with the face information related to the other face elements obtained by the second face detection process.
- the occupant's drowsiness level calculated by drowsiness estimation section 12 in the process of ST 113 may be output to vehicle-side control device 200 . Then, the operation of drowsiness estimation device 10 proceeds to the process of ST101.
- On the other hand, when the drowsiness estimation unit 12 confirms that the first face information of the face element B1 is not recorded in the storage unit (ST112; NO), the operation of the drowsiness estimation device 10 proceeds to the process of ST101.
- As described above, the interval between the first face detection process and the second face detection process is arbitrary. The case where the first face information of the face element B1 is not recorded in the storage unit therefore includes the case where the face element B1 was never detected in any face detection process before the second face detection process, and the case where the face element B1 was not detected during a set period.
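The "set period" check mentioned above, under which stored first face information is usable only if the element was detected recently enough, might look like the following sketch. The function name and the value of the period are illustrative assumptions.

```python
# Illustrative validity check for stored first face information.
# max_age_s (the "set period") is an assumed value.

def first_info_usable(last_detected_time, current_time, max_age_s=5.0):
    """Return True if the stored first face information for an element
    is recent enough to be carried into the drowsiness estimation."""
    if last_detected_time is None:   # element never detected before
        return False
    return (current_time - last_detected_time) <= max_age_s

print(first_info_usable(10.0, 12.5))   # True: 2.5 s old
print(first_info_usable(10.0, 20.0))   # False: older than the set period
print(first_info_usable(None, 20.0))   # False: never detected
```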
- FIG. 5 shows an example in which the same processing is performed in the processing of ST110 and the processing of ST114.
- the facial element determination unit 5 determines whether or not another facial element, that is, the facial element B2, has been detected among the multiple facial elements of the occupant in the second face detection process (ST114).
- If the face element determination unit 5 determines that the face element B2 was not detected in the second face detection process (ST114; NO), the face element determination unit 5 outputs the determination result to the drowsiness estimation unit 12, and the operation of the drowsiness estimation device 10 proceeds to the process of ST101.
- When the drowsiness estimation unit 12 acquires an identification result indicating that neither the face element A2 nor the face element B2 was detected in the second face detection process, it may perform the drowsiness estimation process of the occupant using at least one of the first face information of the face element A1 and the first face information of the face element B1.
- If the face element determination unit 5 determines that the face element B2 was detected in the second face detection process (ST114; YES), it determines that the face element B2 was detected but the face element A2 was not, that is, that some face elements among the plurality of face elements were not detected in the second face detection process, and outputs the determination result to the drowsiness estimation unit 12.
- When the drowsiness estimation unit 12 acquires a determination result indicating that some face elements among the plurality of face elements were not detected in the second face detection process, it checks whether or not the first face information of the undetected face element was obtained by the first face detection process.
- Here, the face element not detected in the second face detection process is the face element A2. That is, the drowsiness estimation unit 12 confirms whether or not the first face information of the face element A1 was obtained by the first face detection process, that is, whether or not the first face information of the face element A1 is recorded in the storage unit (ST115).
- When the drowsiness estimation unit 12 confirms that the first face information of the face element A1 is recorded in the storage unit (ST115; YES), it performs the drowsiness estimation process of the occupant using the first face information obtained by the first face detection process together with the second face information obtained by the second face detection process. That is, the drowsiness estimation unit 12 performs the drowsiness estimation process using the first face information of the face element A1 obtained by the first face detection process and the second face information of the face element B2 obtained by the second face detection process (ST116).
- In this way, the drowsiness estimation unit 12 estimates the drowsiness of the occupant using the face information related to the other face elements obtained by the first face detection process together with the face information related to the part of the face elements obtained by the second face detection process.
- the occupant's drowsiness level calculated by drowsiness estimation section 12 in the process of ST 116 may be output to vehicle-side control device 200 . Then, the operation of drowsiness estimation device 10 proceeds to the process of ST101.
- On the other hand, when the drowsiness estimation unit 12 confirms that the first face information of the face element A1 is not recorded in the storage unit (ST115; NO), the operation of the drowsiness estimation device 10 proceeds to the process of ST101.
- As described above, the interval between the first face detection process and the second face detection process is arbitrary. The case where the detection result of the face element A1 is not recorded in the storage unit therefore includes the case where the face element A1 was never detected in any face detection process before the second face detection process, and the case where the face element A1 was not detected during a set period.
- Note that when a third face detection process is performed after the second face detection process and the face elements detected by the second face detection process and the face elements detected by the third face detection process are the same, the drowsiness estimation process after the third face detection process may estimate drowsiness using the other face elements. That is, if the part of the face elements of the occupant is not detected by either the second face detection process or the third face detection process, the drowsiness estimation unit 12 may estimate drowsiness using the third face information obtained from the other face elements detected by the third face detection process.
- For example, the drowsiness estimation unit 12 estimates drowsiness using the first face information obtained from the eyes, nose, and mouth in the first face detection process. Next, when the eyes are not detected in the second face detection process because the occupant has put on sunglasses, drowsiness is estimated using the second face information obtained in the second face detection process together with the first face information obtained in the first face detection process. Then, if the occupant is still wearing sunglasses at the time of the third face detection process, the drowsiness estimation unit 12 estimates drowsiness using the third face information obtained from the nose and mouth in the third face detection process.
- As described above, the drowsiness estimation unit 12 estimates the drowsiness of the occupant using the face information of the detected face elements. Further, if some of the face elements of the occupant are not detected in the second face detection process, the drowsiness estimation unit 12 estimates the drowsiness of the occupant using the face information of the part of the face elements obtained by the first face detection process performed before the second face detection process and the face information of the other face elements obtained by the second face detection process.
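The overall flow of FIG. 5 (ST101 to ST116) can be summarized as a single loop iteration in the following sketch. All names, the two-element set {"A", "B"}, and the callback signatures are illustrative assumptions, not the embodiment's actual interfaces.

```python
# Illustrative one-iteration sketch of the FIG. 5 flow.
# `detect` stands in for the face detection process and returns a dict
# of face information for the elements detected in the current image;
# `estimate` computes a drowsiness level from a dict of face information.

def drowsiness_loop_step(detect, estimate, storage):
    info = detect()                           # ST107-ST108
    missing = {"A", "B"} - set(info)          # ST109/ST110/ST114
    if not missing:                           # all elements detected
        storage.update(info)
        return estimate(info)                 # ST111
    usable = {e: storage[e] for e in missing if e in storage}  # ST112/ST115
    if len(usable) < len(missing):
        return None                           # back to ST101, no estimate
    storage.update(info)
    return estimate({**usable, **info})       # ST113/ST116

# Usage: element A is taken over from an earlier frame's stored value.
storage = {"A": 0.6}
result = drowsiness_loop_step(lambda: {"B": 0.4},
                              lambda d: sum(d.values()) / len(d),
                              storage)
print(result)  # 0.5
```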
- FIG. 6 is a diagram showing a hardware configuration example of the drowsiness estimation device 10 according to the first embodiment.
- The functions of the image acquisition unit 1, the vehicle information acquisition unit 2, the feature point detection unit 3, the feature amount calculation unit 4, the face element determination unit 5, the face detection unit 11, the drowsiness estimation unit 12, and the storage unit in the drowsiness estimation device 10 are implemented by a processing circuit.
- The image acquisition unit 1, the vehicle information acquisition unit 2, the feature point detection unit 3, the feature amount calculation unit 4, the face element determination unit 5, the face detection unit 11, the drowsiness estimation unit 12, and the storage unit of the drowsiness estimation device 10 may be implemented by a processing circuit 10a that is dedicated hardware as shown in FIG. 6A, or by a processor 10b that executes a program stored in a memory 10c as shown in FIG. 6B.
- As shown in FIG. 6A, when the functions of the image acquisition unit 1, the vehicle information acquisition unit 2, the feature point detection unit 3, the feature amount calculation unit 4, the face element determination unit 5, the face detection unit 11, the drowsiness estimation unit 12, and the storage unit are implemented by dedicated hardware, the processing circuit 10a may be, for example, a single circuit, a composite circuit, a programmed processor, a parallel programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a combination of these.
- The functions of the image acquisition unit 1, the vehicle information acquisition unit 2, the feature point detection unit 3, the feature amount calculation unit 4, the face element determination unit 5, the face detection unit 11, the drowsiness estimation unit 12, and the storage unit may each be implemented by a separate processing circuit, or the functions of the units may be collectively implemented by one processing circuit.
- As shown in FIG. 6B, when the functions of the image acquisition unit 1, the vehicle information acquisition unit 2, the feature point detection unit 3, the feature amount calculation unit 4, the face element determination unit 5, the face detection unit 11, the drowsiness estimation unit 12, and the storage unit are implemented by the processor 10b, the function of each unit is implemented by software, firmware, or a combination of software and firmware.
- Software or firmware is written as a program and stored in the memory 10c.
- The processor 10b implements the functions of the image acquisition unit 1, the vehicle information acquisition unit 2, the feature point detection unit 3, the feature amount calculation unit 4, the face element determination unit 5, the face detection unit 11, the drowsiness estimation unit 12, and the storage unit by reading out and executing the programs stored in the memory 10c.
- That is, the image acquisition unit 1, the vehicle information acquisition unit 2, the feature point detection unit 3, the feature amount calculation unit 4, the face element determination unit 5, the face detection unit 11, the drowsiness estimation unit 12, and the storage unit are provided with the memory 10c for storing programs that, when executed by the processor 10b, result in the execution of the steps shown in FIG. 5.
- It can also be said that these programs cause a computer to execute the procedures or methods of the image acquisition unit 1, the vehicle information acquisition unit 2, the feature point detection unit 3, the feature amount calculation unit 4, the face element determination unit 5, the face detection unit 11, the drowsiness estimation unit 12, and the storage unit.
- the processor 10b is, for example, a CPU (Central Processing Unit), a processing device, an arithmetic device, a processor, a microprocessor, a microcomputer, or a DSP (Digital Signal Processor).
- The memory 10c may be a non-volatile or volatile semiconductor memory such as a RAM (Random Access Memory), a ROM (Read Only Memory), a flash memory, an EPROM (Erasable Programmable ROM), or an EEPROM (Electrically Erasable Programmable ROM), a magnetic disk such as a hard disk or a flexible disk, or an optical disk such as a mini disc, a CD (Compact Disc), or a DVD (Digital Versatile Disc).
- Note that some of the functions of the image acquisition unit 1, the vehicle information acquisition unit 2, the feature point detection unit 3, the feature amount calculation unit 4, the face element determination unit 5, the face detection unit 11, the drowsiness estimation unit 12, and the storage unit may be implemented by dedicated hardware and others by software or firmware. In this way, the processing circuit 10a in the drowsiness estimation device 10 can implement each of the functions described above by hardware, software, firmware, or a combination thereof. The functions of at least some of the image acquisition unit 1, the vehicle information acquisition unit 2, the feature point detection unit 3, the feature amount calculation unit 4, the face element determination unit 5, the face detection unit 11, the drowsiness estimation unit 12, and the storage unit may also be executed by an external server.
- As described above, the drowsiness estimation device 10 includes the image acquisition unit 1 that acquires a captured image from the imaging device 20 that captures an image of an occupant in the vehicle, the face detection unit 11 that performs face detection processing for detecting a plurality of face elements of the occupant from the captured image, and the drowsiness estimation unit 12 that estimates the drowsiness of the occupant detected from the captured image.
- After the first face detection process, the second face detection process is performed, and if the part of the face elements of the occupant is not detected by the second face detection process, the drowsiness estimation unit 12 estimates the drowsiness of the occupant using the first face information obtained from the part of the face elements detected by the first face detection process and the second face information obtained from the other face elements, different from the part of the face elements, detected by the second face detection process. Therefore, the reliability of the estimation result of the drowsiness of the occupant can be ensured even when part of the facial elements of the occupant is not detected.
- Further, if some of the face elements of the occupant are not detected by the second face detection process, the drowsiness estimating unit 12 can estimate the drowsiness of the occupant using, in addition to the first face information of the part of the face elements detected by the first face detection process and the second face information of the other face elements detected by the second face detection process, the first face information of the other face elements detected by the first face detection process. By doing so, the amount of face information used for estimating the drowsiness of the occupant increases, so the reliability of the estimation result of the drowsiness of the occupant can be ensured.
- A drowsiness estimation device 10 according to the present embodiment includes the image acquisition unit 1 that acquires a captured image, the face detection unit 11 that performs face detection processing for detecting a plurality of facial elements of the occupant from the captured image, and the drowsiness estimation unit 12 that estimates the drowsiness of the occupant detected from the captured image.
- It differs from the first embodiment in that the drowsiness of the occupant is estimated using the drowsiness level estimated from the first face information obtained by the first face detection process and the second face information obtained from the other face elements detected by the second face detection process.
- the same reference numerals are given to the same components as in the first embodiment, and the description thereof is omitted.
- Estimating the drowsiness of the occupant using a plurality of facial elements, such as the left eye, the right eye, and the mouth, improves the estimation accuracy of drowsiness compared with estimating the drowsiness of the occupant using a single facial element.
- However, if the drowsiness estimation process is not performed when some face elements among the plurality of face elements are not detected in a specific frame, the state of drowsiness of the occupant becomes indefinite, and the reliability of the drowsiness estimation result may decrease.
- Therefore, when some face elements are no longer detected, the drowsiness estimation device 10 according to the present embodiment takes over the drowsiness level calculated before the detection became impossible, and estimates the drowsiness of the occupant using the face information obtained by the face detection process in the specific frame.
- Next, the drowsiness estimation process by the drowsiness estimation unit 12 will be described. Note that the drowsiness estimation process when the face element determination unit 5 determines that all of the plurality of face elements have been detected in the second face detection process is the same as in the first embodiment, so a detailed description thereof is omitted.
- When the part of the face elements is not detected by the second face detection process, the drowsiness estimating unit 12 calculates an overall drowsiness level from the drowsiness level calculated from the first face information obtained by the first face detection process and the second face information related to the other face elements obtained by the second face detection process.
- the drowsiness estimation unit 12 calculates the drowsiness level of the passenger from the first face information obtained by the first face detection process.
- the drowsiness level calculated from the first face information obtained by the first face detection process will be referred to as the first drowsiness level.
- the drowsiness estimation unit 12 calculates the amount of change in drowsiness level with respect to the first drowsiness level from the second face information related to other face elements obtained by the second face detection process. Furthermore, the drowsiness estimation unit 12 calculates a comprehensive drowsiness level from the first drowsiness level and the amount of change in the drowsiness level with respect to the first drowsiness level.
- The amount of change in the drowsiness level with respect to the first drowsiness level is an index value indicating how the drowsiness level has changed relative to the first drowsiness level; hereinafter, it is simply referred to as the amount of change.
- In the following explanation of the amount of change, an example is used in which the face detection unit 11 detects the occupant's left eye, right eye, and mouth in the first face detection process, while in the second face detection process the left eye and right eye are not detected but the mouth is detected, for example because the occupant has put on sunglasses.
- First, the drowsiness estimation unit 12 acquires, as the first face information, the degree of eye opening, the degree of mouth opening, and the like from the face detection unit 11. The drowsiness estimation unit 12 then calculates the first drowsiness level using the degree of eye opening, the degree of mouth opening, and the like, and records the first drowsiness level in the storage unit of the drowsiness estimation device 10.
- Next, the drowsiness estimation unit 12 acquires the first drowsiness level from the face detection unit 11 or the storage unit. The drowsiness estimation unit 12 then acquires from the face detection unit 11, as the second face information, the face information related to the face element detected by the second face detection process.
- In this example, the face element detected by the second face detection process is the mouth, so the information the drowsiness estimation unit 12 acquires as the second face information is the degree of mouth opening and the like.
- Next, the drowsiness estimation unit 12 calculates the amount of change. For example, if the first drowsiness level is 5 and the degree of mouth opening acquired as the second face information indicates that the occupant has taken an action associated with drowsiness, such as yawning, the amount of change is calculated as, for example, +1. On the other hand, if the degree of mouth opening indicates that the occupant has not taken such an action, the drowsiness estimation unit 12 calculates the amount of change as, for example, -1.
- Then, the drowsiness estimation unit 12 calculates the occupant's overall drowsiness level from the first drowsiness level and the amount of change. In the above example, if the first drowsiness level is 5 and the amount of change is +1, the overall drowsiness level of the occupant is 6. Note that when the amount of change is calculated as 0, the overall drowsiness level is equal to the first drowsiness level.
- The drowsiness estimation unit 12 may calculate the amount of change in the same manner as in the drowsiness level calculation process, for example by setting a threshold value for a feature amount serving as face information and comparing the feature amount with that threshold. For example, the drowsiness estimation unit 12 may calculate the amount of change so that the overall drowsiness level increases when the degree of eye opening is smaller than the threshold, and so that the overall drowsiness level decreases when the degree of eye opening is larger than the threshold. In other words, when the facial state of the occupant from which the second face information was obtained appears drowsier than the facial state of the occupant from which the first face information was obtained, the drowsiness estimation unit 12 may calculate the amount of change so that the drowsiness level rises; conversely, when it appears less drowsy, the drowsiness estimation unit 12 may calculate the amount of change so that the drowsiness level falls.
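The threshold comparison and the "first level plus change amount" arithmetic described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the function names, the threshold value, the ±1 step size, and the assumed 1-10 drowsiness scale are all invented to match the worked example in the text (first drowsiness level 5, amount of change +1, overall level 6).

```python
def change_amount(eye_opening, eye_threshold=0.5):
    """Compare a feature amount (here, degree of eye opening) against a
    threshold and return the amount of change in the drowsiness level.
    A smaller eye opening than the threshold suggests increased drowsiness."""
    if eye_opening < eye_threshold:
        return +1   # face appears drowsier than when the first level was set
    elif eye_opening > eye_threshold:
        return -1   # face appears more awake
    return 0        # no observable difference

def overall_drowsiness(first_level, delta, lo=1, hi=10):
    """Overall drowsiness level = first drowsiness level + amount of change,
    clamped to an assumed 1-10 scale."""
    return max(lo, min(hi, first_level + delta))

# Worked example from the text: first level 5, change +1 -> overall level 6
print(overall_drowsiness(5, change_amount(0.3)))  # -> 6
```

A change amount of 0 leaves the overall level equal to the first drowsiness level, matching the note in the text.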
- As described above, the drowsiness estimation unit 12 takes over the drowsiness level from before some of the occupant's face elements could no longer be detected and estimates the occupant's drowsiness, so the estimated drowsiness state does not become unstable, and the reliability of the drowsiness estimation result can be ensured.
- FIG. 7 is a flow chart showing an operation example of the drowsiness estimation device 10 according to the second embodiment.
- steps that are the same as the processing of the drowsiness estimation device 10 according to Embodiment 1 are denoted by the same reference numerals as those shown in FIG. 5, and descriptions thereof are omitted or simplified.
- the drowsiness estimation device 10 starts operating, for example, when the vehicle information acquisition unit 2 acquires a signal indicating that the vehicle engine has started from the vehicle-side control device 200 .
- The drowsiness estimation device 10 ends its operation when it acquires from the vehicle-side control device 200 a signal indicating that the vehicle engine has stopped.
- the image acquisition unit 1 acquires a captured image from the imaging device 20 (ST101). Then, the face detection section 11 performs first face detection processing using the captured image (ST102).
- Next, the face element determination unit 5 identifies the face elements of the occupant detected by the first face detection process and determines whether all of the occupant's face elements have been detected. In the following, some of the face elements are denoted as face element A1 and the remaining face elements as face element B1. That is, the face element determination unit 5 identifies the face elements of the occupant detected by the first face detection process and determines whether both face element A1 and face element B1 have been detected (ST201).
- When the face element determination unit 5 determines that face element A1 and face element B1 have not both been detected by the first face detection process, that is, at least one of face element A1 and face element B1 has not been detected (ST201; NO), it outputs the determination result to the drowsiness estimation unit 12, and the operation of the drowsiness estimation device 10 returns to the process of ST101.
- When the face element determination unit 5 determines that both face element A1 and face element B1 have been detected by the first face detection process, that is, all of the plurality of face elements have been detected (ST201; YES), it outputs the determination result to the drowsiness estimation unit 12. The drowsiness estimation unit 12 then calculates the first drowsiness level using the first face information obtained by the first face detection process (ST202) and records the calculated first drowsiness level in the storage unit. Note that the first drowsiness level calculated by the drowsiness estimation unit 12 may be output to the vehicle-side control device 200. The operation of the drowsiness estimation device 10 then proceeds to the process of ST107.
- the image acquisition unit 1 acquires the captured image from the imaging device 20 (ST107).
- the face detection section 11 uses the captured image to perform a second face detection process, which is a process performed after the first face detection process (ST108).
- Next, the face element determination unit 5 identifies the face elements of the occupant detected by the second face detection process in the same manner as in ST201, and determines whether both face element A2 and face element B2 have been detected (ST203).
- Here, the same face element as face element A1 is denoted as face element A2, and the same face element as face element B1 is denoted as face element B2.
- When the face element determination unit 5 determines that both face element A2 and face element B2 have been detected by the second face detection process, that is, all of the plurality of face elements have been detected (ST203; YES), it outputs the determination result to the drowsiness estimation unit 12. When the drowsiness estimation unit 12 acquires the determination result indicating that both face element A2 and face element B2 have been detected, it calculates the drowsiness level using the second face information obtained by the second face detection process (ST204).
- The drowsiness estimation unit 12 records the calculated drowsiness level in the storage unit. Note that the second drowsiness level calculated by the drowsiness estimation unit 12 may be output to the vehicle-side control device 200. The operation of the drowsiness estimation device 10 then returns to the process of ST101.
- On the other hand, when the face element determination unit 5 determines that at least one of face element A2 and face element B2 has not been detected (ST203; NO), it determines that some of the face elements have not been detected, and the operation of the drowsiness estimation device 10 proceeds to the process of ST205.
- At this time, the face element determination unit 5 outputs to the drowsiness estimation unit 12 an identification result indicating which face elements were detected by the second face detection process.
- The face element determination unit 5 checks whether face element A2 has been detected by the second face detection process (ST205). This check may be performed using, for example, the identification result of the face elements detected in the second face detection process in ST203.
- When face element A2 has been detected by the second face detection process (ST205; YES), the face element determination unit 5 outputs to the drowsiness estimation unit 12 a determination result indicating that some of the plurality of face elements have not been detected, together with an identification result indicating that face element A2 has been detected. The drowsiness estimation unit 12 then calculates the occupant's second drowsiness level using the first drowsiness level and the second face information (ST207). For example, when the drowsiness estimation unit 12 acquires the degree of mouth opening as the second face information, it calculates the amount of change using the degree of mouth opening and then calculates the second drowsiness level from the first drowsiness level and the amount of change. The calculated second drowsiness level may be output to the vehicle-side control device 200.
- On the other hand, when face element A2 has not been detected (ST205; NO), the operation of the drowsiness estimation device 10 proceeds to the process of ST206.
- In ST206, the face element determination unit 5 checks whether face element B2 has been detected by the second face detection process. This check may be performed using, for example, the identification result of the face elements detected in the second face detection process in ST203.
- When face element B2 has been detected (ST206; YES), the face element determination unit 5 outputs to the drowsiness estimation unit 12 a determination result indicating that some of the plurality of face elements have not been detected, together with an identification result indicating that face element B2 has been detected. The drowsiness estimation unit 12 then calculates the occupant's second drowsiness level using the first drowsiness level and the second face information (ST208). For example, when the drowsiness estimation unit 12 acquires the degree of eye opening as the second face information, it calculates the amount of change using the degree of eye opening and then calculates the second drowsiness level from the first drowsiness level and the amount of change. The calculated second drowsiness level may be output to the vehicle-side control device 200.
- When the face element determination unit 5 confirms that face element B2 has not been detected by the second face detection process (ST206; NO), that is, when neither face element A2 nor face element B2 has been detected by the second face detection process, the operation of the drowsiness estimation device 10 returns to the process of ST101. Note that in this case the drowsiness estimation unit 12 may acquire the first drowsiness level and output it to the vehicle-side control device 200 as the second drowsiness level.
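For reference, the ST203 to ST208 branching described above can be summarized in a short Python sketch. This is an illustrative summary under stated assumptions, not the patented implementation: the element names, the stand-in calculations `calc_level` and `calc_change`, and the feature threshold are all hypothetical.

```python
def calc_level(face_info):
    # Stand-in for the full drowsiness-level calculation (Embodiment 1).
    return sum(face_info.values()) // len(face_info)

def calc_change(feature, threshold=0.5):
    # Stand-in for the threshold comparison deriving the amount of change.
    return 1 if feature < threshold else -1

def second_stage_estimate(detected, face_info, first_level):
    """Sketch of the ST203-ST208 branching.

    detected    : set of face elements found by the second face detection
                  process, e.g. {"A2", "B2"}
    face_info   : dict mapping each detected element to its feature amount
    first_level : first drowsiness level, carried over from ST202
    """
    if {"A2", "B2"} <= detected:                        # ST203; YES
        return calc_level(face_info)                    # ST204: second face info only
    if "A2" in detected:                                # ST205; YES
        return first_level + calc_change(face_info["A2"])   # ST207
    if "B2" in detected:                                # ST206; YES
        return first_level + calc_change(face_info["B2"])   # ST208
    return first_level                                  # neither detected: carry over
```

For example, `second_stage_estimate({"A2"}, {"A2": 0.3}, 5)` follows the ST207 path and returns 6, while an empty `detected` set simply carries the first drowsiness level over, as noted at the end of the flowchart description.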
- Further, when a third face detection process is performed after the second face detection process and the face elements detected by the second face detection process are the same as those detected by the third face detection process, the drowsiness estimation process after the third face detection process may estimate drowsiness using the other face elements. That is, if the same subset of the occupant's face elements is not detected by both the second face detection process and the third face detection process, the drowsiness estimation unit 12 may estimate drowsiness using the third face information obtained from the other face elements detected by the third face detection process.
- For example, suppose the drowsiness estimation unit 12 calculates the first drowsiness level using the first face information obtained from the eyes, nose, and mouth, and then, after the occupant puts on sunglasses, calculates the amount of change using the second face information obtained in the second face detection process and calculates the second drowsiness level. If the occupant is still wearing sunglasses at the time of the third face detection process, the drowsiness estimation unit 12 calculates the third drowsiness level using the third face information obtained from the nose and mouth. In this way, in the drowsiness estimation process after some face elements can no longer be detected, drowsiness is estimated using the face elements that remain detectable, so the processing load of the drowsiness estimation device 10 can be reduced.
- As described above, in the present embodiment, when some face elements can no longer be detected, the drowsiness estimation unit 12 takes over the first drowsiness level from before the detection became impossible and estimates the occupant's drowsiness from the first drowsiness level and the amount of change with respect to the first drowsiness level. In this way, even if some face elements are not detected, the estimated drowsiness state of the occupant does not become unstable, and the reliability of the drowsiness estimation result can be ensured.
- The drowsiness estimation device 10 according to Embodiment 3 includes, as in Embodiment 1, an image acquisition unit 1 that acquires a captured image, a face detection unit 11 that performs face detection processing for detecting a plurality of face elements of an occupant from the captured image, and a drowsiness estimation unit 12 that estimates the drowsiness of the occupant detected from the captured image. The present embodiment differs from Embodiment 1 in that, when some face elements are not detected, the occupant's drowsiness is estimated using the drowsiness level estimated from the face information obtained by the first face detection process and the face information obtained from the other face elements detected by the second face detection process.
- the same reference numerals are given to the same components as in the first embodiment, and the description thereof is omitted.
- When some face elements among the plurality of face elements are not detected, the drowsiness estimation unit 12 of the present embodiment takes over the first drowsiness level calculated before the detection became impossible, and estimates the occupant's drowsiness from the inherited first drowsiness level and the amount of change with respect to the first drowsiness level.
- the drowsiness estimation device 10 of the present embodiment acquires the amount of change with respect to the first drowsiness level from the learned model for inferring the amount of change in the drowsiness level.
- the learning device 300 related to the drowsiness estimation device 10 and the drowsiness estimation device 10 will be described below.
- FIG. 8 is a configuration diagram of a learning device 300 related to the drowsiness estimation device 10 according to Embodiment 3. As shown in FIG.
- the learning device 300 includes a data acquisition unit 31 , a model generation unit 32 and a trained model storage unit 40 .
- the data acquisition unit 31 acquires the face information and the amount of change from the face detection unit 11 as learning data.
- The model generation unit 32 learns the amount of change based on learning data created from combinations of the face information and the amount of change output from the data acquisition unit 31. That is, it generates a trained model for inferring the optimum amount of change from the face information of the drowsiness estimation device 10. Here, the learning data is data in which the face information and the amount of change are associated with each other.
- The learning device 300 is used to learn the amount of change for the drowsiness estimation device 10. The learning device 300 may be a separate device from the drowsiness estimation device 10 connected to it via a network, may be built into the drowsiness estimation device 10, or may reside on a cloud server.
- Various known algorithms such as supervised learning, unsupervised learning, and reinforcement learning can be used as the learning algorithm used by the model generation unit 32 .
- a case where a neural network is applied will be described below.
- the model generation unit 32 learns the amount of change by so-called supervised learning according to the neural network model.
- Here, supervised learning refers to a method in which sets of input data and result (label) data are given to the learning device 300 so that it learns the features in the learning data and infers the result from an input.
- FIG. 9 is an explanatory diagram showing a learning example of the neural network of the learning device 300 according to the third embodiment.
- a neural network consists of an input layer with multiple neurons, an intermediate layer (hidden layer) with multiple neurons, and an output layer with multiple neurons. Note that the intermediate layer may be one layer, or two or more layers.
- the neural network learns the amount of change by so-called supervised learning according to learning data created based on the combination of the face information and the amount of change acquired by the data acquisition unit 31 .
- The neural network learns by adjusting the weights W1 and W2 so that the output produced from the output layer when face information is input to the input layer approaches the amount of change given as learning data.
- the model generation unit 32 generates and outputs a learned model by executing the learning as described above.
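The weight adjustment described for FIG. 9 can be illustrated with a minimal two-layer network trained by gradient descent. This is a generic supervised-learning sketch, not the patent's actual model: the toy learning data, layer sizes, activation, and learning rate are all invented for illustration; only the W1 and W2 naming follows the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy learning data: rows of face information (eye opening, mouth opening)
# paired with the amount of change to be learned (the label).
X = np.array([[0.2, 0.1], [0.9, 0.1], [0.3, 0.8], [0.8, 0.9]])
y = np.array([[1.0], [-1.0], [1.0], [-1.0]])

# W1: input layer -> hidden layer, W2: hidden layer -> output layer
W1 = rng.normal(scale=0.5, size=(2, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))

def forward(X, W1, W2):
    h = np.tanh(X @ W1)          # hidden-layer activation
    return h, h @ W2             # predicted amount of change

_, pred = forward(X, W1, W2)
loss_before = float(np.mean((pred - y) ** 2))

for _ in range(500):             # gradient descent on mean squared error
    h, pred = forward(X, W1, W2)
    grad_out = 2 * (pred - y) / len(X)
    grad_h = grad_out @ W2.T * (1 - h ** 2)   # backpropagate through tanh
    W2 -= 0.1 * h.T @ grad_out   # adjust W2 so the output approaches the label
    W1 -= 0.1 * X.T @ grad_h     # adjust W1 likewise

_, pred = forward(X, W1, W2)
loss_after = float(np.mean((pred - y) ** 2))
print(loss_after < loss_before)  # the adjusted W1, W2 fit the labels better
```

After training, the pair (W1, W2) plays the role of the trained model that the model generation unit 32 outputs and the learned model storage unit 40 records.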
- the learned model storage unit 40 stores the learned model output from the model generation unit 32 .
- the process of generating a trained model will be referred to as a trained model generation process.
- FIG. 10 is a flow chart showing an example of a trained model generation process of the learning device 300 according to the third embodiment. Processing for learning by the learning device 300 will be described with reference to FIG. 10 .
- First, the data acquisition unit 31 acquires the face information and the amount of change (ST301). Note that although the face information and the amount of change are described as being acquired at the same time, it is sufficient if they can be input in association with each other; they may also be acquired separately.
- Next, the model generation unit 32 learns the amount of change by so-called supervised learning according to the learning data created based on the combinations of the face information and the amount of change acquired by the data acquisition unit 31, and generates a trained model (ST302).
- the learned model storage unit 40 records the learned model generated by the model generation unit 32 (ST303).
- the model generation unit 32 generates a plurality of learned models by the method of generating a learned model described above, and records them in the learned model storage unit 40 .
- The plurality of trained models according to the present embodiment differ in the degree to which each type of input face information contributes to the inference of the amount of change.
- Here, the degree of contribution to the inference of the amount of change is an index value indicating how strongly a specific piece of face information influences the inferred amount of change when that face information is input to a trained model.
- FIG. 11 is an explanatory diagram showing the degree of contribution to inference of the amount of change in face information according to the third embodiment.
- FIG. 11A is an explanatory diagram for explaining a trained model in which the degree of eye opening contributes most to the inference of the amount of change, and FIG. 11B is an explanatory diagram for explaining a trained model in which the degree of mouth opening contributes most to the inference of the amount of change.
- When the occupant wears, for example, a mask or sunglasses, face information related to some face elements cannot be used for the drowsiness estimation process. The face information input to the trained models includes, for example, the degree of eye opening, the degree of mouth opening, and head movement (face orientation).
- the trained model storage unit 40 records a plurality of trained models having different degrees of contribution to the inference of the amount of change according to the type of face information. Below are some examples of trained models.
- FIG. 11A shows an example of a trained model in which the degree of eye opening, the speed of blinking, and the frequency of blinking, which are face information about the eyes, contribute more to the inference of the amount of change than the degree of mouth opening, which is face information about the mouth. Conversely, FIG. 11B shows an example of a trained model in which the degree of mouth opening contributes more to the inference of the amount of change than the face information about the eyes. That is, when the occupant wears sunglasses and the face detection process detects the occupant's mouth but not the eyes, inputting the face information to a trained model with the contribution degrees shown in the example of FIG. 11B can improve the reliability of the inference result of the amount of change.
- In this way, the drowsiness estimation unit 12 acquires the amount of change in the drowsiness level from a trained model selected according to the type of the other face elements that were detected.
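Selecting among the stored trained models according to which face elements remain detectable can be sketched as a simple lookup. The registry keys, the `infer_change` interface, and the toy decision rules standing in for the FIG. 11A and FIG. 11B models are all assumptions made for illustration.

```python
from typing import Callable, Dict, FrozenSet

# A "model" here is any callable: face information -> inferred amount of change.
ModelFn = Callable[[dict], float]

def eye_model(info: dict) -> float:
    # FIG. 11A-style model: eye features dominate the inference (toy rule).
    return 1.0 if info.get("eye_opening", 1.0) < 0.5 else -1.0

def mouth_model(info: dict) -> float:
    # FIG. 11B-style model: mouth opening dominates the inference (toy rule).
    return 1.0 if info.get("mouth_opening", 0.0) > 0.7 else 0.0

# Hypothetical registry: detectable-element set -> trained model whose highly
# contributing inputs are exactly the elements that remain detectable.
REGISTRY: Dict[FrozenSet[str], ModelFn] = {
    frozenset({"left_eye", "right_eye"}): eye_model,   # mouth hidden (mask)
    frozenset({"mouth"}): mouth_model,                 # eyes hidden (sunglasses)
}

def infer_change(detected: set, info: dict) -> float:
    """Pick the trained model matching the detectable face elements."""
    model = REGISTRY[frozenset(detected)]
    return model(info)

# Occupant wearing sunglasses: only the mouth is detected, and a wide-open
# mouth (a yawn) raises the inferred amount of change.
print(infer_change({"mouth"}, {"mouth_opening": 0.9}))  # -> 1.0
```

The design point is that the choice of model is driven by what was detected, so face information from hidden elements never reaches a model that weights it heavily.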
- FIG. 12 is a configuration diagram showing a configuration example of the drowsiness estimation system 100 according to the third embodiment.
- drowsiness estimation unit 12 of drowsiness estimation device 10 is connected to learned model storage unit 40 .
- the drowsiness estimation unit 12 uses the learned model to infer the amount of change. That is, the drowsiness estimation unit 12 acquires face information from the face detection unit 11, and inputs the face information acquired from the face detection unit 11 to the learned model, thereby obtaining the amount of change inferred from the face information.
- In the present embodiment, the drowsiness estimation device 10 has been described as outputting the amount of change using the learned model learned by the model generation unit 32 of the learning device 300; however, a learned model may instead be acquired from an external source, and the amount of change may be output based on that learned model.
- FIG. 13 is a flow chart showing an operation example of the drowsiness estimation device 10 according to the third embodiment.
- FIG. 13A shows a flowchart corresponding to the process of ST207 in FIG. 7, and
- FIG. 13B shows a flowchart corresponding to the process of ST208 in FIG. 7.
- ST304 to ST307 shown in FIG. 13A illustrate the drowsiness estimation process when the face element determination unit 5 identifies that some face elements (face element B2) were not detected in the second face detection process while the other face elements (face element A2) were detected. ST308 to ST311 shown in FIG. 13B illustrate the drowsiness estimation process when the face element determination unit 5 identifies that some face elements (face element A2) were not detected in the second face detection process while the other face elements (face element B2) were detected. Note that both ST304 to ST307 and ST308 to ST311 are processes performed after the face element determination unit 5 has determined that some of the plurality of face elements were not detected in the second face detection process.
- drowsiness estimation section 12 acquires the second face information from face detection section 11 (ST304).
- The second face information acquired here by the drowsiness estimation unit 12 is face information related to the other face element (face element A2), which is different from the face element not detected in the second face detection process (face element B2).
- the drowsiness estimation unit 12 inputs the second face information to the learned model recorded in the learned model storage unit 40 (ST305).
- At this time, the drowsiness estimation unit 12 inputs the second face information to the trained model in which the other face element detected in the second face detection process (face element A2) contributes highly to the inference of the amount of change. In the following, the trained model in which face element A2, the first face element, contributes highly to the inference of the amount of change is referred to as the first trained model.
- The first face element is, for example, the occupant's eyes, and the first trained model is, for example, a trained model in which the contribution of the face information related to the eyes is greater, as shown in FIG. 11A.
- the drowsiness estimation unit 12 acquires the amount of change with respect to the first drowsiness level obtained by the learned model (ST306). Then, sleepiness estimation section 12 acquires the first sleepiness level from the storage section or the like, and calculates the second sleepiness level from the first sleepiness level and the amount of change obtained from the learned model (ST307).
- drowsiness estimation section 12 acquires the second face information from face detection section 11 (ST308).
- The second face information acquired here by the drowsiness estimation unit 12 is face information related to the other face element (face element B2), which is different from the face element not detected in the second face detection process (face element A2).
- the drowsiness estimation unit 12 inputs the second face information to the learned model recorded in the learned model storage unit 40 (ST309).
- At this time, the drowsiness estimation unit 12 inputs the second face information to the trained model in which the other face element detected in the second face detection process (face element B2) contributes highly to the inference of the amount of change. In the following, the trained model in which face element B2, the second face element, contributes highly to the inference of the amount of change is referred to as the second trained model.
- The second face element is, for example, the occupant's mouth, and the second trained model is, for example, a trained model in which the contribution of the face information related to the mouth is greater, as shown in FIG. 11B.
- the drowsiness estimation unit 12 acquires the amount of change with respect to the first drowsiness level obtained by the learned model (ST310).
- Then, the drowsiness estimation unit 12 acquires the first drowsiness level from the storage unit or the like and calculates the second drowsiness level from the first drowsiness level and the amount of change obtained from the learned model (ST311). In this way, even if some of the plurality of face elements are not detected, the estimated drowsiness state of the occupant does not become unstable, and the occupant's drowsiness can be estimated using a trained model in which the face elements detectable by the face detection process strongly influence the drowsiness estimation, so the reliability of the drowsiness estimation result can be improved.
- Further, when a third face detection process is performed after the second face detection process and the face elements detected by the second face detection process are the same as those detected by the third face detection process, the drowsiness estimation process after the third face detection process may estimate drowsiness using the other face elements. That is, if the same subset of the occupant's face elements is not detected by both the second face detection process and the third face detection process, the drowsiness estimation unit 12 may estimate drowsiness using the third face information obtained from the other face elements detected by the third face detection process.
- For example, suppose the drowsiness estimation unit 12 calculates the first drowsiness level using the first face information obtained from the eyes, nose, and mouth, and then, after the occupant puts on sunglasses, calculates the amount of change using the second face information obtained in the second face detection process and calculates the second drowsiness level. If the occupant is still wearing sunglasses at the time of the third face detection process, the drowsiness estimation unit 12 calculates the third drowsiness level using the third face information obtained from the nose and mouth. In this way, in the drowsiness estimation process after some face elements can no longer be detected, drowsiness is estimated using the face elements that remain detectable, so the processing load of the drowsiness estimation device 10 can be reduced.
- In this case, a trained model for inferring the drowsiness level from the face information is generated and stored in the learned model storage unit 40, and the drowsiness estimation unit 12 inputs the third face information to this trained model to acquire the third drowsiness level. The trained model for inferring the third drowsiness level is a trained model in which the third face information strongly influences the calculation of the drowsiness level. In this way, even if some face elements can no longer be detected, drowsiness can be estimated using a trained model in which the face elements detectable by the face detection process strongly influence the drowsiness estimation, so the reliability of the drowsiness estimation result of the drowsiness estimation device 10 can be improved.
- In the present embodiment, an example in which supervised learning is applied as the learning algorithm used by the model generation unit 32 has been described, but the present invention is not limited to this. Reinforcement learning, unsupervised learning, semi-supervised learning, and the like can also be applied as learning algorithms.
- the model generation unit 32 may learn the amount of change according to learning data created for a plurality of drowsiness estimation devices 10 .
- For example, the model generation unit 32 may acquire learning data from a plurality of drowsiness estimation devices 10 used in the same area, or may learn the amount of change using learning data collected from a plurality of drowsiness estimation devices 10 operating independently in different areas. It is also possible to add or remove, partway through, drowsiness estimation devices 10 from which the learning data is collected.
- when the learning device 300 that has learned the amount of change for a certain drowsiness estimation device 10 is applied to a different drowsiness estimation device 10, the amount of change may be re-learned and updated for that other drowsiness estimation device 10.
- as the learning algorithm used in the model generation unit 32, deep learning that learns to extract the feature amounts themselves can also be used, and other known methods may be employed, such as gradient boosting, genetic programming, functional logic programming, and support vector machines.
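As a toy illustration of the supervised-learning option, the following sketch fits a one-weight linear model of the drowsiness-level change by plain gradient descent. The training pairs, learning rate, and the linear model itself are fabricated for illustration and do not come from the patent.

```python
# Toy supervised learning of the change amount: fit delta ~ w * feature
# by stochastic gradient descent on squared error. Training pairs are
# fabricated for illustration only.

def train_change_model(samples, lr=0.1, epochs=200):
    """samples: list of (feature, observed change in drowsiness level)."""
    w = 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = w * x
            w -= lr * (pred - y) * x  # gradient of 0.5 * (pred - y)**2
    return w

# e.g. a mouth-opening feature vs. the observed drowsiness-level change;
# the fabricated data follows change = 0.5 * feature exactly.
data = [(0.2, 0.1), (0.6, 0.3), (1.0, 0.5)]
w = train_change_model(data)
predicted_change = w * 0.8  # inference for a new observation
```

Any of the algorithms listed above (gradient boosting, support vector machines, and so on) could replace this least-squares fit; the interface stays the same: features in, change amount out.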
- 1 image acquisition unit, 2 vehicle information acquisition unit, 3 feature point detection unit, 4 feature amount calculation unit, 5 face element determination unit, 10 drowsiness estimation device, 11 face detection unit, 12 drowsiness estimation unit, 20 imaging device, 31 data acquisition unit, 32 model generation unit, 40 learned model storage unit, 61 captured image, 71 face area, 81 left eye, 82 right eye, 83 nose, 84 mouth, 100 drowsiness estimation system, 200 vehicle-side control device, 300 learning device, 501 driver's seat, 502 passenger seat, 503 driver, 504 passenger.
Landscapes
- Business, Economics & Management (AREA)
- Emergency Management (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
- Traffic Control Systems (AREA)
Abstract
Description
FIG. 1 is a block diagram showing a configuration example of the drowsiness estimation system 100 according to Embodiment 1. The drowsiness estimation system 100 includes the drowsiness estimation device 10 and the imaging device 20, both of which are mounted on a vehicle. The drowsiness estimation device 10 is also connected to a vehicle-side control device 200 that controls in-vehicle equipment, such as air-conditioning equipment, audio equipment, a navigation device, and a notification unit, as well as the engine and the like, in the vehicle in which the drowsiness estimation device 10 is mounted.
The drowsiness estimation device 10 according to Embodiment 2, like Embodiment 1, includes the image acquisition unit 1 that acquires a captured image, the face detection unit 11 that performs face detection processing to detect a plurality of face elements of an occupant from the captured image, and the drowsiness estimation unit 12 that estimates the drowsiness of the occupant detected from the captured image. This embodiment differs from Embodiment 1 in that, when some face elements are not detected by the second face detection process, the drowsiness of the occupant is estimated using the drowsiness level estimated from the first face information obtained by the first face detection process and the second face information obtained from the other face elements detected by the second face detection process. Components identical to those in Embodiment 1 are given the same reference numerals, and their description is omitted.
The drowsiness estimation device 10 according to Embodiment 3, like Embodiment 1, includes the image acquisition unit 1 that acquires a captured image, the face detection unit 11 that performs face detection processing to detect a plurality of face elements of an occupant from the captured image, and the drowsiness estimation unit 12 that estimates the drowsiness of the occupant detected from the captured image. This embodiment differs from Embodiment 1 in that, when some face elements are not detected by the second face detection process, the drowsiness of the occupant is estimated using the drowsiness level estimated from the face information obtained by the first face detection process and the face information obtained from the other face elements detected by the second face detection process. Components identical to those in Embodiment 1 are given the same reference numerals, and their description is omitted.
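The estimation described for Embodiments 2 and 3, carrying the first drowsiness level forward and adjusting it by the change amount obtained from the still-detected face elements, can be sketched as follows. The 1-to-5 drowsiness scale and the clamping are assumptions for illustration; the patent does not fix a scale.

```python
# Hypothetical combination of the first drowsiness level with the change
# amount obtained from the second face information (Embodiments 2 and 3).
# The 1-5 drowsiness scale and the clamping are assumptions.

def second_drowsiness_level(first_level, change_amount,
                            min_level=1.0, max_level=5.0):
    level = first_level + change_amount
    # Keep the estimate on the assumed drowsiness scale.
    return max(min_level, min(max_level, level))

level2 = second_drowsiness_level(3.0, 0.5)  # -> 3.5
```

Because the first level was computed while all face elements were visible, adding a change amount inferred from the remaining elements lets the estimate track the occupant even after, for example, sunglasses hide the eyes.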
Claims (10)
- An image acquisition unit that acquires a captured image from an imaging device that images an occupant in a vehicle;
a face detection unit that performs face detection processing to detect a plurality of face elements of the occupant from the captured image; and
a drowsiness estimation unit that estimates drowsiness of the occupant detected from the captured image,
wherein the face detection unit performs, as the face detection processing, a first face detection process and, after the first face detection process, a second face detection process, and
wherein, when some face elements among the plurality of face elements of the occupant are not detected by the second face detection process, the drowsiness estimation unit estimates the drowsiness of the occupant using first face information obtained from the some face elements detected by the first face detection process and second face information obtained from other face elements, different from the some face elements, detected by the second face detection process,
the drowsiness estimation device being characterized by the above.
- The drowsiness estimation device according to claim 1, wherein, when the some face elements among the plurality of face elements of the occupant are not detected by the second face detection process, the drowsiness estimation unit estimates the drowsiness of the occupant using, as the first face information, a feature amount obtained from the some face elements detected by the first face detection process and, as the second face information, a feature amount obtained from the other face elements detected by the second face detection process.
- The drowsiness estimation device according to claim 1, wherein, when the some face elements among the plurality of face elements of the occupant are not detected by the second face detection process, the drowsiness estimation unit estimates the drowsiness of the occupant using, as the first face information, feature amounts obtained from the some face elements and the other face elements detected by the first face detection process and, as the second face information, a feature amount obtained from the other face elements detected by the second face detection process.
- The drowsiness estimation device according to any one of claims 1 to 3, wherein, when the some face elements among the plurality of face elements of the occupant are not detected by the second face detection process, the drowsiness estimation unit estimates the drowsiness of the occupant using a first drowsiness level calculated using the first face information and an amount of change in drowsiness level relative to the first drowsiness level, calculated using the second face information obtained from the other face elements detected by the second face detection process.
- The drowsiness estimation device according to any one of claims 1 to 3, wherein, when the some face elements among the plurality of face elements of the occupant are not detected by the second face detection process, the drowsiness estimation unit estimates the drowsiness of the occupant using a first drowsiness level calculated using the first face information and an amount of change in drowsiness level acquired from a learned model storage unit in which a learned model is stored for inferring, from the second face information obtained from the other face elements detected by the second face detection process, the amount of change in drowsiness level relative to the first drowsiness level.
- The drowsiness estimation device according to claim 5, wherein a plurality of learned models are stored in the learned model storage unit, and the drowsiness estimation unit selects, from the plurality of learned models, the learned model from which the amount of change in drowsiness level is acquired, according to the type of the other face elements detected by the second face detection process.
- The drowsiness estimation device according to claim 6, wherein the learned model storage unit stores a first learned model in which face information on a first face element among the face elements of the occupant has a greater influence on the inference of the amount of change in drowsiness level than face information on a second face element, different from the first face element, among the face elements of the occupant, and a second learned model in which face information on the second face element has a greater influence on the inference of the amount of change in drowsiness level than face information on the first face element, and the drowsiness estimation unit acquires the amount of change in drowsiness level from the first learned model when the other face element detected by the second face detection process is the first face element, and acquires the amount of change in drowsiness level from the second learned model when the other face element detected by the second face detection process is the second face element.
- The drowsiness estimation device according to claim 7, wherein the first face element is an eye and the second face element is a mouth.
- The drowsiness estimation device according to any one of claims 1 to 8, wherein the drowsiness estimation unit performs a third face detection process after the second face detection process, and, when the some face elements among the plurality of face elements of the occupant are not detected by the second face detection process and the third face detection process, estimates the drowsiness of the occupant using third face information obtained from the other face elements detected by the third face detection process.
- A drowsiness estimation system comprising: an imaging device that is mounted on a vehicle and images an occupant in the vehicle; an image acquisition unit that acquires a captured image from the imaging device; a face detection unit that performs face detection processing to detect a plurality of face elements of the occupant from the captured image; and a drowsiness estimation unit that estimates drowsiness of the occupant detected from the captured image, wherein the face detection unit performs, as the face detection processing, a first face detection process and, after the first face detection process, a second face detection process, and wherein, when some face elements among the plurality of face elements of the occupant are not detected by the second face detection process, the drowsiness estimation unit estimates the drowsiness of the occupant using first face information obtained from the some face elements detected by the first face detection process and second face information obtained from other face elements, different from the some face elements, detected by the second face detection process.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2023518590A JP7325687B2 (ja) | 2021-05-07 | 2021-05-07 | Drowsiness estimation device and drowsiness estimation system |
PCT/JP2021/017544 WO2022234662A1 (ja) | 2021-05-07 | 2021-05-07 | Drowsiness estimation device and drowsiness estimation system |
DE112021007211.0T DE112021007211T5 (de) | 2021-05-07 | 2021-05-07 | Drowsiness inference device and drowsiness inference system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2021/017544 WO2022234662A1 (ja) | 2021-05-07 | 2021-05-07 | Drowsiness estimation device and drowsiness estimation system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022234662A1 true WO2022234662A1 (ja) | 2022-11-10 |
Family
ID=83932702
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/017544 WO2022234662A1 (ja) | 2021-05-07 | 2021-05-07 | 眠気推定装置及び眠気推定システム |
Country Status (3)
Country | Link |
---|---|
JP (1) | JP7325687B2 (ja) |
DE (1) | DE112021007211T5 (ja) |
WO (1) | WO2022234662A1 (ja) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007257043A (ja) * | 2006-03-20 | 2007-10-04 | Nissan Motor Co Ltd | Occupant state estimation device and occupant state estimation method |
JP2010049337A (ja) * | 2008-08-19 | 2010-03-04 | Toyota Motor Corp | Blink state detection device |
JP2010097379A (ja) * | 2008-10-16 | 2010-04-30 | Denso Corp | Driver monitoring device and program for driver monitoring device |
JP2014095987A (ja) * | 2012-11-08 | 2014-05-22 | Denso Corp | In-vehicle device and vehicle safety control system |
JP2019016178A (ja) * | 2017-07-07 | 2019-01-31 | Mazda Motor Corporation | Drowsy driving alarm device |
JP2020194224A (ja) * | 2019-05-24 | 2020-12-03 | Nidec Mobility Corporation | Driver determination device, driver determination method, and driver determination program |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4016694B2 (ja) | 2002-04-02 | 2007-12-05 | Nissan Motor Co., Ltd. | Face state detection device and method |
- 2021
- 2021-05-07 DE DE112021007211.0T patent/DE112021007211T5/de active Pending
- 2021-05-07 WO PCT/JP2021/017544 patent/WO2022234662A1/ja active Application Filing
- 2021-05-07 JP JP2023518590A patent/JP7325687B2/ja active Active
Also Published As
Publication number | Publication date |
---|---|
JPWO2022234662A1 (ja) | 2022-11-10 |
DE112021007211T5 (de) | 2024-01-04 |
JP7325687B2 (ja) | 2023-08-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6695503B2 (ja) | Method and system for monitoring the state of a vehicle driver | |
US20170004354A1 (en) | Determination device, determination method, and non-transitory storage medium | |
JP2008146356A (ja) | Gaze direction estimation device and gaze direction estimation method | |
JP2018508870A (ja) | Method and device for detecting microsleep of a vehicle driver | |
US10423846B2 (en) | Method for identifying a driver change in a motor vehicle | |
JP5511987B2 (ja) | Vehicle object collision warning system and vehicle object collision warning method | |
JP4412253B2 (ja) | Wakefulness level estimation device and method | |
JP4325271B2 (ja) | State detection device and state detection system | |
JP7325687B2 (ja) | Drowsiness estimation device and drowsiness estimation system | |
JP6687300B2 (ja) | Abnormality detection device and abnormality detection method | |
JP2021037216A (ja) | Closed-eye determination device | |
WO2022113275A1 (ja) | Sleep detection device and sleep detection system | |
JP7267467B2 (ja) | Attention direction determination device and attention direction determination method | |
US11161470B2 (en) | Occupant observation device | |
US10945651B2 (en) | Arousal level determination device | |
JP2011125620A (ja) | Biological state detection device | |
JP2021129700A (ja) | Reference value determination device and reference value determination method | |
WO2022254501A1 (ja) | Personal authentication device and personal authentication system | |
WO2021024905A1 (ja) | Image processing device, monitoring device, control system, image processing method, computer program, and storage medium | |
US20200290544A1 (en) | Occupant observation device | |
US20220346684A1 (en) | Driver availability detection device and driver availability detection method | |
JP7019394B2 (ja) | Visual recognition target detection device, visual recognition target detection method, and program | |
JP6698966B2 (ja) | False detection determination device and false detection determination method | |
JP7446492B2 (ja) | Vehicle monitoring device, vehicle monitoring system, and vehicle monitoring method | |
WO2024079779A1 (ja) | Occupant state determination device, occupant state determination system, occupant state determination method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21939854 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2023518590 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 112021007211 Country of ref document: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21939854 Country of ref document: EP Kind code of ref document: A1 |