WO2022185596A1 - Estimation system, estimation method, and program - Google Patents

Estimation system, estimation method, and program

Info

Publication number
WO2022185596A1
WO2022185596A1 · PCT/JP2021/037911
Authority
WO
WIPO (PCT)
Prior art keywords
person
imaging
imaging means
unit
wavelength band
Prior art date
Application number
PCT/JP2021/037911
Other languages
French (fr)
Japanese (ja)
Inventor
元貴 吉岡
博之 古屋
Original Assignee
パナソニックIpマネジメント株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by パナソニックIpマネジメント株式会社
Publication of WO2022185596A1

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01JMEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J1/00Photometry, e.g. photographic exposure meter
    • G01J1/02Details
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01JMEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J1/00Photometry, e.g. photographic exposure meter
    • G01J1/42Photometry, e.g. photographic exposure meter using electric radiation detectors
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17Systems in which incident light is modified in accordance with the properties of the material investigated
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis

Definitions

  • The present disclosure relates to an estimation system, an estimation method, and a program for estimating a person's state or recognizing a person.
  • Patent Document 1 discloses a technique for recognizing a person using a plurality of images captured at different times. For example, even if part of a person's face is occluded by a wearable object such as sunglasses, the occluded part may appear in some of the images, depending on the ever-changing position and orientation of the face. By using a plurality of images captured at different times, a person wearing such an object can therefore be recognized.
  • However, in some cases the part of the face occluded by the worn object appears in none of the images captured at different times, and the technique of Patent Document 1 then cannot estimate the state of the person wearing the object, or recognize that person, with high accuracy.
  • The present disclosure therefore provides an estimation system and the like capable of estimating the state of a person wearing an object, or recognizing such a person, with high accuracy.
  • An estimation system according to one aspect of the present disclosure is a system for estimating the state of a person or recognizing the person, comprising: an imaging unit having first imaging means for a first wavelength band and second imaging means for a second wavelength band; a detection unit that detects the presence or absence of an object worn by the person based on an image obtained by imaging the person with the imaging unit; a determination unit that determines, based on the detected presence or absence of the worn object, which of the first imaging means and the second imaging means to use for imaging; and an estimation unit that estimates the state of the person or recognizes the person based on an image obtained by imaging the person with the imaging unit using the determined imaging means.
  • According to such an estimation system, the state of a person wearing an object can be estimated, or the person can be recognized, with high accuracy.
  • FIG. 1 is a configuration diagram showing an example of an estimation system according to an embodiment.
  • FIG. 2 is a diagram for explaining the first wavelength band and the second wavelength band.
  • FIG. 3 is a flow chart showing an example of the operation of the estimation system according to the embodiment.
  • FIG. 4 is a diagram for explaining the operation flow of the estimation system according to the embodiment.
  • FIG. 5 is a diagram illustrating an example of a presentation unit in the embodiment.
  • FIG. 6A is a diagram for explaining imaging means determined based on the presence or absence of a wearable object and the content of specific information.
  • FIG. 6B is a diagram for explaining imaging means determined based on the presence or absence of a wearable object and the content of specific information.
  • FIG. 6C is a diagram for explaining imaging means determined based on the presence or absence of a wearable object and the content of specific information.
  • FIG. 7 is a diagram illustrating an example of a display unit according to the embodiment.
  • FIG. 1 is a configuration diagram showing an example of the estimation system 1 according to the embodiment.
  • The estimation system 1 is a system for estimating the state of a person or recognizing a person, and includes an imaging unit 10, a detection unit 20, a determination unit 30, an estimation unit 40, a first recognition model 41, a second recognition model 42, a presentation unit 50, and a display unit 60.
  • The state of a person is, for example, the state of the person's facial expression (emotion, drowsiness, or the like) or the state of the person's line of sight; here, the state of the person's face is estimated.
  • The estimation system 1 includes a processor, memory, and the like.
  • The memory is, for example, ROM (Read Only Memory) or RAM (Random Access Memory), and can store programs executed by the processor.
  • The detection unit 20, the determination unit 30, and the estimation unit 40 are implemented by a processor or the like that executes a program stored in the memory.
  • The first recognition model 41 and the second recognition model 42 are stored in memory.
  • The memory storing the program, the memory storing the first recognition model 41, and the memory storing the second recognition model 42 may be the same memory or different memories.
  • The imaging unit 10 is a component having first imaging means for a first wavelength band and second imaging means for a second wavelength band.
  • The first wavelength band and the second wavelength band are different wavelength bands, although they may partially overlap.
  • The first and second wavelength bands are described with reference to FIG. 2.
  • FIG. 2 is a diagram for explaining the first wavelength band and the second wavelength band.
  • For example, the first wavelength band is the visible light wavelength band and the second wavelength band is the infrared wavelength band. An RGB image is therefore obtained by the first imaging means in the visible light band, and an IR image by the second imaging means in the infrared band.
  • In an RGB image, facial skin wrinkles, facial expression, contours, eye color, and hair color appear clearly, but when the person wears an object such as sunglasses or a mask, the part of the face occluded by the object cannot be seen. In an IR image, by contrast, the occluded part can be seen through the object, but wrinkles, expression, contours, eye color, and hair color become unclear.
  • For example, the first imaging means may be imaging by a first camera that captures images in the first wavelength band, and the second imaging means may be imaging by a second camera that captures images in the second wavelength band; that is, the imaging unit 10 may be realized by two cameras.
  • Alternatively, the first imaging means may use a second cut filter that blocks light in the second wavelength band, and the second imaging means may use a first cut filter that blocks light in the first wavelength band; that is, the imaging unit 10 may be realized by, for example, a single camera that switches between the first cut filter and the second cut filter.
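As a rough illustration of this two-mode imaging unit, the following Python sketch shows both realizations behind one interface. It is not from the disclosure: the camera objects and their capture()/set_filter() methods are assumptions.

```python
from enum import Enum, auto

class Band(Enum):
    VISIBLE = auto()   # first wavelength band: yields an RGB image
    INFRARED = auto()  # second wavelength band: yields an IR image

class TwoCameraImagingUnit:
    """Two-camera realization: one camera per wavelength band."""
    def __init__(self, rgb_camera, ir_camera):
        self._cameras = {Band.VISIBLE: rgb_camera, Band.INFRARED: ir_camera}

    def capture(self, band: Band):
        return self._cameras[band].capture()

class FilterSwapImagingUnit:
    """Single-camera realization: swap the cut filters before each capture.
    The second cut filter (blocks the second band) exposes the first band;
    the first cut filter (blocks the first band) exposes the second band."""
    def __init__(self, camera):
        self._camera = camera

    def capture(self, band: Band):
        # Hypothetical filter-switch API on the camera object.
        cut = "second" if band is Band.VISIBLE else "first"
        self._camera.set_filter(cut)
        return self._camera.capture()
```

Either class can stand in for the imaging unit 10 in the sketches that follow.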
  • The detection unit 20 detects the presence or absence of an object worn by the person, based on an image obtained by imaging the person with the imaging unit 10.
  • The worn object is an object worn on the person's face, specifically sunglasses or a mask.
  • The image used for this detection is, for example, an RGB image obtained by imaging the person with the imaging unit 10 using the first imaging means in the first wavelength band.
  • The detection method is not particularly limited; for example, a trained model produced by machine learning may be used.
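A minimal sketch of this detection step, assuming some trained binary classifier; the classifier and its predict() interface are hypothetical, not specified by the disclosure.

```python
def detect_worn_object(rgb_image, classifier) -> bool:
    """Return True if the person in the image wears an object such as
    sunglasses or a mask. `classifier` is any trained model mapping an
    image to a probability that an object is worn."""
    return classifier.predict(rgb_image) > 0.5  # threshold is illustrative
```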
  • The determination unit 30 determines which of the first imaging means and the second imaging means to use for imaging, based on the detected presence or absence of an object worn by the person.
  • This imaging obtains the image that the estimation unit 40 later uses for estimation or recognition, as described below.
  • The determination unit 30 selects the second imaging means when it is detected that the person is wearing an object, and selects the first imaging means when it is detected that the person is not wearing an object.
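This base rule fits in a few lines; the sketch below reuses the Band enum from the imaging-unit example above.

```python
def determine_band(worn_object_detected: bool) -> Band:
    """Base rule of the determination unit 30: image through the worn
    object (second/IR means) when one is detected, otherwise use the
    first (visible/RGB) means."""
    return Band.INFRARED if worn_object_detected else Band.VISIBLE
```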
  • The first recognition model 41 is a recognition model for recognizing images (RGB images) obtained by imaging a person with the imaging unit 10 using the first imaging means.
  • The first recognition model 41 is a trained model produced by machine learning or the like from images obtained by the first imaging means. When such an image is input to the first recognition model 41, information indicating the state of the person in the image, or the result of recognizing that person, is output.
  • The second recognition model 42 is a recognition model for recognizing images (IR images) obtained by imaging a person with the imaging unit 10 using the second imaging means.
  • The second recognition model 42 is a trained model produced by machine learning from images obtained by the second imaging means. When such an image is input to the second recognition model 42, information indicating the state of the person in the image, or the result of recognizing that person, is output.
  • The estimation unit 40 estimates the state of the person or recognizes the person based on an image obtained by imaging the person with the imaging unit 10 using the determined imaging means. When the first imaging means is selected, the estimation unit 40 recognizes the image obtained with the first imaging means using the first recognition model 41; when the second imaging means is selected, it recognizes the image obtained with the second imaging means using the second recognition model 42.
  • For example, the estimation unit 40 estimates the state of a person's facial expression or line of sight, or recognizes a person (specifically, identifies an individual). The estimation unit 40 outputs the estimation or recognition result, for example to a user's mobile terminal or personal computer.
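The pairing of imaging means and recognition model amounts to a simple dispatch. In this sketch the two models are assumed to be callables returning either state information or a recognition result:

```python
def estimate(imaging_unit, band: Band, first_model, second_model):
    """Estimation unit 40: image the person with the decided means, then
    recognize the image with the model trained for that means."""
    image = imaging_unit.capture(band)
    model = first_model if band is Band.VISIBLE else second_model
    return model(image)  # person-state estimate or recognition result
```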
  • The presentation unit 50 is a component that presents specific information.
  • For example, the presentation unit 50 is signage, or the display of a mobile terminal or personal computer; it may also be a display on a communication robot.
  • The content of the specific information is, for example, a product menu, a store map, or a product suggestion. Details of the presentation unit 50 are described later.
  • For example, the determination unit 30 may acquire information indicating the content of the specific information from the presentation unit 50 and determine which of the first and second imaging means to use for imaging based on that content, in addition to the presence or absence of an object worn by the person.
  • The display unit 60 displays, side by side for comparison, the recognition result obtained with the first recognition model 41 and the recognition result obtained with the second recognition model 42.
  • The display unit 60 is, for example, the display of a mobile terminal or personal computer. Details of the display unit 60 are described later.
  • Next, the operation of the estimation system 1 is described in detail with reference to FIGS. 3 and 4. In the following, the first imaging means is also referred to as the RGB imaging means, and the second imaging means as the IR imaging means.
  • FIG. 3 is a flow chart showing an example of the operation of the estimation system 1 according to the embodiment.
  • FIG. 4 is a diagram for explaining the operation flow of the estimation system 1 according to the embodiment.
  • First, the detection unit 20 detects the presence or absence of an object worn by the person, based on an image obtained by imaging the person with the imaging unit 10 (step S11). For example, when an image showing a person wearing sunglasses is obtained, as on the left side of FIG. 4, the detection unit 20 detects the sunglasses and, as shown in the center of FIG. 4, detects that the person is wearing an object.
  • Next, the determination unit 30 determines which of the RGB imaging means and the IR imaging means to use for imaging, based on the detected presence or absence of the worn object (step S12). For example, when it is detected that the person is wearing an object, as in the center of FIG. 4, the determination unit 30 selects the IR imaging means.
  • Then, the estimation unit 40 estimates the state of the person or recognizes the person based on the image obtained by imaging the person with the imaging unit 10 using the determined imaging means (step S13). For example, when the IR imaging means is selected, the imaging unit 10 images the person with the IR imaging means and, as shown on the left side of FIG. 4, can acquire an IR image that passes through the worn object and shows the occluded part of the face (for example, the area around the eyes under the sunglasses). The estimation unit 40 can thereby estimate the person's state, such as line of sight or emotion, and recognize the person.
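Putting steps S11 to S13 together, the flow of FIG. 3 corresponds roughly to the following loop, built from the hypothetical helpers sketched above (not code from the disclosure):

```python
def run_estimation(imaging_unit, classifier, first_model, second_model):
    # Step S11: detect a worn object from an RGB image of the person.
    rgb_image = imaging_unit.capture(Band.VISIBLE)
    worn = detect_worn_object(rgb_image, classifier)

    # Step S12: decide which imaging means to use.
    band = determine_band(worn)

    # Step S13: image with the decided means and estimate or recognize.
    if band is Band.VISIBLE:
        # The RGB image from detection may simply be reused here.
        return first_model(rgb_image)
    return estimate(imaging_unit, band, first_model, second_model)
```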
  • When it is detected that the person is not wearing an object, the determination unit 30 selects the RGB imaging means.
  • In this case, the estimation unit 40 may estimate the state of the person or recognize the person based on the RGB image already used for detecting the presence or absence of a worn object.
  • Alternatively, the imaging unit 10 may image the person again with the RGB imaging means, and the estimation unit 40 may use the newly obtained RGB image.
  • RGB images usually show facial skin wrinkles, expression, contours, eye color, and hair color more clearly than IR images, so when no object is worn, a clear RGB image can be used for estimating the person's state or recognizing the person.
  • Next, a specific example in which the determination unit 30 determines the imaging means to use from the RGB and IR imaging means, based on the detected presence or absence of a worn object and on the content of the specific information presented by the presentation unit 50, is described with reference to FIGS. 5 to 6C.
  • FIG. 5 is a diagram showing an example of the presentation unit 50 in the embodiment.
  • FIGS. 6A to 6C are diagrams for explaining imaging means determined based on the presence or absence of a wearable object and the contents of specific information.
  • As shown in FIG. 5, the presentation unit 50 is, for example, signage that presents a product menu or product suggestions as the specific information.
  • For example, the imaging unit 10 is installed on the presentation unit 50 and images a person viewing it.
  • FIG. 6A is an example of an image of a person viewing the presentation unit 50, captured when the person is wearing an object and the presented specific information is a product menu.
  • As shown in FIG. 6A, when the person is wearing an object (specifically, sunglasses) and the content of the specific information is a product menu, the determination unit 30 selects the IR imaging means.
  • This is because, for a product menu, the IR imaging means can see the person's line of sight through the sunglasses, making it possible to determine which product the person is paying attention to.
  • FIG. 6B is an example of an image of a person viewing the presentation unit 50, captured when the person is not wearing an object and the presented specific information is a product menu.
  • As shown in FIG. 6B, when the person is not wearing an object (specifically, is not wearing sunglasses) and the content of the specific information is a product menu, the determination unit 30 selects the RGB imaging means.
  • Since the person's line of sight can be determined without the IR imaging means, a clearer image can be obtained with the RGB imaging means.
  • FIG. 6C is an example of an image of a person viewing the presentation unit 50, captured when the person is wearing an object and the presented specific information is a product suggestion.
  • As shown in FIG. 6C, when the person is wearing an object (specifically, sunglasses) and the content of the specific information is a product suggestion, the determination unit 30 selects the RGB imaging means. Even though the RGB imaging means cannot see the line of sight through the sunglasses, the person's emotions can be read from the area around the mouth, which is enough to judge the degree of interest in the suggested product.
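The three cases of FIGS. 6A to 6C extend the base rule into a small decision table. The sketch below hard-codes only the combinations described; the content labels are illustrative strings, not terms from the disclosure.

```python
def determine_band_with_content(worn: bool, content: str) -> Band:
    """Decision of FIGS. 6A-6C: IR is needed only when a worn object
    hides the line of sight and the content (a product menu) requires
    gaze; for a product suggestion, the mouth area in an RGB image is
    enough to read emotion, so RGB is kept even with sunglasses."""
    if content == "product menu":
        return Band.INFRARED if worn else Band.VISIBLE  # FIGS. 6A, 6B
    if content == "product suggestion" and worn:
        return Band.VISIBLE                             # FIG. 6C
    return determine_band(worn)  # otherwise, fall back to the base rule
```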
  • Next, a specific example in which the display unit 60 displays, for comparison, the recognition result obtained with the first recognition model and the recognition result obtained with the second recognition model is described with reference to FIG. 7.
  • FIG. 7 is a diagram showing an example of the display unit 60 in the embodiment.
  • The display unit 60 is, for example, a display that shows the person imaged by the imaging unit 10 together with the recognition results for that person.
  • As the recognition results, the result obtained with the first recognition model and the result obtained with the second recognition model are displayed side by side.
  • The person shown on the left side of FIG. 7 is wearing an object, so the recognition result of the IR image recognized with the second recognition model is displayed.
  • However, because IR images can be unclear and their recognition results insufficiently accurate, the recognition result of the RGB image recognized with the first recognition model is also displayed for comparison.
  • A user of the estimation system 1 who views the display unit 60 can then make the final judgment on the recognition result.
  • Since the person shown on the right side of FIG. 7 is not wearing an object, in that case only the recognition result of the RGB image recognized with the first recognition model may be displayed.
  • In summary, the estimation system 1 is a system for estimating the state of a person or recognizing a person, and comprises: an imaging unit 10 having first imaging means for a first wavelength band and second imaging means for a second wavelength band; a detection unit 20 that detects the presence or absence of an object worn by a person based on an image obtained by imaging the person with the imaging unit 10; a determination unit 30 that determines, based on the detected presence or absence of the worn object, which of the first and second imaging means to use for imaging; and an estimation unit 40 that estimates the state of the person or recognizes the person based on an image obtained with the determined imaging means.
  • According to this configuration, when a person is wearing an object, the person can be imaged with the imaging means of whichever of the first and second wavelength bands can image through the object (for example, the infrared wavelength band).
  • In that case, an image showing the part occluded by the object can be used for estimating the person's state or recognizing the person, so estimation or recognition of a person wearing an object can be performed with high accuracy.
  • Conversely, when the person is not wearing an object, the person can be imaged with the imaging means of the wavelength band that does not image through worn objects (for example, the visible light wavelength band).
  • Imaging in a band that passes through worn objects tends to blur facial skin wrinkles, expression, contours, eye color, and hair color, so in this case the clearer image can be used for estimating the person's state or recognizing the person.
  • The first imaging means may be imaging by a first camera that captures images in the first wavelength band, and the second imaging means may be imaging by a second camera that captures images in the second wavelength band.
  • In this way, the imaging unit 10 may realize the first and second imaging means by providing the two cameras.
  • Alternatively, the first imaging means may use a second cut filter that blocks light in the second wavelength band, and the second imaging means may use a first cut filter that blocks light in the first wavelength band.
  • In this way, the imaging unit 10 may comprise, for example, a single camera, with the two cut filters used selectively to realize the first and second imaging means.
  • The estimation system 1 may further include a presentation unit 50 that presents specific information, and the determination unit 30 may determine which of the first and second imaging means to use based on the content of the presented information in addition to the presence or absence of a worn object.
  • Depending on that content, the part occluded by the worn object is sometimes essential to estimation or recognition and sometimes less important, so an imaging means suited to obtaining the needed image can be chosen according to the content of the specific information.
  • The estimation system 1 may further include a first recognition model 41 for recognizing images obtained by imaging a person with the imaging unit 10 using the first imaging means, and a second recognition model 42 for recognizing images obtained using the second imaging means.
  • When the first imaging means is selected for imaging, the estimation unit 40 may estimate the person's state or recognize the person by recognizing the obtained image with the first recognition model 41; when the second imaging means is selected, it may do so with the second recognition model 42.
  • Using the recognition model suited to each imaging means allows the state of a person to be estimated, or the person to be recognized, with higher accuracy.
  • The estimation system 1 may further include a display unit 60 that displays, for comparison, the recognition result obtained with the first recognition model 41 and the recognition result obtained with the second recognition model 42, so that the two results can be compared visually.
  • The first wavelength band may be the visible light wavelength band and the second wavelength band may be the infrared wavelength band, and the determination unit 30 may select the second imaging means when it is detected that the person is wearing an object.
  • In this way, a person wearing an object can be imaged with the second imaging means in the infrared wavelength band.
  • In the embodiment described above, the estimation system 1 includes the presentation unit 50, the first recognition model 41, the second recognition model 42, and the display unit 60, but the estimation system 1 need not include any of these components.
  • The present disclosure can be realized not only as the estimation system 1 but also as an estimation method comprising the steps (processes) performed by the components of the estimation system 1.
  • The estimation method is a method, performed by the estimation system 1 provided with the imaging unit 10 having the first imaging means for the first wavelength band and the second imaging means for the second wavelength band, for estimating the state of a person or recognizing the person.
  • The estimation method includes: a detection step (step S11) of detecting the presence or absence of an object worn by a person based on an image obtained by imaging the person with the imaging unit 10; a determination step (step S12) of determining, based on the detected presence or absence, which of the first and second imaging means to use for imaging; and an estimation step (step S13) of estimating the state of the person or recognizing the person based on an image obtained by imaging the person with the determined imaging means.
  • The steps of the estimation method may be executed by a computer (computer system).
  • The present disclosure can be realized as a program that causes a computer to execute the steps included in the estimation method, or as a non-transitory computer-readable recording medium, such as a CD-ROM, on which the program is recorded.
  • When the present disclosure is realized as a program (software), each step is executed by running the program using the computer's hardware resources, such as its CPU, memory, and input/output circuits: the CPU acquires data from the memory or input/output circuits, performs operations on it, and outputs the results to the memory or input/output circuits.
  • Each component of the estimation system 1 of the above embodiment may be implemented as a dedicated or general-purpose circuit, or as an LSI (Large Scale Integration) integrated circuit (IC).
  • The integrated circuit is not limited to an LSI and may be realized by a dedicated circuit or a general-purpose processor; a programmable FPGA (Field Programmable Gate Array), or a reconfigurable processor whose internal circuit-cell connections and settings can be reconfigured, may also be used.
  • The present disclosure can be applied, for example, to a system that performs control according to the state of a person, or to a system that recognizes a person.
  • Reference signs: 1 estimation system, 10 imaging unit, 20 detection unit, 30 determination unit, 40 estimation unit, 41 first recognition model, 42 second recognition model, 50 presentation unit, 60 display unit

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Engineering & Computer Science (AREA)
  • Analytical Chemistry (AREA)
  • Chemical & Material Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

This estimation system (1) is for estimating the state of a person or recognizing a person and comprises: an imaging unit (10) comprising a first imaging means using a first wavelength band and a second imaging means using a second wavelength band; a detection unit (20) for using an image, which has been obtained by using the imaging unit (10) to image a person, to detect whether there is an object being worn by the person; a determination unit (30) for determining whether to use the first imaging means or second imaging means for imaging on the basis of whether an object being worn by the person was detected; and an estimation unit (40) for estimating the state of the person or recognizing the person on the basis of an image obtained by imaging a person by the imaging unit (10) using the determined imaging means.

Description

Estimation system, estimation method, and program

The present disclosure relates to an estimation system, an estimation method, and a program for estimating a person's state or recognizing a person.

Conventionally, a technique for recognizing a person using a plurality of images captured at different times has been disclosed (for example, Patent Document 1). For example, even if part of a person's face is occluded by a wearable object such as sunglasses, the occluded part may appear in some of the images, depending on the ever-changing position and orientation of the face. By using a plurality of images captured at different times, a person wearing such an object can therefore be recognized.

JP 2013-196034 A

However, in some cases the part of the face occluded by the worn object appears in none of the images captured at different times, and the technique of Patent Document 1 then cannot estimate the state of the person wearing the object, or recognize that person, with high accuracy.

The present disclosure therefore provides an estimation system and the like capable of estimating the state of a person wearing an object, or recognizing such a person, with high accuracy.

An estimation system according to one aspect of the present disclosure is a system for estimating the state of a person or recognizing the person, comprising: an imaging unit having first imaging means for a first wavelength band and second imaging means for a second wavelength band; a detection unit that detects the presence or absence of an object worn by the person based on an image obtained by imaging the person with the imaging unit; a determination unit that determines, based on the detected presence or absence of the worn object, which of the first imaging means and the second imaging means to use for imaging; and an estimation unit that estimates the state of the person or recognizes the person based on an image obtained by imaging the person with the imaging unit using the determined imaging means.

According to the estimation system and the like of one aspect of the present disclosure, the state of a person wearing an object can be estimated, or the person can be recognized, with high accuracy.

FIG. 1 is a configuration diagram showing an example of an estimation system according to an embodiment. FIG. 2 is a diagram for explaining the first wavelength band and the second wavelength band. FIG. 3 is a flowchart showing an example of the operation of the estimation system according to the embodiment. FIG. 4 is a diagram for explaining the operation flow of the estimation system according to the embodiment. FIG. 5 is a diagram showing an example of the presentation unit in the embodiment. FIG. 6A is a diagram for explaining the imaging means determined based on the presence or absence of a worn object and the content of the specific information. FIG. 6B is a diagram for explaining the imaging means determined based on the presence or absence of a worn object and the content of the specific information. FIG. 6C is a diagram for explaining the imaging means determined based on the presence or absence of a worn object and the content of the specific information. FIG. 7 is a diagram showing an example of the display unit in the embodiment.
(Embodiment)

An estimation system according to an embodiment will be described below with reference to the drawings.

FIG. 1 is a configuration diagram showing an example of the estimation system 1 according to the embodiment.

The estimation system 1 is a system for estimating the state of a person or recognizing a person, and includes an imaging unit 10, a detection unit 20, a determination unit 30, an estimation unit 40, a first recognition model 41, a second recognition model 42, a presentation unit 50, and a display unit 60. The state of a person is, for example, the state of the person's facial expression (emotion, drowsiness, or the like) or the state of the person's line of sight; here, the state of the person's face is estimated. The estimation system 1 includes a processor, memory, and the like. The memory is, for example, ROM (Read Only Memory) or RAM (Random Access Memory), and can store programs executed by the processor. The detection unit 20, the determination unit 30, and the estimation unit 40 are implemented by a processor or the like that executes a program stored in the memory. The first recognition model 41 and the second recognition model 42 are stored in memory; the memory storing the program, the memory storing the first recognition model 41, and the memory storing the second recognition model 42 may be the same memory or different memories.
The imaging unit 10 is a component having first imaging means for a first wavelength band and second imaging means for a second wavelength band. The first wavelength band and the second wavelength band are different wavelength bands, although they may partially overlap. The two bands are described with reference to FIG. 2.

FIG. 2 is a diagram for explaining the first wavelength band and the second wavelength band.

For example, the first wavelength band is the visible light wavelength band and the second wavelength band is the infrared wavelength band. An RGB image is therefore obtained by the first imaging means in the visible light band, and an IR image by the second imaging means in the infrared band. In an RGB image, facial skin wrinkles, facial expression, contours, eye color, and hair color appear clearly, but when the person wears an object such as sunglasses or a mask, the part of the face occluded by the object cannot be seen. In an IR image, by contrast, the occluded part can be seen through the object even when the person wears sunglasses or a mask, but wrinkles, expression, contours, eye color, and hair color become unclear.

For example, the first imaging means may be imaging by a first camera that captures images in the first wavelength band, and the second imaging means may be imaging by a second camera that captures images in the second wavelength band; that is, the imaging unit 10 may be realized by two cameras. Alternatively, the first imaging means may use a second cut filter that blocks light in the second wavelength band, and the second imaging means may use a first cut filter that blocks light in the first wavelength band; that is, the imaging unit 10 may be realized by, for example, a single camera that switches between the first cut filter and the second cut filter.
The detection unit 20 detects the presence or absence of an object worn by the person, based on an image obtained by imaging the person with the imaging unit 10. The worn object is, for example, an object worn on the person's face, specifically sunglasses or a mask. The image used for this detection is, for example, an RGB image obtained by imaging the person with the imaging unit 10 using the first imaging means in the first wavelength band. The detection method is not particularly limited; for example, a trained model produced by machine learning may be used.

The determination unit 30 determines which of the first imaging means and the second imaging means to use for imaging, based on the detected presence or absence of an object worn by the person. This imaging obtains the image that the estimation unit 40 later uses for estimation or recognition, as described below. The determination unit 30 selects the second imaging means when it is detected that the person is wearing an object, and selects the first imaging means when it is detected that the person is not wearing an object.

The first recognition model 41 is a recognition model for recognizing images (RGB images) obtained by imaging a person with the imaging unit 10 using the first imaging means. It is a trained model produced by machine learning or the like from images obtained by the first imaging means; when such an image is input to the first recognition model 41, information indicating the state of the person in the image, or the result of recognizing that person, is output.

The second recognition model 42 is a recognition model for recognizing images (IR images) obtained by imaging a person with the imaging unit 10 using the second imaging means. It is a trained model produced by machine learning from images obtained by the second imaging means; when such an image is input to the second recognition model 42, information indicating the state of the person in the image, or the result of recognizing that person, is output.

The estimation unit 40 estimates the state of the person or recognizes the person based on an image obtained by imaging the person with the imaging unit 10 using the determined imaging means. When the first imaging means is selected, the estimation unit 40 recognizes the image obtained with the first imaging means using the first recognition model 41; when the second imaging means is selected, it recognizes the image obtained with the second imaging means using the second recognition model 42. For example, the estimation unit 40 estimates the state of a person's facial expression or line of sight, or recognizes a person (specifically, identifies an individual), and outputs the estimation or recognition result, for example to a user's mobile terminal or personal computer.

The presentation unit 50 is a component that presents specific information. For example, the presentation unit 50 is signage, or the display of a mobile terminal or personal computer; it may also be a display on a communication robot. The content of the specific information is, for example, a product menu, a store map, or a product suggestion. For example, the determination unit 30 may acquire information indicating the content of the specific information from the presentation unit 50 and determine which of the first and second imaging means to use based on that content, in addition to the presence or absence of an object worn by the person.

The display unit 60 displays, side by side for comparison, the recognition result obtained with the first recognition model 41 and the recognition result obtained with the second recognition model 42. The display unit 60 is, for example, the display of a mobile terminal or personal computer.
Next, the operation of the estimation system 1 is described in detail with reference to FIGS. 3 and 4. In the following, the first imaging means is also referred to as the RGB imaging means, and the second imaging means as the IR imaging means.

FIG. 3 is a flowchart showing an example of the operation of the estimation system 1 according to the embodiment.

FIG. 4 is a diagram for explaining the operation flow of the estimation system 1 according to the embodiment.

First, the detection unit 20 detects the presence or absence of an object worn by the person, based on an image obtained by imaging the person with the imaging unit 10 (step S11). For example, when an image showing a person wearing sunglasses is obtained, as on the left side of FIG. 4, the detection unit 20 detects the sunglasses and, as shown in the center of FIG. 4, detects that the person is wearing an object.

Next, the determination unit 30 determines which of the RGB imaging means and the IR imaging means to use for imaging, based on the detected presence or absence of the worn object (step S12). For example, when it is detected that the person is wearing an object, as in the center of FIG. 4, the determination unit 30 selects the IR imaging means.

Then, the estimation unit 40 estimates the state of the person or recognizes the person based on the image obtained by imaging the person with the imaging unit 10 using the determined imaging means (step S13). For example, when the IR imaging means is selected, the imaging unit 10 images the person with the IR imaging means and, as shown on the left side of FIG. 4, can acquire an IR image that passes through the worn object and shows the occluded part of the face (for example, the area around the eyes under the sunglasses). The estimation unit 40 can thereby estimate the person's state, such as line of sight or emotion, and recognize the person.

When it is detected that the person is not wearing an object, the determination unit 30 selects the RGB imaging means. In this case, the estimation unit 40 may estimate the state of the person or recognize the person based on the RGB image already used for detecting the presence or absence of a worn object, or the imaging unit 10 may image the person again with the RGB imaging means and the estimation unit 40 may use the newly obtained RGB image. RGB images usually show facial skin wrinkles, expression, contours, eye color, and hair color more clearly than IR images, so when no object is worn, a clear RGB image can be used for estimating the person's state or recognizing the person.
Next, a specific example in which the determination unit 30 determines the imaging means to use from the RGB and IR imaging means, based on the detected presence or absence of a worn object and on the content of the specific information presented by the presentation unit 50, is described with reference to FIGS. 5 to 6C.

FIG. 5 is a diagram showing an example of the presentation unit 50 in the embodiment.

FIGS. 6A to 6C are diagrams for explaining the imaging means determined based on the presence or absence of a worn object and the content of the specific information.

As shown in FIG. 5, the presentation unit 50 is, for example, signage that presents a product menu or product suggestions as the specific information. For example, the imaging unit 10 is installed on the presentation unit 50 and images a person viewing it.

FIG. 6A is an example of an image of a person viewing the presentation unit 50, captured when the person is wearing an object and the presented specific information is a product menu. As shown in FIG. 6A, when the person is wearing an object (specifically, sunglasses) and the content of the specific information is a product menu, the determination unit 30 selects the IR imaging means. This is because, for a product menu, the IR imaging means can see the person's line of sight through the sunglasses, making it possible to determine which product the person is paying attention to.

FIG. 6B is an example of an image of a person viewing the presentation unit 50, captured when the person is not wearing an object and the presented specific information is a product menu. As shown in FIG. 6B, when the person is not wearing an object (specifically, is not wearing sunglasses) and the content of the specific information is a product menu, the determination unit 30 selects the RGB imaging means. Since the person's line of sight can be determined without the IR imaging means, a clearer image can be obtained with the RGB imaging means.

FIG. 6C is an example of an image of a person viewing the presentation unit 50, captured when the person is wearing an object and the presented specific information is a product suggestion. As shown in FIG. 6C, when the person is wearing an object (specifically, sunglasses) and the content of the specific information is a product suggestion, the determination unit 30 selects the RGB imaging means. Even though the RGB imaging means cannot see the line of sight through the sunglasses, the person's emotions can be read from the area around the mouth, which is enough to judge the degree of interest in the suggested product.
Next, a specific example in which the display unit 60 displays, for comparison, the recognition result obtained with the first recognition model and the recognition result obtained with the second recognition model is described with reference to FIG. 7.

FIG. 7 is a diagram showing an example of the display unit 60 in the embodiment.

For example, the display unit 60 is a display that shows the person imaged by the imaging unit 10 together with the recognition results for that person. As the recognition results, the result obtained with the first recognition model and the result obtained with the second recognition model are displayed side by side. The person shown on the left side of FIG. 7 is wearing an object, so the recognition result of the IR image recognized with the second recognition model is displayed. However, because IR images can be unclear and their recognition results insufficiently accurate, the recognition result of the RGB image recognized with the first recognition model is also displayed for comparison. A user of the estimation system 1 who views the display unit 60 can then make the final judgment on the recognition result. Since the person shown on the right side of FIG. 7 is not wearing an object, in that case only the recognition result of the RGB image recognized with the first recognition model may be displayed.
 (Summary)
 The estimation system 1 is a system for estimating the state of a person or recognizing a person, and includes: an imaging unit 10 having first imaging means in a first wavelength band and second imaging means in a second wavelength band; a detection unit 20 that detects the presence or absence of an object worn by the person based on an image obtained by imaging the person with the imaging unit 10; a determination unit 30 that determines, based on the detected presence or absence of the worn object, the imaging means to be used for imaging from among the first imaging means and the second imaging means; and an estimation unit 40 that estimates the state of the person or recognizes the person based on an image obtained by imaging the person with the imaging unit 10 using the determined imaging means.
 According to this configuration, when a person is wearing a wearable object, the person can be imaged with the imaging means of whichever of the first and second wavelength bands can image through the wearable object (for example, the infrared wavelength band). In this case, an image showing the portion shielded by the wearable object can be used for estimating the person's state or recognizing the person, so the estimation or recognition can be performed with high accuracy even for a person wearing a wearable object. Conversely, when the person is not wearing a wearable object, the person can be imaged with the imaging means of whichever wavelength band does not pass through wearable objects (for example, the visible light wavelength band). Because imaging means in a wavelength band that images through wearable objects tends to render skin wrinkles, facial expressions, contours, eye color, hair color, and the like unclear, a clearer image can be used for the estimation or recognition in this case.
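 Purely as an illustration (not part of the original disclosure), the decision logic just described might be sketched in Python as follows; `WavelengthBand`, `detect_worn_object`, and `decide_imaging_means` are hypothetical names standing in for the detection unit 20 and the determination unit 30.

```python
from enum import Enum

class WavelengthBand(Enum):
    VISIBLE = "first imaging means (RGB)"
    INFRARED = "second imaging means (IR)"

def detect_worn_object(image) -> bool:
    """Stand-in for detection unit 20: True if a wearable object
    (e.g. sunglasses) is detected in the image."""
    raise NotImplementedError  # the actual detector is not specified here

def decide_imaging_means(image) -> WavelengthBand:
    """Stand-in for determination unit 30: choose the imaging means
    from the detected presence or absence of a worn object."""
    if detect_worn_object(image):
        # A worn object shields part of the face; IR can image through it.
        return WavelengthBand.INFRARED
    # No worn object: the visible-light image is the clearer one.
    return WavelengthBand.VISIBLE
```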
 The first imaging means may be imaging means using a first camera that performs imaging in the first wavelength band, and the second imaging means may be imaging means using a second camera that performs imaging in the second wavelength band.
 In this way, the first imaging means and the second imaging means may be realized by providing the imaging unit 10 with a first camera for imaging in the first wavelength band and a second camera for imaging in the second wavelength band.
 The first imaging means may be imaging means using a second cut filter that cuts light in the second wavelength band, and the second imaging means may be imaging means using a first cut filter that cuts light in the first wavelength band.
 In this way, the first imaging means and the second imaging means may be realized by providing the imaging unit 10 with, for example, a single camera and switching between the first cut filter and the second cut filter.
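 A minimal sketch of this single-camera variant, assuming a hypothetical `camera` object that exposes `set_filter()` and `capture()` and reusing the `WavelengthBand` enum from the previous sketch:

```python
class SingleCameraImagingUnit:
    """Hypothetical single-camera form of imaging unit 10 that realizes
    both imaging means by swapping cut filters."""

    def __init__(self, camera):
        self.camera = camera  # assumed to expose set_filter() and capture()

    def capture(self, band: WavelengthBand):
        if band is WavelengthBand.VISIBLE:
            # First imaging means: apply the second cut filter (cuts IR light).
            self.camera.set_filter("second cut filter")
        else:
            # Second imaging means: apply the first cut filter (cuts visible light).
            self.camera.set_filter("first cut filter")
        return self.camera.capture()
```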
 The estimation system 1 may further include a presentation unit 50 that presents specific information, and the determination unit 30 may determine the imaging means to be used for imaging from among the first imaging means and the second imaging means based on both the detected presence or absence of an object worn by the person and the content of the specific information.
 When estimating the state of, or recognizing, a person to whom specific information has been presented via the presentation unit 50, there are cases where, depending on the content of that information, an image of the portion shielded by the wearable object can be used effectively for the estimation or recognition, and cases where the shielded portion matters little. The imaging means suited to obtaining the image used for estimating the person's state or recognizing the person can therefore be determined according to the content of the specific information as well.
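 Continuing the same illustrative sketch, a content-aware determination consistent with FIGS. 6B and 6C might look as follows; the `ContentKind` enum and the rule it encodes are assumptions drawn only from the examples above, with the menu branch following the general rule that a worn object calls for the IR imaging means.

```python
class ContentKind(Enum):  # continues the earlier sketch (Enum, WavelengthBand)
    PRODUCT_MENU = "menu"          # gaze shows which item is being viewed
    PRODUCT_PROPOSAL = "proposal"  # interest shows around the mouth

def decide_with_content(image, content: ContentKind) -> WavelengthBand:
    """Determination based on both the worn object and the presented content."""
    worn = detect_worn_object(image)
    if content is ContentKind.PRODUCT_PROPOSAL:
        # Interest in a proposal can be judged from the mouth area, which
        # sunglasses do not hide, so the clearer RGB image is preferred.
        return WavelengthBand.VISIBLE
    # For a menu the line of sight matters: use IR when it may be hidden.
    return WavelengthBand.INFRARED if worn else WavelengthBand.VISIBLE
```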
 The estimation system 1 may further include a first recognition model 41 for recognizing images obtained by imaging the person with the imaging unit 10 using the first imaging means, and a second recognition model 42 for recognizing images obtained by imaging the person with the imaging unit 10 using the second imaging means. When the first imaging means is determined as the imaging means to be used, the estimation unit 40 may estimate the state of the person or recognize the person by recognizing, with the first recognition model 41, the image obtained by imaging the person with the imaging unit 10 using the first imaging means; when the second imaging means is determined, it may do so by recognizing, with the second recognition model 42, the image obtained by imaging the person with the imaging unit 10 using the second imaging means.
 According to this, by using the first recognition model 41, which is suited to recognizing images from the first imaging means, and the second recognition model 42, which is suited to recognizing images from the second imaging means, the state of the person can be estimated, or the person recognized, more accurately.
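 A sketch of this pairing, with `RecognitionModel` as a placeholder for the models 41 and 42 (how they are trained is outside this sketch):

```python
class RecognitionModel:
    """Placeholder for recognition models 41 and 42, each assumed to be
    trained on images from one imaging means."""

    def recognize(self, image):
        raise NotImplementedError

# One model per imaging means, as described above.
MODELS = {
    WavelengthBand.VISIBLE: RecognitionModel(),   # first recognition model 41
    WavelengthBand.INFRARED: RecognitionModel(),  # second recognition model 42
}

def estimate(image, band: WavelengthBand):
    """Stand-in for estimation unit 40: recognize the image with the model
    matching the imaging means that captured it."""
    return MODELS[band].recognize(image)
```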
 The estimation system 1 may further include a display unit 60 that displays, for comparison, the result of recognizing the person with the first recognition model 41 and the result of recognizing the person with the second recognition model 42.
 This allows the two recognition results to be compared visually.
 The first wavelength band may be a visible light wavelength band, and the second wavelength band may be an infrared wavelength band. The determination unit 30 may determine the second imaging means as the imaging means when it detects that the person is wearing a wearable object.
 In this way, when a person is wearing a wearable object, the person can be imaged with the second imaging means in the infrared wavelength band.
 (Other embodiments)
 As described above, the embodiment has been presented as an example of the technology according to the present disclosure. However, the technology according to the present disclosure is not limited to it, and is also applicable to embodiments in which changes, replacements, additions, omissions, and the like are made as appropriate. For example, the following modifications are also included in an embodiment of the present disclosure.
 For example, although the above embodiment describes an example in which the estimation system 1 includes the presentation unit 50, the estimation system 1 need not include the presentation unit 50.
 For example, although the above embodiment describes an example in which the estimation system 1 includes the first recognition model 41 and the second recognition model 42, the estimation system 1 need not include them.
 For example, although the above embodiment describes an example in which the estimation system 1 includes the display unit 60, the estimation system 1 need not include the display unit 60.
 Note that the present disclosure can be realized not only as the estimation system 1 but also as an estimation method including the steps (processes) performed by the components constituting the estimation system 1.
 The estimation method is a method performed by the estimation system 1 for estimating the state of a person or recognizing a person, where the estimation system 1 includes an imaging unit 10 having first imaging means in a first wavelength band and second imaging means in a second wavelength band. As shown in FIG. 3, the estimation method includes: a detection step (step S11) of detecting the presence or absence of an object worn by the person based on an image obtained by imaging the person with the imaging unit 10; a determination step (step S12) of determining the imaging means from among the first imaging means and the second imaging means based on the detected presence or absence of the worn object; and an estimation step (step S13) of estimating the state of the person or recognizing the person based on an image obtained by imaging the person with the imaging unit 10 using the determined imaging means.
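 Tying the hypothetical pieces above together, steps S11 to S13 might be sketched as a single pipeline. Capturing the initial detection image in the visible band is an assumption; the text does not specify which imaging means is used for the detection step.

```python
def estimation_method(imaging_unit):
    """Illustrative end-to-end pipeline for steps S11-S13 of FIG. 3,
    reusing the hypothetical pieces sketched above."""
    # S11: detect the presence or absence of a worn object.
    probe = imaging_unit.capture(WavelengthBand.VISIBLE)  # assumed band
    worn = detect_worn_object(probe)

    # S12: determine the imaging means from the detection result.
    band = WavelengthBand.INFRARED if worn else WavelengthBand.VISIBLE

    # S13: image with the determined means, then estimate or recognize.
    image = imaging_unit.capture(band)
    return estimate(image, band)
```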
 For example, the steps in the estimation method may be executed by a computer (computer system), and the present disclosure can be realized as a program for causing a computer to execute the steps included in the estimation method.
 Furthermore, the present disclosure can be realized as a non-transitory computer-readable recording medium, such as a CD-ROM, on which that program is recorded.
 For example, when the present disclosure is realized as a program (software), each step is executed by running the program using hardware resources such as a computer's CPU, memory, and input/output circuits. That is, each step is executed by the CPU acquiring data from the memory or input/output circuits, performing computation on it, and outputting the computation results to the memory or input/output circuits.
 Each component included in the estimation system 1 of the above embodiment may be realized as a dedicated or general-purpose circuit.
 Each component included in the estimation system 1 of the above embodiment may also be realized as an LSI (Large Scale Integration) chip, which is a type of integrated circuit (IC).
 The integrated circuit is not limited to an LSI and may be realized by a dedicated circuit or a general-purpose processor. A programmable FPGA (Field Programmable Gate Array), or a reconfigurable processor in which the connections and settings of the circuit cells inside the LSI can be reconfigured, may also be used.
 Furthermore, if a circuit-integration technology that replaces LSI emerges from advances in semiconductor technology or from other derived technologies, that technology may of course be used to integrate the components included in the estimation system 1.
 The present disclosure also includes forms obtained by applying various modifications conceivable to those skilled in the art to the embodiment, as well as forms realized by arbitrarily combining the components and functions of the embodiments without departing from the spirit of the present disclosure.
 The present disclosure is applicable to, for example, a system that performs control according to the state of a person or a system that recognizes a person.
 1 estimation system
 10 imaging unit
 20 detection unit
 30 determination unit
 40 estimation unit
 41 first recognition model
 42 second recognition model
 50 presentation unit
 60 display unit

Claims (9)

  1.  An estimation system for estimating a state of a person or recognizing the person, comprising:
     an imaging unit having first imaging means in a first wavelength band and second imaging means in a second wavelength band;
     a detection unit that detects the presence or absence of an object worn by the person based on an image obtained by imaging the person with the imaging unit;
     a determination unit that determines, based on the detected presence or absence of the object worn by the person, the imaging means to be used for imaging from among the first imaging means and the second imaging means; and
     an estimation unit that estimates the state of the person or recognizes the person based on an image obtained by imaging the person with the imaging unit using the determined imaging means.
  2.  The estimation system according to claim 1, wherein
     the first imaging means is imaging means using a first camera that performs imaging in the first wavelength band, and
     the second imaging means is imaging means using a second camera that performs imaging in the second wavelength band.
  3.  The estimation system according to claim 1, wherein
     the first imaging means is imaging means using a second cut filter that cuts light in the second wavelength band, and
     the second imaging means is imaging means using a first cut filter that cuts light in the first wavelength band.
  4.  The estimation system according to any one of claims 1 to 3, further comprising a presentation unit that presents specific information, wherein
     the determination unit determines the imaging means to be used for imaging from among the first imaging means and the second imaging means based on the detected presence or absence of the object worn by the person and the content of the specific information.
  5.  The estimation system according to any one of claims 1 to 4, further comprising:
     a first recognition model for recognizing an image obtained by imaging the person with the imaging unit using the first imaging means; and
     a second recognition model for recognizing an image obtained by imaging the person with the imaging unit using the second imaging means, wherein
     the estimation unit:
     when the first imaging means is determined as the imaging means to be used for imaging, estimates the state of the person or recognizes the person by recognizing, using the first recognition model, an image obtained by imaging the person with the imaging unit using the first imaging means; and
     when the second imaging means is determined as the imaging means to be used for imaging, estimates the state of the person or recognizes the person by recognizing, using the second recognition model, an image obtained by imaging the person with the imaging unit using the second imaging means.
  6.  The estimation system according to claim 5, further comprising a display unit that displays, for comparison, a recognition result of the person recognized using the first recognition model and a recognition result of the person recognized using the second recognition model.
  7.  The estimation system according to any one of claims 1 to 6, wherein
     the first wavelength band is a visible light wavelength band,
     the second wavelength band is an infrared wavelength band, and
     the determination unit determines the second imaging means as the imaging means when it is detected that the person is wearing an object.
  8.  An estimation method performed by an estimation system for estimating a state of a person or recognizing the person,
     the estimation system comprising an imaging unit having first imaging means in a first wavelength band and second imaging means in a second wavelength band,
     the estimation method comprising:
     a detection step of detecting the presence or absence of an object worn by the person based on an image obtained by imaging the person with the imaging unit;
     a determination step of determining the imaging means from among the first imaging means and the second imaging means based on the detected presence or absence of the worn object; and
     an estimation step of estimating the state of the person or recognizing the person based on an image obtained by imaging the person with the imaging unit using the determined imaging means.
  9.  A program for causing a computer to execute the estimation method according to claim 8.
PCT/JP2021/037911 2021-03-04 2021-10-13 Estimation system, estimation method, and program WO2022185596A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021034491 2021-03-04
JP2021-034491 2021-03-04

Publications (1)

Publication Number Publication Date
WO2022185596A1 true WO2022185596A1 (en) 2022-09-09

Family

ID=83154195

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/037911 WO2022185596A1 (en) 2021-03-04 2021-10-13 Estimation system, estimation method, and program

Country Status (1)

Country Link
WO (1) WO2022185596A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017011634A (en) * 2015-06-26 2017-01-12 キヤノン株式会社 Imaging device, control method for the same and program
JP2017055250A (en) * 2015-09-09 2017-03-16 富士通株式会社 Display control apparatus, display control method, and display control program
JP2018018401A (en) * 2016-07-29 2018-02-01 東芝アルパイン・オートモティブテクノロジー株式会社 Eyelid opening/closing detector and eyelid opening/closing detection method
WO2019069599A1 (en) * 2017-10-05 2019-04-11 ソニー株式会社 Image processing device and image processing method
WO2020049636A1 (en) * 2018-09-04 2020-03-12 日本電気株式会社 Identification system, model presentation method, and model presentation program
JP2020125618A (en) * 2019-02-04 2020-08-20 東京電力ホールディングス株式会社 Information processing method, program, water intake control system, and learned model creation method
JP2020156903A (en) * 2019-03-27 2020-10-01 Hoya株式会社 Processor for endoscopes, information processing unit, program, information processing method and learning model generation method

Similar Documents

Publication Publication Date Title
US11227158B2 (en) Detailed eye shape model for robust biometric applications
JP6885935B2 (en) Eye pose identification using eye features
JP7178403B2 (en) Detailed Eye Shape Model for Robust Biometric Applications
JP6722590B2 (en) Facial expression tracking
EP3047361B1 (en) A method and device for displaying a graphical user interface
WO2014128789A1 (en) Shape recognition device, shape recognition program, and shape recognition method
KR20200051591A (en) Information processing apparatus, information processing method, and program
JP2020057111A (en) Facial expression determination system, program and facial expression determination method
WO2014128751A1 (en) Head mount display apparatus, head mount display program, and head mount display method
KR102364929B1 (en) Electronic device, sever, and system for tracking skin changes
KR20200144196A (en) Electronic device and method for providing function using corneal image thereof
JP2002318652A (en) Virtual input device and its program
WO2021095277A1 (en) Line-of-sight detection method, line-of-sight detection device, and control program
JP2017191546A (en) Medical use head-mounted display, program of medical use head-mounted display, and control method of medical use head-mounted display
JP6745518B1 (en) Eye gaze detection method, eye gaze detection device, and control program
US11328187B2 (en) Information processing apparatus and information processing method
WO2022185596A1 (en) Estimation system, estimation method, and program
JP2021010652A (en) Information processing device, evaluation method, and information processing program
JP2010166939A (en) Expression measuring method, expression measuring program, and expression measuring apparatus
US20230410382A1 (en) Information processing apparatus, head-mounted display apparatus, information processing method, and non-transitory computer readable medium
JP7460450B2 (en) Gaze estimation system, gaze estimation method, gaze estimation program, learning data generation device, and gaze estimation device
US20240153136A1 (en) Eye tracking
Takacs et al. Sensing user needs: recognition technologies and user models for adaptive user interfaces
CN117707333A (en) Wearable electronic device for collaborative use
CN112836545A (en) 3D face information processing method and device and terminal

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21929157

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21929157

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP