WO2022176069A1 - Person Detection Device, Person Detection System, and Person Detection Method - Google Patents
- Publication number
- WO2022176069A1 (PCT/JP2021/005958)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- person
- dimensional model
- shape
- detection means
- person detection
- Prior art date
Classifications
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B21/00—Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
- G08B21/18—Status alarms
- G08B21/24—Reminder alarms, e.g. anti-loss alarms
Definitions
- the present disclosure relates to a human detection device and the like.
- Patent Document 1 discloses a technique for monitoring other objects (including people and objects) approaching a specific object in a monitoring target area using LiDAR (Light Detection and Ranging). That is, in the technique described in Patent Document 1, distance images corresponding to such an area are generated in time series using LiDAR. Movement of individual objects in the area is detected using the generated multiple range images. Based on the result of such detection, approaching objects are detected as described above (see the abstract of Patent Document 1, paragraphs [0008] to [0010], paragraph [0043], etc.).
- The technology described in Patent Document 2 is also known as a related technology.
- The technique described in Patent Document 1 detects movement of an object. In other words, it detects a moving object without distinguishing whether that object is a person or a thing.
- Discharge area: the area where water from the dam is discharged.
- Dangerous area: an area around the discharge area that is dangerous when people are present.
- The present disclosure has been made to solve the above problems, and aims to provide a human detection device and the like that can accurately detect a person present in a no-entry area.
- One form of the human detection device comprises: three-dimensional model generation means for generating a three-dimensional model of a no-entry area, based on reflected light corresponding to laser light irradiated onto the no-entry area within the area where the water of the dam is discharged and the surrounding area; and human detection means for detecting a person in the no-entry area using the three-dimensional model.
- One form of the human detection system comprises: three-dimensional model generation means for generating a three-dimensional model of a no-entry area, based on reflected light corresponding to laser light irradiated onto the no-entry area within the area where the water of the dam is discharged and the surrounding area; and human detection means for detecting a person in the no-entry area using the three-dimensional model.
- In one form of the human detection method, the three-dimensional model generation means generates a three-dimensional model of the no-entry area based on the reflected light corresponding to the laser light irradiated onto the no-entry area within the area where the water of the dam is discharged and the surrounding area, and the human detection means detects a person in the no-entry area using the three-dimensional model.
- FIG. 1 is a block diagram showing essential parts of the human detection system according to the first embodiment.
- FIG. 2 is a block diagram showing essential parts of the optical sensing device in the human detection system according to the first embodiment.
- FIG. 3 is a block diagram showing essential parts of the human detection device according to the first embodiment.
- FIG. 4 is a block diagram showing the hardware configuration of main parts of the human detection device according to the first embodiment.
- FIG. 5 is a block diagram showing another hardware configuration of main parts of the human detection device according to the first embodiment.
- FIG. 6 is a block diagram showing another hardware configuration of main parts of the human detection device according to the first embodiment.
- FIG. 7 is a flow chart showing the operation of the human detection device according to the first embodiment.
- FIG. 8 is an explanatory diagram showing a specific example of a no-entry area and the like.
- FIG. 9 is a block diagram showing essential parts of another human detection system according to the first embodiment.
- FIG. 10 is a block diagram showing essential parts of another human detection device according to the first embodiment.
- FIG. 11 is a block diagram showing essential parts of a human detection system according to the second embodiment.
- FIG. 12 is a block diagram showing essential parts of the human detection device according to the second embodiment.
- FIG. 13 is a flow chart showing the operation of the human detection device according to the second embodiment.
- A human detection system according to the first embodiment will be described with reference to FIGS. 1 to 3.
- the human detection system 100 includes an optical sensing device 1, a human detection device 2 and an output device 3.
- the optical sensing device 1 includes a light emitting section 11 and a light receiving section 12 .
- the human detection device 2 includes a three-dimensional model generation section 21 , a human detection section 22 and an output control section 23 .
- the optical sensing device 1 is installed inside or around the no-entry area.
- The no-entry area may include the discharge area and may also include the dangerous area.
- the discharge area is the area where the water of the dam is discharged.
- the dangerous area is an area around the discharge area that is dangerous when people are present. A specific example of the no-entry area will be described later with reference to FIG.
- the light emitting section 11 uses, for example, a laser light source.
- the light emitting unit 11 emits pulsed laser light.
- the direction in which the laser light is emitted by the light emitting portion 11 is variable.
- the light emitting section 11 sequentially emits laser light in a plurality of directions.
- the laser beam is irradiated so as to scan the no-entry area.
- the irradiated laser light is reflected as scattered light by objects (including people) within the no-entry area.
- Hereinafter, such scattered light may be referred to as "reflected light".
- the light receiving section 12 receives the reflected light.
- the light receiving section 12 uses, for example, a light receiving element.
- the 3D model generation unit 21 generates a 3D model of the no-entry area based on the laser light emitted by the light emission unit 11 and the reflected light received by the light reception unit 12 . More specifically, the 3D model generator 21 generates a 3D point cloud model corresponding to the shapes of individual objects in the no-entry area.
- the principle of ToF (Time of Flight) LiDAR is used to generate such a three-dimensional model.
- the three-dimensional model generation unit 21 acquires information indicating the timing at which the laser light was emitted in each direction and information indicating the timing at which the corresponding reflected light was received. These pieces of information are acquired from the optical sensing device 1, for example. The three-dimensional model generator 21 uses these pieces of information to calculate the one-way propagation distance corresponding to the round-trip propagation time for the laser light emitted in each direction and the corresponding reflected light.
- the three-dimensional model generation unit 21 calculates coordinate values indicating the position of the point where the laser beam emitted in each direction is reflected. Such coordinate values are coordinate values in a virtual three-dimensional coordinate space. By arranging points corresponding to individual coordinate values in the three-dimensional coordinate space, a three-dimensional point cloud model corresponding to the shapes of individual objects within the no-entry area is generated. A three-dimensional model is thus generated.
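As an illustrative sketch of the ToF computation described above (function and parameter names are assumptions, not part of the disclosure), the round-trip propagation time and emission direction can be converted into a point of the three-dimensional point cloud as follows:

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def tof_to_point(emit_time_s, receive_time_s, azimuth_rad, elevation_rad):
    """Convert one ToF measurement into a point in a sensor-centred frame.

    The one-way propagation distance is half the round-trip propagation
    time multiplied by the speed of light; the emission direction then
    fixes the coordinates of the reflection point.
    """
    distance = 0.5 * (receive_time_s - emit_time_s) * C
    x = distance * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = distance * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = distance * math.sin(elevation_rad)
    return (x, y, z)

def build_point_cloud(measurements):
    """measurements: iterable of (emit_time, receive_time, azimuth, elevation)."""
    return [tof_to_point(*m) for m in measurements]
```

For example, a pulse whose echo returns after 2 microseconds corresponds to a reflection point roughly 300 metres from the sensor.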
- The human detection unit 22 uses the three-dimensional model generated by the three-dimensional model generation unit 21 to detect a person present within the no-entry area. More specifically, the human detection unit 22 uses the generated three-dimensional model to determine whether or not each object present in the no-entry area is a person, thereby detecting people in the no-entry area.
- the three-dimensional model generated by the three-dimensional model generation unit 21 includes three-dimensional point cloud models corresponding to the shapes of individual objects existing within the no-entry area.
- Information (hereinafter referred to as "reference shape information") indicating the general shape of a person (hereinafter referred to as the "reference shape") is stored in advance in the human detection unit 22.
- Alternatively, the human detection unit 22 acquires the reference shape information.
- the reference shape information may include information indicating a plurality of reference shapes corresponding to different postures.
- the reference shape information may include information indicating a plurality of reference shapes corresponding to different line-of-sight directions.
- the reference shape information may include information indicating a plurality of reference shapes corresponding to different physiques.
- The human detection unit 22 uses the generated three-dimensional model and the stored or acquired reference shape information to compare the shape of each object present in the no-entry area with each reference shape.
- the human detection unit 22 determines whether each object is a person by so-called “pattern matching”. Specifically, for example, the human detection unit 22 computes the difference between the shape of each object and each reference shape. For at least one reference shape, when the calculated difference is within a predetermined range, the person detection unit 22 determines that the object is a person. Otherwise, the person detection unit 22 determines that the object is not a person. In this way, a person existing within the no-entry area is detected.
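A minimal sketch of this shape comparison follows. The distance metric and threshold below are assumptions for illustration; the disclosure only requires that the difference between each object's shape and each reference shape be computed and checked against a predetermined range.

```python
import numpy as np

def shape_difference(object_points, reference_points):
    """Mean nearest-neighbour distance from each object point to the
    reference point set (one possible difference measure)."""
    obj = np.asarray(object_points, dtype=float)
    ref = np.asarray(reference_points, dtype=float)
    # pairwise distance matrix of shape (n_object_points, n_reference_points)
    d = np.linalg.norm(obj[:, None, :] - ref[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

def is_person(object_points, reference_shapes, threshold):
    """The object is judged a person if its difference from at least one
    reference shape is within the predetermined range (here: <= threshold)."""
    return any(shape_difference(object_points, ref) <= threshold
               for ref in reference_shapes)
```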
- the output control unit 23 executes control to output information based on the result of detection by the human detection unit 22 (hereinafter referred to as "detection result information").
- the detection result information includes, for example, information indicating the presence or absence of a person in the no-entry area and information indicating the position of the person in the no-entry area.
- the detection result information may include information regarding the person detected by the person detection unit 22 .
- the output device 3 is used to output the detection result information.
- the output device 3 includes, for example, at least one of a display device, an audio output device, and a communication device.
- the display device uses, for example, a display.
- the audio output device uses, for example, a speaker.
- The communication device uses, for example, a dedicated transmitter and receiver.
- the output control unit 23 executes control to display an image corresponding to the detection result information.
- a display device of the output device 3 is used for displaying such an image.
- such an image may include the 3D model generated by the 3D model generation unit 21 .
- the output control unit 23 executes control to output a sound corresponding to the detection result information.
- An audio output device among the output devices 3 is used for outputting such audio.
- the output control unit 23 executes control to transmit a signal corresponding to the detection result information.
- a communication device of the output device 3 is used for transmission of such a signal.
- the dam maintenance management system 200 is a system for dam maintenance management. As shown in FIG. 1 , the dam maintenance management system 200 is provided outside the human detection system 100 . Hereinafter, the dam maintenance management system 200 may be referred to as an "external dam maintenance system".
- the output detection result information is used by the dam maintenance management system 200 for dam maintenance management. Specifically, for example, the output detection result information is used by the dam maintenance management system 200 to issue an alarm to people present in the no-entry area.
- For example, when the detection result information indicates that a person exists in the no-entry area, the dam maintenance management system 200 notifies the person of an alarm using speakers installed around the no-entry area.
- Alternatively, the dam maintenance management system 200 instructs other persons (for example, operators of the dam maintenance management system 200) to issue such an alarm. As a result, people in the no-entry area can be urged to evacuate.
- the information output by the output device 3 to the dam maintenance management system 200 is not limited to detection result information (that is, information about the detected person).
- the information to be output may include, in addition to the detection result information, information related to no-entry areas, information used for control of notification of the above-mentioned warning, and the like.
- the main part of the human detection system 100 is configured.
- the light emitting section 11 may be referred to as “light emitting means”.
- the light receiving section 12 may be referred to as “light receiving means”.
- the three-dimensional model generation unit 21 may be referred to as “three-dimensional model generation means”.
- the human detection unit 22 may be referred to as “human detection means”.
- the output control unit 23 may be referred to as "output control means”.
- Next, the hardware configuration of the main part of the human detection device 2 will be described with reference to FIGS. 4 to 6.
- the human detection device 2 uses a computer 31.
- the computer 31 may be provided integrally with the optical sensing device 1 .
- the computer 31 may be located elsewhere (eg, within a cloud network).
- some elements of the computer 31 may be provided integrally with the optical sensing device 1, and the remaining elements of the computer 31 may be provided elsewhere.
- computer 31 includes processor 41 and memory 42 .
- the memory 42 stores a program for causing the computer 31 to function as the three-dimensional model generation unit 21, the human detection unit 22, and the output control unit 23.
- The processor 41 reads and executes the programs stored in the memory 42. Thereby, the function F1 of the three-dimensional model generation unit 21, the function F2 of the human detection unit 22, and the function F3 of the output control unit 23 are realized.
- computer 31 includes processing circuitry 43 .
- the processing circuit 43 executes processing for causing the computer 31 to function as the three-dimensional model generation section 21 , the human detection section 22 and the output control section 23 . Thereby, functions F1 to F3 are realized.
- computer 31 includes processor 41 , memory 42 and processing circuitry 43 .
- As shown in FIG. 6, some of the functions F1 to F3 are implemented by the processor 41 and the memory 42, and the rest of the functions F1 to F3 are implemented by the processing circuit 43.
- the processor 41 is composed of one or more processors.
- the individual processors use, for example, CPUs (Central Processing Units), GPUs (Graphics Processing Units), microprocessors, microcontrollers, or DSPs (Digital Signal Processors).
- The memory 42 is composed of one or more memories. Each memory uses, for example, a RAM (Random Access Memory), ROM (Read Only Memory), flash memory, EPROM (Erasable Programmable Read Only Memory), EEPROM (Electrically Erasable Programmable Read Only Memory), hard disk drive, solid state drive, flexible disc, compact disc, DVD (Digital Versatile Disc), Blu-ray disc, MO (Magneto Optical) disc, or mini disc.
- The processing circuit 43 is composed of one or more processing circuits. Each processing circuit uses, for example, an ASIC (Application Specific Integrated Circuit), PLD (Programmable Logic Device), FPGA (Field Programmable Gate Array), SoC (System on a Chip), or system LSI (Large Scale Integration).
- the processor 41 may include a dedicated processor corresponding to each of the functions F1-F3.
- Memory 42 may include dedicated memory corresponding to each of functions F1-F3.
- the processing circuitry 43 may include dedicated processing circuitry corresponding to each of the functions F1-F3.
- the three-dimensional model generation unit 21 generates a three-dimensional model of the no-entry area (step ST1). ToF LiDAR technology is used to generate such a three-dimensional model, as described above.
- the person detection unit 22 detects a person existing in the no-entry area (step ST2). For such detection, the three-dimensional model generated in step ST1 is used as described above.
- the output control unit 23 executes control to output information (that is, detection result information) based on the detection result in step ST2 (step ST3).
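The three steps ST1 to ST3 above can be sketched as one processing cycle. All interface names below are assumptions for illustration, not part of the disclosure:

```python
def detection_cycle(scan, generate_model, detect_person, output_result):
    """One processing cycle of steps ST1-ST3.

    scan            -> returns (emitted, received) laser data
    generate_model  -> ST1: builds the 3D model from that data
    detect_person   -> ST2: detects people in the model
    output_result   -> ST3: outputs the detection result information
    """
    emitted, received = scan()
    model = generate_model(emitted, received)
    detection = detect_person(model)
    output_result(detection)
    return detection
```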
- D indicates a dam.
- Dam D may be a hydroelectric dam.
- R indicates a river located downstream with respect to the dam D.
- DD indicates an example of the discharge direction to the river R by the dam D.
- the optical sensing device 1 is installed at a location near the outlet of dam D (the so-called "spillway" or drain).
- A1 indicates the range downstream of dam D.
- the shape of the range A1 is fan-shaped.
- the central position of the sector corresponds to the installation position of the optical sensing device 1 .
- a central angle of such a sector is set to a predetermined value (for example, approximately 180 degrees).
- the radius of such sector is set to a predetermined value (eg, 300 meters). That is, the range A1 corresponds to the range in which the optical sensing device 1 can irradiate the laser beam.
- A2 indicates a rectangular range inscribed in the sector of range A1. As shown in FIG. 8, the range A2 includes part of the river R and part of the coastal area of the river R (more specifically, both bank areas).
- the range A2 is set as the no-entry area. That is, in this case, the portion of the river R included in the range A2 is the discharge area. Also, the part included in the range A2 in the coastal area (more specifically, both bank areas) of the river R is the dangerous area.
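As an illustrative sketch of this geometry (the coordinate frame and function names are assumptions; the roughly 180-degree central angle and 300-metre radius are the example values given above), membership in the fan-shaped range A1 and the rectangular no-entry area A2 could be tested as follows:

```python
import math

def within_sensor_range(x, y, radius=300.0, half_angle_deg=90.0):
    """Is (x, y), in a frame centred on the optical sensing device, inside
    the fan-shaped range A1 that the laser beam can reach?"""
    r = math.hypot(x, y)
    theta_deg = math.degrees(math.atan2(y, x))
    return r <= radius and abs(theta_deg) <= half_angle_deg

def in_no_entry_area(x, y, xmin, xmax, ymin, ymax):
    """Is the point inside the rectangular no-entry area A2?"""
    return xmin <= x <= xmax and ymin <= y <= ymax
```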
- P1 indicates a person who is inside the no-entry area.
- the detection result information may be used by the dam maintenance management system 200 to notify the person P1 of an alarm.
- each of S_1, S_2 and S_3 indicates a speaker used for notification of such an alarm.
- each of P2_1 and P2_2 indicates another person who notifies such an alarm under the direction of the dam maintenance management system 200.
- The no-entry area is not limited to the example shown in FIG. 8.
- range A2 inscribed in range A1 may include the estuary of river R.
- the range A2 can include the coastal area around the estuary. In this case, such a coastal area may be included in the dangerous area.
- As described above, whether each object present within the no-entry area is a person can be determined by pattern matching based on its shape. Therefore, when an object is a person, the person can be detected based on the person's shape at the timing when the laser beam is irradiated, regardless of whether the person is stationary or moving. That is, a person present in the no-entry area can be detected regardless of whether the person is stationary or moving. In other words, such a person can be accurately detected.
- the light emitting unit 11 and the light receiving unit 12 may be provided in the human detection device 2 instead of being provided in the optical sensing device 1 . That is, the human detection device 2 may include the light emitting section 11 and the light receiving section 12 . In this case, the optical sensing device 1 is unnecessary.
- the human detection system 100 may include a three-dimensional model generation unit 21 and a human detection unit 22.
- the three-dimensional model generation unit 21 and the human detection unit 22 may constitute a main part of the human detection system 100 .
- the optical sensing device 1 may be provided outside the human detection system 100 .
- the output control unit 23 may be provided outside the human detection system 100 .
- the output device 3 may be provided outside the human detection system 100 .
- each of the three-dimensional model generation unit 21 and the human detection unit 22 may be configured by an independent device.
- the human detection device 2 may include a three-dimensional model generation unit 21 and a human detection unit 22.
- the three-dimensional model generation unit 21 and the human detection unit 22 may constitute the main part of the human detection device 2 .
- the output control section 23 may be provided outside the human detection device 2 .
- the three-dimensional model generation unit 21 generates a three-dimensional model of the no-entry area based on the reflected light corresponding to the laser light irradiated to the no-entry area.
- the human detection unit 22 uses a three-dimensional model to detect people in the no-entry area.
- a person present in the no-entry area can be detected regardless of whether such person is stationary or moving. That is, such persons can be accurately detected.
- FIG. 11 is a block diagram showing essential parts of a human detection system according to the second embodiment.
- FIG. 12 is a block diagram showing essential parts of the human detection device according to the second embodiment.
- a human detection system according to the second embodiment will be described with reference to FIGS. 11 and 12.
- In FIGS. 11 and 12, blocks similar to those shown in FIGS. 1 to 3 are denoted by the same reference numerals, and description thereof is omitted.
- The human detection system 100a includes an optical sensing device 1, a human detection device 2a, and an output device 3.
- The human detection device 2a includes a three-dimensional model generation section 21, a human detection section 22a, an output control section 23, and a motion detection section 24.
- the motion detection unit 24 detects the motion of individual objects existing within the no-entry area based on the laser light emitted by the light emitting unit 11 and the reflected light received by the light receiving unit 12 .
- the principle of Doppler LiDAR is used for such motion detection.
- the 3D model generated by the 3D model generation unit 21 includes 3D point cloud models corresponding to the shapes of individual objects existing within the no-entry area.
- the motion detection unit 24 acquires information indicating frequency components contained in the corresponding laser light and information indicating frequency components contained in the corresponding reflected light for each point in the three-dimensional point cloud model. These pieces of information are acquired from the optical sensing device 1, for example.
- the motion detector 24 uses this information to calculate the Doppler shift amount in the corresponding reflected light for each object.
- the Doppler shift amount is based on the frequency of the laser light emitted from the light emitting section 11 . In other words, the Doppler shift amount is based on the difference between the frequency component included in the corresponding laser light and the frequency component included in the corresponding reflected light for each object.
- the motion detection unit 24 detects the motion of individual objects existing within the no-entry area based on the calculated Doppler shift amount. More specifically, the motion detector 24 detects the direction and speed of such motion. Thereby, the presence or absence of such movement is also detected.
- Various known techniques can be used for object motion detection based on the principle of Doppler LiDAR. A detailed description of these techniques is omitted.
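While those known techniques are not detailed here, the basic relation between the Doppler shift amount and the object's radial velocity can be sketched as follows (a simplified illustration; function and parameter names are assumptions):

```python
C = 299_792_458.0  # speed of light [m/s]

def radial_velocity(f_emitted_hz, f_received_hz):
    """Radial speed of the reflecting object from the Doppler shift of the
    reflected laser light. For light reflected off a moving target the
    shift is approximately 2 * v * f / c, so v = c * delta_f / (2 * f).
    A positive result means the object is approaching the sensor."""
    delta_f = f_received_hz - f_emitted_hz
    return C * delta_f / (2.0 * f_emitted_hz)
```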
- When an object present in the no-entry area is stationary, the human detection unit 22a determines whether the object is a person by the same detection method as that of the human detection unit 22. That is, the human detection unit 22a compares the shape of the object with the reference shapes and determines whether the object is a person by pattern matching. On the other hand, when such an object is moving, the human detection unit 22a determines whether or not the object is a person as follows.
- Information (hereinafter referred to as "reference range information") indicating the range (hereinafter referred to as the "reference range") that the direction and speed of a person's movement can normally take in the no-entry area is stored in advance in the human detection unit 22a. Alternatively, the human detection unit 22a acquires the reference range information.
- the reference range information may include information indicating a plurality of reference ranges corresponding to mutually different positions in the no-entry area.
- For a moving object among the objects present in the no-entry area, the human detection unit 22a uses the stored or acquired reference range information and the result of detection by the motion detection unit 24 to determine whether the movement of the object is within the reference range. If the motion of the object is within the reference range, the human detection unit 22a determines that the object is a person. On the other hand, if the movement of the object is out of the reference range, the human detection unit 22a determines that the object is not a person. In this way, a person present within the no-entry area is detected.
- Alternatively, the human detection unit 22a compares the shape of a moving object among the objects present in the no-entry area with the reference shapes, and also determines whether the movement of the object is within the reference range. If the difference between the shape of the object and a reference shape is within a predetermined range and the movement of the object is within the reference range, the human detection unit 22a determines that the object is a person. Otherwise, the human detection unit 22a determines that the object is not a person. In this way, a person present within the no-entry area is detected.
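A compact sketch of this combined determination (the thresholds and the representation of motion below are assumptions for illustration):

```python
def is_person_combined(shape_diff, speed_m_s, direction_ok,
                       shape_threshold, speed_range):
    """Judge an object a person only when its shape matches a reference
    shape (difference within the predetermined range) AND its motion lies
    within the reference range of normal human movement."""
    shape_ok = shape_diff <= shape_threshold
    motion_ok = direction_ok and (speed_range[0] <= speed_m_s <= speed_range[1])
    return shape_ok and motion_ok
```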
- the main part of the human detection system 100a is configured.
- human detection unit 22a may be referred to as “human detection means”
- motion detection unit 24 may be referred to as “motion detection means”.
- the hardware configuration of the main part of the human detection device 2a is the same as that described with reference to FIGS. 4 to 6 in the first embodiment. Therefore, detailed description is omitted.
- The function F1 of the three-dimensional model generation unit 21, the function F2a of the human detection unit 22a, the function F3 of the output control unit 23, and the function F4 of the motion detection unit 24 may be realized by the processor 41 and the memory 42. Alternatively, the functions F1, F2a, F3, and F4 may be realized by the processing circuit 43.
- the processor 41 may include dedicated processors corresponding to each of the functions F1, F2a, F3, and F4.
- Memory 42 may include dedicated memory corresponding to each of functions F1, F2a, F3, and F4.
- the processing circuitry 43 may include dedicated processing circuitry corresponding to each of the functions F1, F2a, F3, and F4.
- the three-dimensional model generation unit 21 generates a three-dimensional model of the no-entry area (step ST1). ToF LiDAR technology is used to generate such a three-dimensional model, as described in the first embodiment.
- The motion detector 24 detects the motion of each object present within the no-entry area (step ST4). Such motion detection uses Doppler LiDAR technology, as described above.
- the human detection unit 22a detects a person existing in the no-entry area (step ST2a).
- the three-dimensional model generated in step ST1 is used as described in the first embodiment.
- such detection uses the result of motion detection in step ST4, as described above.
- the output control unit 23 executes control to output information (that is, detection result information) based on the detection result in step ST2a (step ST3).
- Using the human detection system 100a, whether each object present within the no-entry area is a person can be determined regardless of whether the object is stationary or moving. That is, a person present in the no-entry area can be detected regardless of whether the person is stationary or moving. In other words, such a person can be accurately detected.
- the human detection system 100a can employ various modifications similar to those described in the first embodiment.
- the light emitting unit 11 and the light receiving unit 12 may be provided in the human detection device 2a instead of in the optical sensing device 1. That is, the human detection device 2a may include the light emitting unit 11 and the light receiving unit 12.
- the human detection system 100a may include a three-dimensional model generation unit 21, a human detection unit 22a, and a motion detection unit 24.
- the three-dimensional model generation unit 21, the human detection unit 22a, and the motion detection unit 24 may constitute a main part of the human detection system 100a.
- the human detection device 2a may include a three-dimensional model generation unit 21, a human detection unit 22a, and a motion detection unit 24.
- the three-dimensional model generation unit 21, the human detection unit 22a, and the motion detection unit 24 may constitute a main part of the human detection device 2a.
[Appendix]
- [Appendix 1] A person detection device comprising: three-dimensional model generation means for generating a three-dimensional model of a no-entry area, based on reflected light corresponding to laser light irradiated onto the no-entry area, in an area where the water of a dam is discharged and its surrounding area; and person detection means for detecting a person in the no-entry area using the three-dimensional model.
- [Appendix 2] The person detection device according to appendix 1, wherein the person detection means detects the person by determining, using the three-dimensional model, whether or not an object in the no-entry area is the person based on the shape of the object.
- [Appendix 3] The person detection device according to appendix 2, wherein the person detection means determines whether or not the object is the person by comparing the shape of the object with a reference shape.
- [Appendix 4] The person detection device according to appendix 2 or appendix 3, further comprising motion detection means for detecting the motion of the object based on the difference between a frequency component contained in the laser light and a frequency component contained in the reflected light, wherein the person detection means determines whether or not the object is the person based on the shape of the object and the motion of the object.
- [Appendix 5] The person detection device according to appendix 4, wherein the person detection means determines whether or not the object is the person based on the shape of the object when the object is stationary, and based on the motion of the object when the object is moving.
- [Appendix 6] The person detection device according to appendix 4, wherein the person detection means determines whether or not the object is the person based on the shape of the object when the object is stationary, and based on the shape and the motion of the object when the object is moving.
- [Appendix 7] The person detection device according to any one of appendices 1 to 6, further comprising output control means for outputting information about the person to an external dam maintenance system, wherein the information is used for alarm notification to the person.
- [Appendix 8] A person detection system comprising: three-dimensional model generation means for generating a three-dimensional model of a no-entry area, based on reflected light corresponding to laser light irradiated onto the no-entry area, in an area where the water of a dam is discharged and its surrounding area; and person detection means for detecting a person in the no-entry area using the three-dimensional model.
- [Appendix 9] The person detection system according to appendix 8, wherein the person detection means detects the person by determining, using the three-dimensional model, whether or not an object in the no-entry area is the person based on the shape of the object.
- [Appendix 10] The person detection system according to appendix 9, wherein the person detection means determines whether or not the object is the person by comparing the shape of the object with a reference shape.
- [Appendix 11] The person detection system according to appendix 9 or appendix 10, further comprising motion detection means for detecting the motion of the object based on the difference between a frequency component contained in the laser light and a frequency component contained in the reflected light, wherein the person detection means determines whether or not the object is the person based on the shape of the object and the motion of the object.
- [Appendix 12] The person detection system according to appendix 11, wherein the person detection means determines whether or not the object is the person based on the shape of the object when the object is stationary, and based on the motion of the object when the object is moving.
- [Appendix 13] The person detection system according to appendix 11, wherein the person detection means determines whether or not the object is the person based on the shape of the object when the object is stationary, and based on the shape and the motion of the object when the object is moving.
- [Appendix 14] The person detection system according to any one of appendices 8 to 13, further comprising output control means for outputting information about the person to an external dam maintenance system, wherein the information is used for alarm notification to the person.
- [Appendix 15] A person detection method comprising: generating, by three-dimensional model generation means, a three-dimensional model of a no-entry area, based on reflected light corresponding to laser light irradiated onto the no-entry area, in an area where the water of a dam is discharged and its surrounding area; and detecting, by person detection means, a person in the no-entry area using the three-dimensional model.
- [Appendix 16] The person detection method according to appendix 15, wherein the person detection means detects the person by determining, using the three-dimensional model, whether or not an object in the no-entry area is the person based on the shape of the object.
- [Appendix 17] The person detection method according to appendix 16, wherein the person detection means determines whether or not the object is the person by comparing the shape of the object with a reference shape.
- [Appendix 18] The person detection method according to appendix 16 or appendix 17, wherein motion detection means detects the motion of the object based on the difference between a frequency component contained in the laser light and a frequency component contained in the reflected light, and the person detection means determines whether or not the object is the person based on the shape of the object and the motion of the object.
- [Appendix 19] The person detection method according to appendix 18, wherein the person detection means determines whether or not the object is the person based on the shape of the object when the object is stationary, and based on the motion of the object when the object is moving.
- [Appendix 20] The person detection method according to appendix 18, wherein the person detection means determines whether or not the object is the person based on the shape of the object when the object is stationary, and based on the shape and the motion of the object when the object is moving.
- [Appendix 21] The person detection method according to any one of appendices 15 to 20, wherein output control means outputs information about the person to an external dam maintenance system, and the information is used for alarm notification to the person.
- [Appendix 22] A recording medium on which a program is recorded, the program causing a computer to function as: three-dimensional model generation means for generating a three-dimensional model of a no-entry area, based on reflected light corresponding to laser light irradiated onto the no-entry area, in an area where the water of a dam is discharged and its surrounding area; and person detection means for detecting a person in the no-entry area using the three-dimensional model.
- [Appendix 23] The recording medium according to appendix 22, wherein the person detection means detects the person by determining, using the three-dimensional model, whether or not an object in the no-entry area is the person based on the shape of the object.
- [Appendix 24] The recording medium according to appendix 23, wherein the person detection means determines whether or not the object is the person by comparing the shape of the object with a reference shape.
- [Appendix 25] The recording medium according to appendix 23 or appendix 24, wherein the program further causes the computer to function as motion detection means for detecting the motion of the object based on the difference between a frequency component contained in the laser light and a frequency component contained in the reflected light, and the person detection means determines whether or not the object is the person based on the shape of the object and the motion of the object.
- [Appendix 26] The recording medium according to appendix 25, wherein the person detection means determines whether or not the object is the person based on the shape of the object when the object is stationary, and based on the motion of the object when the object is moving.
- [Appendix 27] The recording medium according to appendix 25, wherein the person detection means determines whether or not the object is the person based on the shape of the object when the object is stationary, and based on the shape and the motion of the object when the object is moving.
- [Appendix 28] The recording medium according to any one of appendices 22 to 27, wherein the program further causes the computer to function as output control means for outputting information about the person to an external dam maintenance system, and the information is used for alarm notification to the person.
Landscapes
- Business, Economics & Management (AREA)
- Emergency Management (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Optical Radar Systems And Details Thereof (AREA)
- Geophysics And Detection Of Objects (AREA)
Abstract
Description
FIG. 1 is a block diagram showing the main part of the person detection system according to the first embodiment. FIG. 2 is a block diagram showing the main part of the optical sensing device in the person detection system according to the first embodiment. FIG. 3 is a block diagram showing the main part of the person detection device according to the first embodiment. The person detection system according to the first embodiment will be described with reference to FIGS. 1 to 3.
FIG. 11 is a block diagram showing the main part of the person detection system according to the second embodiment. FIG. 12 is a block diagram showing the main part of the person detection device according to the second embodiment. The person detection system according to the second embodiment will be described with reference to FIGS. 11 and 12. In FIGS. 11 and 12, blocks similar to those shown in FIGS. 1 to 3 are given the same reference signs, and their description is omitted.
2, 2a human detection device
3 output device
11 light emitting unit
12 light receiving unit
21 three-dimensional model generation unit
22, 22a human detection unit
23 output control unit
24 motion detection unit
31 computer
41 processor
42 memory
43 processing circuit
100, 100a human detection system
200 dam maintenance management system
Claims (21)
- A person detection device comprising: three-dimensional model generation means for generating a three-dimensional model of a no-entry area, based on reflected light corresponding to laser light irradiated onto the no-entry area, in an area where the water of a dam is discharged and its surrounding area; and person detection means for detecting a person in the no-entry area using the three-dimensional model.
- The person detection device according to claim 1, wherein the person detection means detects the person by determining, using the three-dimensional model, whether or not an object in the no-entry area is the person based on the shape of the object.
- The person detection device according to claim 2, wherein the person detection means determines whether or not the object is the person by comparing the shape of the object with a reference shape.
- The person detection device according to claim 2 or claim 3, further comprising motion detection means for detecting the motion of the object based on the difference between a frequency component contained in the laser light and a frequency component contained in the reflected light, wherein the person detection means determines whether or not the object is the person based on the shape of the object and the motion of the object.
- The person detection device according to claim 4, wherein the person detection means determines whether or not the object is the person based on the shape of the object when the object is stationary, and based on the motion of the object when the object is moving.
- The person detection device according to claim 4, wherein the person detection means determines whether or not the object is the person based on the shape of the object when the object is stationary, and based on the shape and the motion of the object when the object is moving.
- The person detection device according to any one of claims 1 to 6, further comprising output control means for outputting information about the person to an external dam maintenance system, wherein the information is used for alarm notification to the person.
- A person detection system comprising: three-dimensional model generation means for generating a three-dimensional model of a no-entry area, based on reflected light corresponding to laser light irradiated onto the no-entry area, in an area where the water of a dam is discharged and its surrounding area; and person detection means for detecting a person in the no-entry area using the three-dimensional model.
- The person detection system according to claim 8, wherein the person detection means detects the person by determining, using the three-dimensional model, whether or not an object in the no-entry area is the person based on the shape of the object.
- The person detection system according to claim 9, wherein the person detection means determines whether or not the object is the person by comparing the shape of the object with a reference shape.
- The person detection system according to claim 9 or claim 10, further comprising motion detection means for detecting the motion of the object based on the difference between a frequency component contained in the laser light and a frequency component contained in the reflected light, wherein the person detection means determines whether or not the object is the person based on the shape of the object and the motion of the object.
- The person detection system according to claim 11, wherein the person detection means determines whether or not the object is the person based on the shape of the object when the object is stationary, and based on the motion of the object when the object is moving.
- The person detection system according to claim 11, wherein the person detection means determines whether or not the object is the person based on the shape of the object when the object is stationary, and based on the shape and the motion of the object when the object is moving.
- The person detection system according to any one of claims 8 to 13, further comprising output control means for outputting information about the person to an external dam maintenance system, wherein the information is used for alarm notification to the person.
- A person detection method comprising: generating, by three-dimensional model generation means, a three-dimensional model of a no-entry area, based on reflected light corresponding to laser light irradiated onto the no-entry area, in an area where the water of a dam is discharged and its surrounding area; and detecting, by person detection means, a person in the no-entry area using the three-dimensional model.
- The person detection method according to claim 15, wherein the person detection means detects the person by determining, using the three-dimensional model, whether or not an object in the no-entry area is the person based on the shape of the object.
- The person detection method according to claim 16, wherein the person detection means determines whether or not the object is the person by comparing the shape of the object with a reference shape.
- The person detection method according to claim 16 or claim 17, wherein motion detection means detects the motion of the object based on the difference between a frequency component contained in the laser light and a frequency component contained in the reflected light, and the person detection means determines whether or not the object is the person based on the shape of the object and the motion of the object.
- The person detection method according to claim 18, wherein the person detection means determines whether or not the object is the person based on the shape of the object when the object is stationary, and based on the motion of the object when the object is moving.
- The person detection method according to claim 18, wherein the person detection means determines whether or not the object is the person based on the shape of the object when the object is stationary, and based on the shape and the motion of the object when the object is moving.
- The person detection method according to any one of claims 15 to 20, wherein output control means outputs information about the person to an external dam maintenance system, and the information is used for alarm notification to the person.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2023500189A JPWO2022176069A1 (ja) | 2021-02-17 | 2021-02-17 | |
PCT/JP2021/005958 WO2022176069A1 (ja) | 2021-02-17 | 2021-02-17 | 人検出装置、人検出システム及び人検出方法 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2021/005958 WO2022176069A1 (ja) | 2021-02-17 | 2021-02-17 | 人検出装置、人検出システム及び人検出方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022176069A1 true WO2022176069A1 (ja) | 2022-08-25 |
Family
ID=82930340
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/005958 WO2022176069A1 (ja) | 2021-02-17 | 2021-02-17 | 人検出装置、人検出システム及び人検出方法 |
Country Status (2)
Country | Link |
---|---|
JP (1) | JPWO2022176069A1 (ja) |
WO (1) | WO2022176069A1 (ja) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001118176A (ja) * | 1999-10-15 | 2001-04-27 | Fujitsu Denso Ltd | 放流警報装置 |
JP2005056406A (ja) * | 2003-07-24 | 2005-03-03 | Victor Co Of Japan Ltd | 画像の動き検出装置及びコンピュータプログラム |
JP2005216160A (ja) * | 2004-01-30 | 2005-08-11 | Secom Co Ltd | 画像生成装置、侵入者監視装置及び画像生成方法 |
JP2007122507A (ja) * | 2005-10-28 | 2007-05-17 | Secom Co Ltd | 侵入検知装置 |
JP2015213251A (ja) * | 2014-05-02 | 2015-11-26 | 株式会社Ihi | 挙動解析装置、監視システム及びアミューズメントシステム |
WO2020188676A1 (ja) * | 2019-03-18 | 2020-09-24 | 三菱電機株式会社 | ライダ装置及び空気調和機 |
Worldwide applications (2021):
- 2021-02-17 JP JP2023500189A patent/JPWO2022176069A1/ja active Pending
- 2021-02-17 WO PCT/JP2021/005958 patent/WO2022176069A1/ja active Application Filing
Also Published As
Publication number | Publication date |
---|---|
JPWO2022176069A1 (ja) | 2022-08-25 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21926508; Country of ref document: EP; Kind code of ref document: A1 |
| WWE | Wipo information: entry into national phase | Ref document number: 2023500189; Country of ref document: JP |
| WWE | Wipo information: entry into national phase | Ref document number: 18275991; Country of ref document: US |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 21926508; Country of ref document: EP; Kind code of ref document: A1 |