WO2022070832A1 - Information Processing System and Information Processing Method - Google Patents
- Publication number: WO2022070832A1 (PCT/JP2021/033221)
- Authority
- WO
- WIPO (PCT)
Classifications
- B60W40/02—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems, related to ambient conditions
- B60R21/013—Electrical circuits for triggering passive safety arrangements (e.g. airbags, safety belt tighteners) including means for detecting collisions, impending collisions or roll-over
- B60W50/14—Means for informing the driver, warning the driver or prompting a driver intervention
- B60W2050/143—Alarm means
- E02F9/24—Safety devices, e.g. for preventing overload
- E02F9/261—Surveying the work-site to be treated
- E02F9/262—Surveying the work-site to be treated with follow-up actions to control the work tool, e.g. controller
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G08G1/16—Anti-collision systems
- G08G1/205—Indicating the location of the monitored vehicles as destination, e.g. accidents, stolen, rental
- B60W2300/17—Indexing code: construction vehicles, e.g. graders, excavators
- B60W2420/40—Indexing code: photo, light or radio wave sensitive means, e.g. infrared sensors
- B60W2420/403—Indexing code: image sensing, e.g. optical camera
- B60Y2200/41—Indexing code: construction vehicles, e.g. graders, excavators
Definitions
- This disclosure relates to an information processing system and an information processing method.
- This disclosure proposes an information processing system and an information processing method that can improve both present and future safety at a work site.
- The information processing system of one form according to the present disclosure is an information processing system for ensuring the safety of a site where heavy machinery is deployed. It includes: one or more sensor units that are mounted on equipment arranged at the site and detect the situation at the site; a recognition unit that recognizes the situation at the site based on the sensor data acquired by the one or more sensor units; and a device management unit that manages the equipment based on the recognition result of the recognition unit.
- FIG. 1 is a block diagram showing a system configuration example of the information processing system according to the embodiment of the present disclosure.
- the information processing system 1 is composed of, for example, one or more devices 100, a cloud 200, and a field server 300.
- The device 100, the cloud 200, and the field server 300 are connected to each other via a predetermined network such as a wired or wireless LAN (Local Area Network, including Wi-Fi), a WAN (Wide Area Network), the Internet, a mobile communication system (4G (4th Generation Mobile Communication System), 4G-LTE (Long Term Evolution), 5G, etc.), Bluetooth (registered trademark), or infrared communication.
- The device 100 is, for example, construction equipment such as heavy machinery, or measuring equipment, used at a work site such as a construction or civil engineering site.
- The device 100 also includes construction equipment and the like equipped with measuring equipment.
- The device 100 is not limited to construction equipment and measuring equipment. Moving objects such as automobiles, railroad vehicles, aircraft (including helicopters), and ships that are directly or remotely operated by drivers; autonomous robots such as transport robots, cleaning robots, interactive robots, and pet robots; various drones (including flying, traveling, and underwater types); structures such as surveillance cameras (including fixed-point cameras) and traffic signals; and various other objects that can be connected to a predetermined network and have a sensor, such as smartphones carried by humans or pets, wearable devices, and information processing terminals, may also be applied as the device 100.
- Each device 100 includes a sensor unit 101, a position/surrounding recognition unit 110, a device management unit 120, a monitor 131, a user interface 132, an output unit 133, a device control unit 134, and an operation system 135.
- The sensor units include, for example: various image sensors 102 such as a color or monochrome image sensor, a ToF (Time-of-Flight) sensor, a ranging sensor such as LiDAR (Light Detection and Ranging), LADAR (Laser Detection and Ranging), or millimeter-wave radar, and an EVS (Event-based Vision Sensor); various inertial sensors 105 such as an IMU (Inertial Measurement Unit), a gyro sensor, and an acceleration/angular velocity sensor; and various position sensors 108 such as a GNSS (Global Navigation Satellite System) receiver.
- Each of the one or more sensor units 101, 104, 107 is composed of such a sensor and a signal processing unit 103, 106, 109 that performs predetermined processing on the detection signal output from the sensor to generate sensor data.
- In addition, various other sensors such as a sound sensor, a pressure sensor, a water pressure sensor, an illuminance sensor, a temperature sensor, a humidity sensor, an infrared sensor, and a wind direction and speed sensor may be used.
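As a hedged illustration of the sensor pipeline described above (sensor, then signal processing unit, then sensor data), the following sketch shows one way a signal processing unit such as 103, 106, or 109 might turn a raw detection signal into calibrated sensor data. The data-structure fields, function names, and the linear calibration are assumptions for illustration only, not details taken from the disclosure.

```python
from dataclasses import dataclass
import time

@dataclass
class SensorData:
    """Structured sensor output; field names are illustrative assumptions."""
    sensor_id: str
    timestamp: float
    values: list

def signal_processing(sensor_id, raw_signal, scale=1.0, offset=0.0):
    """Toy stand-in for the signal processing units 103, 106, 109:
    apply a linear calibration to the raw detection signal."""
    calibrated = [scale * v + offset for v in raw_signal]
    return SensorData(sensor_id=sensor_id, timestamp=time.time(), values=calibrated)

# e.g. acceleration samples from an (assumed) inertial sensor
data = signal_processing("imu_105", [0.1, 0.2, 0.3], scale=2.0)
```

Here `SensorData` merely stands in for whatever structured output the real signal processing units produce before it reaches the recognition unit.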
- The position/surrounding recognition unit 110 recognizes the position of the device 100 and the situation around the device 100 based on the sensor data input from one or more of the sensor units 101, 104, and 107 (hereinafter, for simplicity of explanation, referred to as "the sensor unit 101 or the like").
- the position of the device 100 may be a global coordinate position acquired by GNSS or the like, or may be a coordinate position in a certain space estimated by SLAM (Simultaneous Localization and Mapping) or the like.
- The position/surrounding recognition unit 110 includes a recognition unit 111 that takes the sensor data input from the sensor unit 101 or the like as input and outputs position information of the device 100 and information about the surrounding situation (hereinafter referred to as situation information).
- The input to the recognition unit 111 is not limited to sensor data acquired by the sensor unit 101 or the like of the device itself; it may include sensor data acquired by the sensor unit 101 or the like of another device 100, as well as data input from the cloud 200, the field server 300, or the like via the network.
- The recognition unit 111 may be an inference device provided with a learning model trained by machine learning, or a rule-based recognizer that derives an output from an input according to a predetermined algorithm.
- The learning model in this embodiment can be, for example, a model trained using a neural network such as a DNN (Deep Neural Network), CNN (Convolutional Neural Network), RNN (Recurrent Neural Network), GAN (Generative Adversarial Network), or an autoencoder.
- The learning model may be a single-modal model that takes one type of data as input, or a multimodal model that takes different types of data as input together.
- The recognition unit 111 may consist of a single learning model, or may include two or more learning models and output a final recognition result derived from the inference results of each.
- The recognition executed by the recognition unit 111 may be short-term recognition, which takes as input sensor data from the sensor unit 101 or the like over a short period, or long-term recognition, which takes as input sensor data over a long period such as several hours, several days, or several years.
- In the above, the position/surrounding recognition unit 110 and the recognition unit 111 mounted on each device 100 are described as a single configuration, but the configuration is not limited to this.
- A plurality of position/surrounding recognition units 110 may cooperate to realize one or more recognition units 111, or one position/surrounding recognition unit 110 may include a plurality of recognition units 111.
- In that case, the one or more position/surrounding recognition units 110 and/or the one or more recognition units 111 may each be composed of position/surrounding recognition units 110 and/or recognition units 111 mounted on different devices 100.
- The recognition unit 111 is not limited to being placed in the position/surrounding recognition unit 110; it may be arranged, for example, in another unit of the device 100 such as the device management unit 120 or the sensor unit 101, or in the cloud 200, the field server 300, or the like connected to the device 100 via a network.
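The two alternatives described for the recognition unit 111 (a rule-based recognizer versus a trained inference model) can be sketched behind a single interface. This is a minimal, non-authoritative illustration; the class names, the `recognize` signature, and the acceleration threshold are all assumptions, not the patented implementation.

```python
from abc import ABC, abstractmethod

class RecognitionUnit(ABC):
    """Shared interface for the recognition unit 111 (names are assumptions)."""
    @abstractmethod
    def recognize(self, sensor_data: dict) -> dict:
        """Map sensor data to situation information."""

class RuleBasedRecognizer(RecognitionUnit):
    """Rule-based variant: a predetermined algorithm maps input to output."""
    def __init__(self, accel_threshold: float):
        self.accel_threshold = accel_threshold

    def recognize(self, sensor_data: dict) -> dict:
        anomalous = abs(sensor_data.get("acceleration", 0.0)) > self.accel_threshold
        return {"anomaly": anomalous}

class ModelBasedRecognizer(RecognitionUnit):
    """Model-based variant: wraps any trained inference callable."""
    def __init__(self, model):
        self.model = model

    def recognize(self, sensor_data: dict) -> dict:
        return {"anomaly": bool(self.model(sensor_data))}

recognizer = RuleBasedRecognizer(accel_threshold=5.0)
result = recognizer.recognize({"acceleration": 9.2})  # strong shaking flagged
```

Either variant can then be swapped in wherever the position/surrounding recognition unit expects a recognizer, which mirrors the disclosure's point that the recognition unit may live in different places in the system.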
- the device management unit 120 is a control unit that manages and controls the overall operation of the device 100.
- the device management unit 120 may be a control unit such as an ECU (Electronic Control Unit) that collectively controls the entire vehicle.
- When the device 100 is a fixed or semi-fixed device such as a sensor device, the device management unit 120 may be a control unit that controls the overall operation of the device 100.
- the monitor 131 may be a display unit that presents various information to the operator of the device 100, surrounding people, and the like.
- The user interface 132 is an interface for the operator to input settings for the device 100, switch the displayed information, and the like.
- Various input means such as a touch panel type and a switch type may be used for the user interface 132.
- The output unit 133 is composed of, for example, a lamp, an LED (Light Emitting Diode), a speaker, or the like, and may present various information to the operator by a method different from the monitor 131, or notify the surroundings of the scheduled operation of the device 100 (right turn, left turn, crane up/down, etc.).
- the operation system 135 may include, for example, a handle, an operation lever, a shift lever, various switches, and the like, and may be an operation unit for the operator to input operation information regarding the running and operation of the device 100.
- the device control unit 134 may be a control unit that controls the device 100 based on the operation information input from the operation system 135 and the control signal input from the device management unit 120.
- the cloud 200 is a service form that provides computer resources via a computer network such as the Internet, and is composed of, for example, one or more cloud servers arranged on the network.
- the cloud 200 includes, for example, a learning unit 201 for learning the recognition unit 111.
- When the recognition unit 111 is an inference device including a learning model, the learning unit 201 trains the learning model using supervised learning or unsupervised learning.
- the learning model after training is downloaded to the device 100 and mounted on the recognition unit 111 of the position / surrounding recognition unit 110, for example.
- When the recognition unit 111 is a rule-based recognizer, the learning unit 201 manually or automatically creates or updates the algorithm that derives an output (recognition result) from an input.
- A program describing the created or updated algorithm is downloaded to the device 100 and executed by, for example, the recognition unit 111 of the position/surrounding recognition unit 110.
- When the recognition unit 111 is an inference device provided with a learning model and the learning model is trained by supervised learning, the sensor data acquired by the sensor unit 101 or the like of each device 100 and the recognition results (also referred to as inference results) output from the recognition unit 111 may be transmitted to the learning unit 201 for training or re-training of the learning model. Further, the re-trained learning model may be downloaded to the device 100 and incorporated into the recognition unit 111 to update the recognition unit 111.
- The learning unit 201 is not limited to the cloud 200; it may be arranged for edge computing, such as fog computing or multi-access edge computing performed in the core network of a base station, or it may be realized by a processor such as a DSP (Digital Signal Processor), CPU (Central Processing Unit), or GPU (Graphics Processing Unit) constituting the signal processing units 103, 106, 109 of the sensor unit 101 or the like. That is, the learning unit 201 may be arranged anywhere in the information processing system 1.
- the site server 300 is a server for managing one or more devices 100 installed in one or more sites.
- the site server 300 includes a site management unit 301 and a construction planning unit 302.
- the site management unit 301 collects various information from the position / surrounding recognition unit 110 of the device 100.
- the site management unit 301 collects sensor data acquired by the sensor unit 101 or the like of the device 100, recognition results derived from the sensor data by the recognition unit 111, and the like.
- Various collected information is input to the construction planning unit 302.
- The collected sensor data may be raw data as acquired by the sensor unit 101 or the like, or processed data in which part or all of the data has been processed, for example mosaicked or cropped, in consideration of privacy and the like.
- the construction planning unit 302 creates a schedule of construction work being carried out at the site based on the information input from the site management unit 301, and inputs this to the site management unit 301.
- Information such as schedules created by the construction planning unit 302 of the site server 300 is stored and managed in the site server 300, and may be reused for other devices 100 at the same site, for devices 100 at other sites, and the like.
- The site management unit 301 creates an action plan for managing and controlling each device 100 based on the construction schedule input from the construction planning unit 302 and the various information collected from the devices 100, and transmits the created action plan to the device management unit 120 of each device 100. The device management unit 120 of each device 100 then manages and controls each unit of the device 100 according to the received action plan.
- FIG. 2 is a diagram for explaining a flow from recognition of an earthquake to warning to an operator in the information processing system 1 according to the present embodiment.
- The recognition unit 111 of the position/surrounding recognition unit 110 recognizes the presence or absence of an earthquake using the sensor data input from the sensor unit 101 or the like as input.
- When recognizing the presence or absence of an earthquake on a rule basis, the recognition unit 111 may recognize that an earthquake has occurred when, for example, sensor data such as the position, attitude (angle), velocity (angular velocity), or acceleration (angular acceleration) of the device 100 shows an extreme value or a sudden change, when the optical flow in time-series continuous image data shows an extreme value or a sudden change, or when the distance to each subject in time-series continuous depth images shows a sudden change.
- the recognition unit 111 may recognize that an earthquake has occurred when the sensor unit 101 or the like detects ground noise or odor (air component) peculiar to the occurrence of an earthquake.
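The rule-based criterion above (an "extreme value or sudden change" in sensor data) could be sketched as follows. The thresholds, sampling interval, and function names are illustrative assumptions only, not values from the disclosure.

```python
# Thresholds and sampling interval are illustrative assumptions.
ACCEL_LIMIT = 4.0   # "extreme value" of acceleration, m/s^2
JERK_LIMIT = 8.0    # "sudden change" between samples, m/s^3

def detect_quake(accel_series, dt=0.1):
    """Flag an earthquake-like event when acceleration is extreme
    or changes suddenly between consecutive samples."""
    for i, a in enumerate(accel_series):
        if abs(a) > ACCEL_LIMIT:
            return True
        if i > 0 and abs(a - accel_series[i - 1]) / dt > JERK_LIMIT:
            return True
    return False

calm = [0.0, 0.1, 0.05, 0.1]
shaking = [0.1, 0.2, 3.5, -3.0]  # sudden jump between samples
```

The same pattern would apply to the other rule-based cues (optical flow, depth changes); only the series being checked differs.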
- When recognizing the occurrence of an earthquake using a learning model based on a neural network, the recognition unit 111 inputs the sensor data obtained from the sensor unit 101 or the like, such as an image sensor, into its internal learning model, and may recognize that an earthquake has occurred when the output of the learning model indicates an earthquake.
- For the recognition of an earthquake by the recognition unit 111, a different learning model may be used for each device 100, or different sensor data may be input for each device 100.
- The recognition unit 111 is not limited to recognizing the presence or absence of an earthquake; it may also recognize information on the scale of the earthquake (seismic intensity, magnitude, etc.), the damage predicted from it (number of deaths, number of injuries, amount of damage, etc.), and the resulting situation.
- two approaches can be considered for training and re-learning of the recognition unit 111 that recognizes such an earthquake.
- the first is a method of training or re-learning a learning model using sensor data generated by earthquake simulation or the like.
- In this case, pairs of (i) the sensor data generated by the simulation together with information such as the scale, topography, and depth of the earthquake source set at the time of the simulation, and (ii) information such as the damage and situation, may be used as teacher data to train or re-train the learning model.
- the second is a method of training or re-learning a learning model by an anomaly detection method.
- a learning model is trained using sensor data detected in a state where no earthquake has occurred.
- The trained learning model then outputs the detection of an abnormality, that is, the occurrence of an earthquake, as the recognition result when it receives sensor data from the time of an earthquake, which is not expected in normal times.
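As a rough sketch of this second, anomaly-detection approach, the following stands in for a trained model (the disclosure mentions, for example, autoencoders) with a simple statistical profile fitted only on earthquake-free data; a reading far outside that profile is reported as an abnormality. Everything here is an assumption for illustration.

```python
import statistics

def fit_normal_profile(normal_samples):
    """'Training' stage: summarize earthquake-free sensor data.
    (A stand-in for training, e.g., an autoencoder on normal data.)"""
    return statistics.mean(normal_samples), statistics.stdev(normal_samples)

def is_anomaly(sample, profile, k=4.0):
    """Inference stage: readings far outside the normal-time
    distribution are reported as an abnormality (possible quake)."""
    mu, sigma = profile
    return abs(sample - mu) > k * sigma

profile = fit_normal_profile([0.00, 0.02, -0.01, 0.01, -0.02])  # calm-ground readings
quake_detected = is_anomaly(3.2, profile)    # far outside normal variation
calm_detected = is_anomaly(0.015, profile)   # within normal variation
```

The key property matches the text: only normal-time data is needed for training, and earthquake-time inputs are flagged precisely because they were never seen during training.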
- When the recognition unit 111 recognizes an earthquake, the recognition result is input to the device management unit 120.
- the device management unit 120 determines the necessity of a warning to the operator and the intensity of the warning (hereinafter, also referred to as a warning level) based on the input recognition result, and executes a warning to the operator based on the determination result.
- As the warning method to the operator, for example, a warning via the monitor 131 or a warning via the output unit 133 such as a lamp, LED, or speaker may be used.
- The warning level may be expressed, for example, by the loudness of the sound, the amount of light, the manner of display on the monitor, or the like.
- the device management unit 120 may execute a danger avoidance action such as an emergency stop of the device 100 or automatic movement to a safe place.
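One way the device management unit 120 might map a recognition result to a warning level and a danger-avoidance action is sketched below. The field names and the intensity thresholds are hypothetical, introduced only to illustrate the decision described above.

```python
def decide_warning(recognition):
    """Map a recognition result to a warning decision.
    Field names and intensity thresholds are hypothetical."""
    if not recognition.get("earthquake"):
        return {"warn": False}
    intensity = recognition.get("seismic_intensity", 0)
    return {
        "warn": True,
        "level": "high" if intensity >= 5 else "low",  # drives volume / light amount / display
        "emergency_stop": intensity >= 6,              # danger-avoidance action
    }

decision = decide_warning({"earthquake": True, "seismic_intensity": 6})
```

A real device management unit would presumably also weigh the predicted damage and situation mentioned in the text, not just a single intensity figure.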
- the recognition result by the recognition unit 111 may be transmitted to the site server 300.
- At that time, the sensor data (raw data, or processed data that has been mosaicked or otherwise processed) may be transmitted to the site server 300 together with the recognition result.
- Since the recognition unit 111 here recognizes an event that occurs over a wide area, such as an earthquake, the recognition results and sensor data acquired at a plurality of sites may be aggregated in the site server 300 to judge the situation comprehensively and give instructions to the multiple sites.
- the sensor data and the recognition result when the recognition unit 111 recognizes the earthquake may be sent to the cloud 200 for analysis and used for training or re-learning of the learning model.
- the sensor data when the earthquake is recognized may be the sensor data acquired by the sensor unit 101 or the like during a certain period before and after the occurrence of the earthquake.
- FIG. 3 is a flowchart showing an example of the operation flow of the information processing system 1 according to the present embodiment.
- the sensor data acquired by the sensor unit 101 or the like is input to the recognition unit 111 of the position / periphery recognition unit 110 (step S101).
- The input of sensor data from the sensor unit 101 or the like may be periodic, or may occur only when necessary, for example when the sensor data shows an abnormal value.
- the sensor data input from the sensor unit 101 or the like may be raw data or may be processed data that has been processed such as a mosaic.
- the recognition unit 111 executes the recognition process based on the input sensor data (step S102). As described above, the recognition unit 111 may output information such as the scale of the earthquake that has occurred and the damage and situation predicted from the earthquake as the recognition result, in addition to the presence or absence of the earthquake.
- If it is recognized in the recognition process of step S102 that no earthquake has occurred (NO in step S103), this operation proceeds to step S106.
- On the other hand, if it is recognized that an earthquake has occurred (YES in step S103), the position/surrounding recognition unit 110 inputs the recognition result output from the recognition unit 111 to the device management unit 120 and also transmits it to the site server 300 via a predetermined network (step S104).
- the device management unit 120 issues a warning to the operator via the monitor 131 and the output unit 133 based on the input recognition result (step S105), and proceeds to step S106.
- When the recognition result includes information such as the scale of the earthquake that occurred and the damage and situation predicted from it, the device management unit 120 may issue a warning corresponding to the scale of the earthquake and the predicted damage and situation.
- the device management unit 120 may execute a danger avoidance action such as an emergency stop of the device 100 or automatic movement to a safe place.
- In step S106, the control unit that controls the device 100 determines whether or not to end this operation. If it is determined to end the operation (YES in step S106), this operation ends. On the other hand, if it is determined not to end (NO in step S106), this operation returns to step S101, and the subsequent processing is executed.
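The S101-S106 flow above can be sketched as a simple loop. The decomposition into callbacks is an illustrative assumption, not the patented implementation; it only shows the ordering of acquire, recognize, branch, report, warn, and end-check.

```python
def run_monitoring_loop(get_sensor_data, recognize, send_to_server, warn, should_end):
    """Sketch of the flowchart: S101 acquire, S102 recognize,
    S103 branch, S104 report, S105 warn, S106 end check."""
    while True:
        data = get_sensor_data()          # S101
        result = recognize(data)          # S102
        if result.get("earthquake"):      # S103: YES branch
            send_to_server(result)        # S104
            warn(result)                  # S105
        if should_end():                  # S106
            break
```

Note that, as in the flowchart, reporting to the site server (S104) happens before the operator warning (S105), and the loop continues regardless of whether a quake was recognized.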
- FIG. 4 is a diagram for explaining a flow from recognition of an accident to warning to an operator in the information processing system 1 according to the present embodiment.
- The recognition unit 111 of the position/surrounding recognition unit 110 recognizes the presence or absence of an accident using the sensor data input from the sensor unit 101 or the like as input.
- the accident in the present embodiment may include an accident in which the device 100 itself is involved, an accident that occurs around the device 100 and is observed by the device 100 as a third party, and the like.
- When recognizing the presence or absence of an accident on a rule basis, the recognition unit 111 may recognize that an accident has occurred when, for example, sensor data such as the position, posture (angle), speed (angular velocity), or acceleration (angular acceleration) of the device 100 shows an extreme value or a sudden change, when the optical flow in time-series continuous image data shows an extreme value or a sudden change, or when the distance to each subject in time-series continuous depth images shows a sudden change.
- the position, posture (angle), speed (angular acceleration), acceleration (acceleration), etc. acquired by the sensor unit 101 or the like attached to the arm, boom, swivel or other movable part of the device 100, etc.
- the device 100 If the sensor data of is an extreme value or a sudden change, if the device 100 is an aircraft equipped with a brake, the strength of the brake operated by the operator or if the sudden brake is applied, the operator will press the emergency stop button. When pressed, when the crawler or tire of the device 100 is idling, or when the amount of vibration generated in the device 100 and its duration show an extreme value or a sudden change, the operator who operates the device 100 When a specific expression is shown, the body temperature, heart rate, brain wave, etc. of the operator operating the device 100 show an extreme value or a sudden change, the load weight of the device 100 shows an extreme value or a sudden change. If the value or state directly or indirectly acquired from the sensor data input from the sensor unit 101 shows a value or state or change that is not expected in normal times, it may be recognized that an accident has occurred.
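- the rule-based check described above can be sketched in code. The following is a purely illustrative example, not part of the disclosed embodiment; all field names and thresholds are assumptions:

```python
from dataclasses import dataclass

# Purely illustrative sketch of the rule-based accident check; the fields
# and thresholds below are assumptions, not values from the disclosure.

@dataclass
class SensorSample:
    acceleration: float      # magnitude, m/s^2
    angular_velocity: float  # rad/s
    vibration: float         # arbitrary vibration amount
    emergency_stop: bool     # operator pressed the emergency stop button

ACCEL_LIMIT = 20.0       # "extreme value" thresholds (assumed)
GYRO_LIMIT = 5.0
VIBRATION_JUMP = 8.0     # "sudden change" between consecutive samples

def recognize_accident(prev: SensorSample, curr: SensorSample) -> bool:
    """Return True when any monitored value is extreme or changes suddenly."""
    if curr.emergency_stop:
        return True
    if curr.acceleration > ACCEL_LIMIT or curr.angular_velocity > GYRO_LIMIT:
        return True
    # "Sudden change": large jump between consecutive samples.
    if abs(curr.vibration - prev.vibration) > VIBRATION_JUMP:
        return True
    return False
```

in practice each device 100 would use its own sensor set and thresholds, as the description notes.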
- when recognizing the occurrence of an accident using a learning model based on a neural network, the recognition unit 111 may input sensor data obtained from the sensor unit 101 or the like, such as an image sensor, into an internal learning model, and recognize that an accident has occurred when the output from the learning model indicates an accident.
- in addition to the cases where the above events occur, the recognition unit 111 can recognize as an accident a case where the device 100 falls from a high place such as a cliff or a building, a case where the device 100 crashes if it is a flying machine, a case where the device 100 collides with a person, a building, a natural object (a tree, a rock, etc.), or another device 100, a case where the ceiling, ground, wall, cliff, or the like around the device 100 collapses, or a case where the load carried by the device 100 falls. However, the recognition unit 111 may recognize various other events as accidents based on various sensor data.
- for the recognition of an accident by the recognition unit 111, a different learning model may be used for each device 100, or different sensor data may be input for each device 100. Further, the recognition unit 111 is not limited to recognizing the presence or absence of an accident, and may also recognize information such as the scale of the accident (its range, etc.), the damage predicted from it (the number of dead, the number of injured, the amount of damage, etc.), and the situation.
- for training and re-learning of the recognition unit 111 that recognizes such accidents, a method using simulation, a method using an anomaly detection technique, or the like may be used, as with the recognition unit 111 that recognizes an earthquake (first embodiment).
- when the recognition unit 111 recognizes an accident, the recognition result may be input to the device management unit 120 and used for a warning to the operator or a danger avoidance action, as in the first embodiment.
- the recognition result by the recognition unit 111 may be transmitted to the site server 300 together with the sensor data (raw data, or processed data that has undergone processing such as mosaicking) acquired by the sensor unit 101 or the like at that time.
- the accident information recognized by the recognition unit 111 of a certain device 100 may be notified to another device 100 operating at the same site and shared.
- the recognition of an accident that has occurred in a certain device 100 may be executed not by the recognition unit 111 of the device 100 but by the recognition unit arranged in another device 100, the cloud 200, or the site server 300.
- the sensor data acquired by the sensor unit 101 of a certain device 100 may be shared by another device 100, the cloud 200, or the site server 300, and the recognition unit provided therein may recognize the presence or absence of an accident.
- the sensor data and the recognition result when the recognition unit 111 recognizes the accident may be sent to the cloud 200 for analysis and used for training or re-learning of the learning model.
- the sensor data when the accident is recognized may be the sensor data acquired by the sensor unit 101 or the like during a certain period before and after the accident occurs.
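- capturing "a certain period before and after" an event implies keeping a rolling buffer of recent samples. A minimal sketch, assuming fixed sample counts rather than wall-clock time; the class and its parameters are illustrative, not from the disclosure:

```python
from collections import deque

# Hedged sketch: a rolling buffer keeps recent samples so that, when an
# accident is recognized, data from shortly before the event can be sent
# together with data recorded after it. Window sizes are assumptions.

class AccidentRecorder:
    def __init__(self, pre_samples=5, post_samples=3):
        self.pre = deque(maxlen=pre_samples)  # samples before any trigger
        self.post_needed = post_samples
        self.capturing = False
        self.captured = []
        self.target_len = 0

    def trigger(self):
        """Call when an accident is recognized; start from the buffered past."""
        self.captured = list(self.pre)
        self.target_len = len(self.captured) + self.post_needed
        self.capturing = True

    def feed(self, sample):
        """Feed one sample; returns the full clip once capture completes."""
        if self.capturing:
            self.captured.append(sample)
            if len(self.captured) >= self.target_len:
                self.capturing = False
                clip, self.captured = self.captured, []
                return clip
            return None
        self.pre.append(sample)
        return None
```

the completed clip would then be what is uploaded to the site server 300 or the cloud 200.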
- FIG. 5 is a flowchart showing an example of the operation flow of the information processing system 1 according to the present embodiment. As shown in FIG. 5, in the present embodiment, steps S102 and S103 are replaced with steps S202 and S203 among the same operations as those described with reference to FIG. 3 in the first embodiment.
- step S202 the recognition unit 111 recognizes the presence or absence of an accident by executing the recognition process based on the input sensor data. At that time, information such as the scale of the accident that has occurred and the damage or situation predicted from it may be output as the recognition result.
- in step S203, as in step S103 in FIG. 3, when it is recognized in the recognition process of step S202 that no accident has occurred (NO in step S203), this operation proceeds to step S106, and when the occurrence of an accident is recognized (YES in step S203), the recognition result is input to the device management unit 120 and transmitted to the site server 300 via a predetermined network (step S104).
- in step S105, a warning may be issued according to the scale of the accident and the damage and situation predicted from it.
- the device management unit 120 may execute a danger avoidance action such as an emergency stop of the device 100 or automatic movement to a safe place.
- FIG. 6 is a diagram for explaining a flow from recognition of a near-miss (hiyari-hat) state to a warning to the operator in the information processing system 1 according to the present embodiment.
- the recognition unit 111 of the position / surrounding recognition unit 110 recognizes a near-miss state of the own machine or its surroundings by using the sensor data input from the sensor unit 101 or the like as an input.
- the recognition of the near-miss state may be performed, for example, by extracting the "accidents with no injuries" of Heinrich's law (the accident triangle). Therefore, when recognizing a near-miss state on a rule basis, the recognition unit 111 may recognize a near-miss state when a value or state directly or indirectly acquired from the sensor data input from the sensor unit 101 or the like shows a value, state, or change that is not expected in normal operation. Examples include the following cases: sensor data such as the position, posture (angle), speed (angular velocity), and acceleration (angular acceleration) of the device 100 shows an extreme value or a sudden change; sensor data such as the position, posture (angle), speed (angular velocity), and acceleration (angular acceleration) acquired by the sensor unit 101 or the like attached to the arm, boom, swivel, or other movable part of the device 100 shows an extreme value or a sudden change; if the device 100 is a machine equipped with brakes, the operator brakes with unusual strength or applies a sudden brake; the operator presses the emergency stop button; the crawler or tire of the device 100 is idling; the amount of vibration generated in the device 100 and its duration show an extreme value or a sudden change; the operator operating the device 100 shows a specific facial expression; the body temperature, heart rate, brain waves, etc. of the operator operating the device 100 show an extreme value or a sudden change; or the load weight of the device 100 shows an extreme value or a sudden change.
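- Heinrich's accident triangle orders events by severity, with near-misses ("accidents with no injuries") at the base. A hedged sketch of such a classification; the severity score and its thresholds are illustrative assumptions, not part of the disclosure:

```python
# Sketch: map a severity score to a Heinrich-triangle category, so that a
# near-miss is an event that is dangerous but causes no injury. The score
# range and cut-off values are assumed for illustration only.

def classify_event(severity: float) -> str:
    """Map a severity score in [0, 1] to a Heinrich-triangle category."""
    if severity >= 0.7:
        return "major accident"
    if severity >= 0.3:
        return "minor accident"
    if severity > 0.0:
        return "near-miss"   # hiyari-hat: no injury, but dangerous
    return "normal"
```

such a category could be attached to the recognition result shared with the site server 300.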
- when recognizing a near-miss state using a learning model based on a neural network, the recognition unit 111 may input sensor data obtained from the sensor unit 101 or the like, such as an image sensor, into an internal learning model, and recognize a near-miss state when the output from the learning model indicates one.
- in addition to the cases where the above events occur, the recognition unit 111 can recognize as a near-miss state a case where the device 100 is likely to fall from a high place such as a cliff or a building, a case where the device 100 is likely to crash if it is a flying machine, a case where the device 100 is likely to collide with a person, a building, a natural object (a tree, a rock, etc.), or another device 100, a case where the ceiling, ground, wall, cliff, or the like around the device 100 is likely to collapse, or a case where the load carried by the device 100 is likely to fall. However, the recognition unit 111 may recognize various other events as near-miss states based on various sensor data.
- for training and re-learning of the recognition unit 111 that recognizes such a near-miss state, a method using simulation, a method using an anomaly detection technique, or the like may be used, as with the recognition unit 111 that recognizes an earthquake or an accident (first and second embodiments).
- when the recognition unit 111 recognizes a near-miss state, the recognition result may be input to the device management unit 120 and used for a warning to the operator or a danger avoidance action, as in the above-described embodiments.
- the recognition result by the recognition unit 111 may be transmitted to the site server 300 together with the sensor data (raw data, or processed data that has undergone processing such as mosaicking) acquired by the sensor unit 101 or the like at that time.
- whether or not a near-miss state has been recognized by the recognition unit 111 of a certain device 100 may be notified to another device 100 operating at the same site and shared.
- the recognition of whether or not a certain device 100 is in a near-miss state may be executed not by the recognition unit 111 in the device 100 but by a recognition unit arranged in another device 100, the cloud 200, or the site server 300.
- the sensor data acquired by the sensor unit 101 of a certain device 100 may be shared with another device 100, the cloud 200, or the site server 300, and the recognition unit provided therein may recognize the near-miss state.
- the sensor data and the recognition result obtained when the recognition unit 111 recognizes a near-miss state may be sent to the cloud 200 for analysis and used for training or re-learning of the learning model.
- the sensor data obtained when the near-miss state is recognized may be the sensor data acquired by the sensor unit 101 or the like during a certain period before and after the occurrence of the near-miss state.
- FIG. 7 is a flowchart showing an operation flow example of the information processing system 1 according to the present embodiment. As shown in FIG. 7, in the present embodiment, steps S102 and S103 are replaced with steps S302 and S303 among the same operations as those described with reference to FIG. 3 in the first embodiment.
- in step S302, the recognition unit 111 recognizes whether or not a near-miss state exists by executing the recognition process based on the input sensor data. At that time, information such as what kind of danger is imminent may be output as the recognition result.
- in step S303, as in step S103 in FIG. 3, when it is recognized in the recognition process of step S302 that there is no near-miss state (NO in step S303), this operation proceeds to step S106, and when a near-miss state is recognized (YES in step S303), the recognition result is input to the device management unit 120 and transmitted to the site server 300 via a predetermined network (step S104).
- if the recognition result includes information such as what kind of danger is imminent, in step S105 a warning may be issued according to that information.
- the device management unit 120 may execute a danger avoidance action such as an emergency stop of the device 100 or automatic movement to a safe place.
- the dangerous state in this description refers to a high possibility that an accident will occur due to the behavior or situation of the device 100 itself or surrounding people or things; in other words, a high possibility (also referred to as danger) that the device 100 itself or a surrounding person or thing to be protected is in danger. Further, in the following description, the same configurations, operations, and effects as those of the above-described embodiments will be cited, and duplicate description will be omitted.
- FIG. 8 is a diagram for explaining a flow from recognition of a dangerous state to warning to an operator in the information processing system 1 according to the present embodiment.
- the recognition unit 111 of the position / surrounding recognition unit 110 recognizes a dangerous state of the own machine or its surroundings by using the sensor data input from the sensor unit 101 or the like as an input.
- operations of the device 100 that may put the device 100 itself or surrounding people and things in a dangerous state include, for example, traveling forward / backward and dumping the loading platform if the device 100 is a dump truck, and traveling forward / backward, turning the upper swing body, and operating the boom / arm / bucket if the device 100 is a hydraulic excavator.
- an accident may occur, such as a collision between the own machine and an object to be protected due to the operation of the device 100, or a collision between an object transported or moved by the operation of the device 100 and an object to be protected.
- the health of an object to be protected may be threatened or harmed, for example when the operator suffers heat stroke or frostbite, or a disaster such as an earthquake may occur.
- Objects to be protected may include humans, other animals, objects such as buildings and equipment, and the like.
- the recognition unit 111 recognizes a dangerous state by using, as inputs, sensor data such as image data, depth image data, IMU data, and GNSS data input from the sensor unit 101 and the like.
- the recognition unit 111 may be trained using a wider variety of sensor data or longer-term sensor data than, for example, the accident recognition in the second embodiment or the near-miss state recognition in the third embodiment.
- the data input to the recognition unit 111 can include all the information acquired by the device 100 in addition to the sensor data described above. For example, information on the speed and torque of crawlers and tires, information on the position, posture (angle), speed, acceleration, etc. of the device 100, information on the position, angle, speed, etc. of movable parts such as arms, booms, and swivels, and information on the vibration of the device 100, the load weight, and the operator's facial expression (camera image), body temperature, heartbeat, brain waves, etc. may be included.
- the recognition unit 111 may output the degree of danger (hereinafter, also referred to as a danger level) in the dangerous state as a recognition result.
- the present invention is not limited to this, and the device management unit 120 may determine the danger level based on the sensor data input from the sensor unit 101 or the like and the recognition result. For example, when the recognition result indicates a dangerous state and it is specified from image data or other sensor data that the device is approaching a cliff or the like, the device management unit 120 may determine that the danger level is high.
- the intensity of the warning given to the operator by the device management unit 120 (the warning level) may be changed according to, for example, the danger level. For example, when the danger level is very high, the device management unit 120 may issue a strong warning to the operator, such as a warning sound, a blinking lamp, or a message display of an operation stop instruction, or may output an instruction to stop the operation of the device 100 to the device control unit 134.
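- escalating the warning with the danger level can be sketched as a simple mapping. The levels and action strings below are illustrative assumptions, not part of the disclosure:

```python
# Illustrative mapping from danger level (0 = safe ... 3 = very high) to
# the warnings/actions described above; names and levels are assumptions.

def warning_actions(danger_level: int) -> list:
    """Return the warnings/actions for a given danger level."""
    actions = []
    if danger_level >= 1:
        actions.append("show caution on monitor")
    if danger_level >= 2:
        actions.append("sound warning / blink lamp")
    if danger_level >= 3:
        # Very high danger: also instruct the device control unit to stop.
        actions.append("display stop instruction")
        actions.append("output stop command to device control unit")
    return actions
```

the device management unit 120 would execute each returned action via the monitor 131, output unit 133, or device control unit 134.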
- the recognition unit 111 may infer, as part of the recognition result, the time until the dangerous state actually becomes a near-miss state and the time until an accident occurs (hereinafter, the predicted time). Then, the recognition unit 111 or the device management unit 120 may determine that the shorter the inferred predicted time, the higher the danger level.
- the recognition unit 111 may infer the scale of danger predicted from the dangerous state as a part of the recognition result. Then, the recognition unit 111 or the device management unit 120 may determine that the greater the scale of the predicted danger, the higher the danger level.
- the recognition unit 111 may infer, as a part of the recognition result, the class of the target that is determined to be in danger. Then, the recognition unit 111 or the device management unit 120 may change the danger level according to the inferred target class. For example, the recognition unit 111 or the device management unit 120 may determine that the danger level is high when the target class is a human or manned heavy equipment, and that the danger level is low when the target class is a building or unmanned heavy equipment.
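- the three signals above (predicted time, predicted scale of danger, target class) can be combined into a single danger level. A sketch under assumed weights; the class names, formula, and score range are illustrative, not from the disclosure:

```python
# Hedged sketch: a shorter predicted time, a larger predicted scale, and a
# more vulnerable target class all raise the danger level. The class
# weights and the combining formula are assumptions for illustration.

CLASS_WEIGHT = {
    "human": 1.0,
    "manned_heavy_equipment": 0.9,
    "building": 0.4,
    "unmanned_heavy_equipment": 0.3,
}

def danger_level(predicted_time_s: float, scale: float, target: str) -> float:
    """Score in [0, 1]; higher means more dangerous."""
    time_factor = 1.0 / (1.0 + predicted_time_s)  # shorter time -> higher
    scale_factor = min(scale, 1.0)                # clamp predicted scale
    return time_factor * scale_factor * CLASS_WEIGHT.get(target, 0.5)
```

the resulting score could then drive the warning level chosen by the device management unit 120.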
- the recognition unit 111 may infer an operation of the device 100 and its direction that can be a factor in the dangerous state actually transitioning to a near-miss state or an accident. Then, when the operation based on the operation input from the device control unit 134 and its direction, or the next operation and its direction in the action plan received from the site server 300, matches or approximates an operation or direction that may be such a factor, the device management unit 120 may issue a warning or caution to the operator via the monitor 131.
- for training and re-learning of the recognition unit 111 that recognizes such a dangerous state, a method using simulation, a method using an anomaly detection technique, or the like may be used, as in the above-described embodiments.
- the recognition result by the recognition unit 111 may be transmitted to the site server 300 together with the sensor data (raw data, or processed data that has undergone processing such as mosaicking) acquired by the sensor unit 101 or the like at that time.
- the recognition of whether or not a certain device 100 is in a dangerous state may be executed not by the recognition unit 111 in the device 100 but by a recognition unit arranged in another device 100, the cloud 200, or the site server 300.
- the sensor data acquired by the sensor unit 101 of a certain device 100 may be shared by another device 100, the cloud 200, or the site server 300, and the recognition unit provided therein may recognize the dangerous state.
- the sensor data and the recognition result when the recognition unit 111 recognizes the dangerous state may be sent to the cloud 200 for analysis and used for training or re-learning of the learning model.
- the sensor data when the dangerous state is recognized may be the sensor data acquired by the sensor unit 101 or the like for a certain period before and after the time when the dangerous state is recognized.
- FIG. 9 is a flowchart showing an example of the operation flow of the information processing system 1 according to the present embodiment. As shown in FIG. 9, in the present embodiment, among the operations similar to those described with reference to FIG. 3 in the first embodiment, steps S102, S103, and S105 are replaced with steps S402, S403, and S405.
- in step S402, the recognition unit 111 recognizes whether or not a dangerous state exists by executing the recognition process based on the input sensor data. At that time, information such as the danger level, the predicted time, the scale of the danger, the operation that causes the danger, and its direction may be output as the recognition result.
- in step S403, as in step S103 in FIG. 3, when it is recognized in the recognition process of step S402 that there is no dangerous state (NO in step S403), this operation proceeds to step S106, and when a dangerous state is recognized (YES in step S403), the recognition result is input to the device management unit 120 and transmitted to the site server 300 via a predetermined network (step S104).
- step S405 the device management unit 120 issues a warning to the operator according to the danger level estimated by the recognition unit 111 or the device management unit 120.
- the device management unit 120 may issue a warning or caution to the operator regarding the next operation when that operation or its direction matches or approximates an operation or direction that may be such a factor.
- in the present embodiment, a case will be described in which the information processing system 1 is used to recognize and predict the movements of people, objects, areas, and the like around the device 100 to improve the current and future safety of the site.
- the same configurations, operations and effects as those of the above-described embodiment will be referred to, and duplicate description will be omitted.
- FIG. 10 is a diagram for explaining a flow from motion recognition to notification to an operator (may include a warning) in the information processing system 1 according to the present embodiment.
- the recognition unit 111 of the position / surrounding recognition unit 110 recognizes and / or predicts, by using the sensor data input from the sensor unit 101 or the like as an input, the movement of an object belonging to a specific area or an object belonging to a predetermined range centered on the own machine. Further, each device 100 is provided with an object database 512.
- the recognition unit 111 inputs image data, depth image data, IMU data, GNSS data, etc. input from the sensor unit 101 and the like, recognizes the existence and position of people and objects existing around the own machine, and executes an area recognition process that recognizes the areas in which they exist by semantic segmentation or the like.
- based on the result of the area recognition process by the recognition unit 111, the device management unit 120 presents the existence of people, objects, and the like (hereinafter also referred to as objects) and the result of the semantic segmentation (the area or position where an object exists) to the operator via the monitor 131 as auxiliary information for the operation of the device 100 (area presentation).
- the operator inputs, using, for example, the UI 132, the selection of a target area, that is, an area to be targeted for motion recognition by the recognition unit 111 (an area where an object exists, a specific area based on the own machine, etc.).
- examples of objects include floors, walls, and ceilings of buildings, tunnels, and other structures (which may be under demolition), indoor or outdoor ground, cliffs, slopes, etc., as well as people and things within a certain range from the own machine (including other devices 100, etc.). Further, as examples of areas, in addition to the area where an object exists, an area within a certain range from the own machine, a specific area such as a passage or an entrance / exit, etc. can be considered.
- the device management unit 120 registers, for the selected area or the object corresponding to the area, the "appearance and position" of the object or area (hereinafter also referred to as object information) in the object database 512.
- the "appearance" in the object information may be the recognition result of the recognition unit 111, or may be the "appearance” of the shape, pattern (texture), color, material, etc. of the object or region. ..
- the "position” may be either a position on an image, an absolute position on the site, or a combination thereof. Specifically, for example, in the case of a semi-fixed observation aircraft in which the device 100 does not move, the "position” may be a position on an image. On the other hand, in the case of a moving aircraft, the "position” may be the absolute position of the site. In the case of the device 100 that can be operated while moving or stopped, both the position on the image and the absolute position on the site may be used.
- the recognition unit 111 executes a motion recognition process, using the sensor data input from the sensor unit 101 as an input, for the selected area or the object corresponding to the area (that is, the object or area registered in the object database 512).
- for example, the recognition unit 111 executes motion recognition of the selected object or area by using, as inputs, sensor data such as image data, depth image data, IMU data, and GNSS data input from the sensor unit 101 or the like. As a result, when it is detected that the selected object or area has moved, the recognition unit 111 notifies the device management unit 120 of this. The device management unit 120 then notifies the operator via the monitor 131 and the output unit 133 that the selected object or area has moved.
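- a minimal sketch of motion recognition for a selected region: compare the region's pixels across consecutive frames and report movement when the mean absolute difference exceeds a threshold. This pure-Python example is illustrative only; a real implementation would use richer sensor data and models:

```python
# Illustrative frame-differencing check for a selected region; the region
# format, frame representation, and threshold are assumptions.

def region_moved(prev_frame, curr_frame, region, threshold=10.0):
    """region = (x0, y0, x1, y1); frames are 2-D lists of pixel values."""
    x0, y0, x1, y1 = region
    total, count = 0.0, 0
    for y in range(y0, y1):
        for x in range(x0, x1):
            total += abs(curr_frame[y][x] - prev_frame[y][x])
            count += 1
    # Mean absolute difference over the region exceeding the threshold is
    # treated as "the selected object or area has moved".
    return (total / count) > threshold
```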
- the object information registered in the object database 512 of a certain device 100 may be collected in the site server 300 and shared with another device 100 at the same site, a device 100 at another site, or the like. Thereby, for example, it becomes possible for another device 100 or two or more devices 100 to monitor, through motion recognition, the object or area tracked and designated (selected) by one device 100. At that time, the collection of object information by the site server 300 and the sharing of the object information between the devices 100 (downloading to each device 100) may be performed automatically by the device 100 or the site server 300, or manually by the operator. Further, the movement of a target recognized by one device 100 may be shared with another device 100 via the site server 300.
- the object information registered in the object database 512 may be uploaded to the cloud 200 for analysis and used for training or re-learning of the learning model, whereby the recognition performance of the recognition unit 111 can be improved.
- for training and re-learning of the recognition unit 111, a method using simulation, a method using an anomaly detection technique, or the like may be used, as in the above-described embodiments.
- in the above, the case where the recognition unit 111 recognizes that the selected object has actually moved has been described, but the present invention is not limited to this, and the recognition unit 111 may execute a process of predicting that the selected object will move. For example, it can be inferred that the target may move by detecting changes in the amount of spring water around the target, dust clouds around the target, ambient noise, atmospheric components, etc. with the sensor unit 101 or the like and inputting them to the recognition unit 111.
- the recognition result by the recognition unit 111 may be transmitted to the site server 300 together with the sensor data (raw data, or processed data that has undergone processing such as mosaicking) acquired by the sensor unit 101 or the like at that time.
- FIG. 11 is a flowchart showing an example of the operation flow of the information processing system 1 according to the present embodiment.
- first, the object or area to be monitored is selected.
- the sensor data acquired by the sensor unit 101 or the like is input to the recognition unit 111 of the position / periphery recognition unit 110 (step S501).
- the recognition unit 111 executes the area recognition process based on the input sensor data (step S502).
- in the area recognition process, the area in which an object exists is labeled by, for example, semantic segmentation, and the area of the object is specified.
- the recognized object or area is notified to the device management unit 120.
- the device management unit 120 presents the recognized object or area to the operator via the monitor 131 (step S503).
- the operator inputs the selection of the object or area using, for example, UI132 (YES in step S504)
- the device management unit 120 registers the object information about the selected object or area in the object database 512 (step S505). If no selection by the operator is input (NO in step S504), this operation returns to step S501.
- next, motion recognition of the object or area selected as the monitoring target is executed.
- the sensor data acquired by the sensor unit 101 or the like is input to the recognition unit 111 of the position / periphery recognition unit 110 (step S506).
- the recognition unit 111 executes the motion recognition process based on the input sensor data (step S507).
- in the motion recognition process, for example, the movement of an object or an area may be recognized from image data, depth image data, or the like.
- the recognition unit 111 determines whether or not there is movement in the selected object or area based on the result of the motion recognition process (step S508). If there is no movement (NO in step S508), this operation proceeds to step S511. On the other hand, when there is a movement (YES in step S508), the recognition unit 111 notifies the device management unit 120 that the selected object or area has moved (step S509). The movement of the selected object or area may be notified from the recognition unit 111 or the device management unit 120 to the site server 300 and shared with the other device 100.
- the device management unit 120 notifies the operator that the selected object or area has moved, for example, via the monitor 131 or the output unit 133 (step S510).
- the device management unit 120 determines whether or not to change the object or area to be monitored based on, for example, an operation input by the operator from UI132 or the like (step S511). If the object or area to be monitored is not changed (NO in step S511), this operation returns to step S506.
- when the object or area to be monitored is changed (YES in step S511), the control unit controlling the device 100 determines whether or not to end this operation, and when it is determined to end (YES in step S512), this operation ends. On the other hand, if it is determined not to end (NO in step S512), this operation returns to step S501, and the subsequent operations are executed.
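- the monitoring loop of steps S506 to S510 can be condensed into a small sketch: for each new frame, run motion recognition on the selected target and collect the notifications that would be sent to the operator. The function names are illustrative assumptions:

```python
# Condensed sketch of the per-frame monitoring loop (steps S506-S510);
# `recognize_motion` stands in for the recognition unit 111 and is an
# assumption, as is the list-of-indices return value.

def monitor(frames, recognize_motion):
    """recognize_motion(prev, curr) -> bool; returns frame indices with motion."""
    notifications = []
    prev = frames[0]
    for i, curr in enumerate(frames[1:], start=1):
        if recognize_motion(prev, curr):   # steps S507 / S508
            notifications.append(i)        # steps S509 / S510: notify operator
        prev = curr
    return notifications
```

in the actual system the loop would also handle target changes (step S511) and termination (step S512).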
- as described above, according to the present embodiment, the object or area to be monitored is selected based on one or more pieces of sensor data input from the sensor unit 101 or the like, and the movement of the selected monitoring target can be monitored, so that it is possible to improve the current and future safety of the site. Since the other configurations, operations, and effects may be the same as those in the above-described embodiments, detailed description thereof is omitted here.
- FIG. 12 is a diagram for explaining a flow from operator fatigue recognition to warning to the operator in the information processing system 1 according to the present embodiment.
- The recognition unit 111 of the position / surrounding recognition unit 110 receives the sensor data input from the sensor unit 101 or the like as an input, and recognizes the degree of fatigue of the operator.
- the recognition unit 111 inputs image data, depth image data, IMU data, GNSS data, etc. input from the sensor unit 101 and the like, and executes a fatigue recognition process for recognizing the degree of fatigue of the operator.
- The recognition unit 111 may recognize the operator's fatigue by inputting, for example, image data or depth image data input from the sensor unit 101 or the like attached to the operator or to the device 100 (such as the axle, driver's seat, arm unit, frame, or crawler), IMU data, GNSS data, sensor data such as the operator's facial expression, body temperature, heart rate, and brain waves, as well as the elapsed time from the start of operation of the device 100, the operator's continuous working hours, and the like.
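The heterogeneous inputs listed above can be gathered into a single feature vector for a fatigue recognizer. The following is a minimal sketch under stated assumptions: the field names, normalization scales, and the toy linear scorer standing in for the learned model are all hypothetical.

```python
# Hypothetical sketch: assembling sensor data plus working-time inputs into
# one feature vector for fatigue recognition, as described above.

def build_fatigue_features(imu, heart_rate, body_temp, elapsed_min, continuous_work_min):
    return [
        *imu,                          # e.g. 3-axis acceleration from the IMU
        heart_rate / 200.0,            # rough normalization (assumed scale)
        body_temp / 40.0,
        elapsed_min / 480.0,           # elapsed time since the device 100 started operating
        continuous_work_min / 480.0,   # operator's continuous working hours
    ]

def is_fatigued(features, weights, threshold=0.5):
    """Toy linear scorer standing in for the trained learning model."""
    score = sum(f * w for f, w in zip(features, weights))
    return score >= threshold

feats = build_fatigue_features([0.1, 0.0, 9.8], heart_rate=90, body_temp=36.5,
                               elapsed_min=300, continuous_work_min=240)
```

A neural network trained as described in the surrounding text would replace `is_fatigued`; the point of the sketch is only the mixing of instantaneous sensor readings with accumulated working time.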
- the learning model for recognizing the operator's fatigue may be trained or relearned based on, for example, the behavior of the device 100 when the operator is tired, the operation content, and the like.
- However, the learning model is not limited to this. The training or re-learning may also be performed by anomaly detection learning based on the behavior and maneuvering content of the device 100 in the normal state when the operator is not tired, or by learning based on the difference between the behavior and maneuvering content of the device 100 at the start of operation and those after a certain time has passed from the start of operation.
- the learning model used by the recognition unit 111, the threshold value used for judgment, and the like may be different for each operator.
- the learning model, the judgment threshold value, etc. corresponding to the individual operator may be managed by the device 100, the site server 300, the cloud 200, or the like at each site.
- The recognition unit 111 may use a learning model and a judgment threshold value prepared in advance as a template. In that case, the template may be managed by any of the device 100, the site server 300, and the cloud 200.
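The per-operator model and threshold management with a template fallback described above can be sketched as a simple lookup. This is an illustrative sketch: the storage layout (a plain dict) and all identifiers are assumptions; the real system may keep these records on the device 100, the site server 300, or the cloud 200.

```python
# Hypothetical sketch of per-operator fatigue-model/threshold lookup with a
# pre-prepared template as the fallback, as described above.

TEMPLATE = {"model_id": "fatigue_template_v1", "threshold": 0.5}  # assumed default

per_operator = {
    # Operators with a personalized model and judgment threshold.
    "operator_A": {"model_id": "fatigue_A_v3", "threshold": 0.62},
}

def lookup_fatigue_config(operator_id):
    # Fall back to the shared template when no personalized entry exists.
    return per_operator.get(operator_id, TEMPLATE)
```

With this layout, newly added operators are recognized with the template until enough of their own behavior and maneuvering data accumulates for re-learning.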
- When the operator's fatigue is recognized, the recognition unit 111 notifies the device management unit 120 of this.
- the device management unit 120 notifies the operator that he / she is tired via the monitor 131 and the output unit 133.
- the fact that the operator of the device 100 is tired may be shared with the site server 300 or another device 100 at the same site. This makes it possible to improve the efficiency of on-site operations.
- The recognition result by the recognition unit 111 may be transmitted to the site server 300 together with the sensor data (raw data, or processed data such as mosaicked data) acquired by the sensor unit 101 or the like at that time.
- FIG. 13 is a flowchart showing an example of the operation flow of the information processing system 1 according to the present embodiment. As shown in FIG. 13, in the present embodiment, steps S102 and S103 are replaced with steps S602 and S603 among the same operations as those described with reference to FIG. 3 in the first embodiment.
- In step S602, the recognition unit 111 recognizes the degree of fatigue of the operator by executing the recognition process based on the input sensor data.
- In step S603, similarly to step S103 in FIG. 3, when it is recognized in the recognition process of step S602 that the operator is not tired (NO in step S603), this operation proceeds to step S106. On the other hand, when it is recognized that the operator is tired (YES in step S603), the recognition result is input to the device management unit 120 and transmitted to the site server 300 via a predetermined network (step S104).
- In the present embodiment, the recognition unit 111 uses as input, in addition to sensing data such as image data, depth image data, IMU data, and GNSS data, attribute information about the sensor unit 101 such as the height, angle, and FoV (angle of view) of the camera.
- FIG. 14 is a diagram for explaining a flow from recognition processing to warning to an operator in the information processing system 1 according to the present embodiment.
- The recognition unit 111 of the position / surrounding recognition unit 110 executes the recognition process by inputting, in addition to the sensor data input from the sensor unit 101 or the like, attribute information about the sensor unit 101 or the like.
- For example, if the sensor unit 101 or the like is a camera, the recognition unit 111 uses attribute information such as the height of the installation location, the posture (angle), and the FoV (angle of view) as additional input, and executes various recognition processes such as object recognition of humans and objects and semantic segmentation. The result of the recognition process may be used for a warning or the like for preventing contact between the device 100 and a person or an object, as in the above-described embodiment.
- When the camera (one of the sensor units 101 or the like) is mounted on the device 100, the silhouette of a person or an object reflected in the image data acquired by that camera has a special shape different from the silhouette of the person or object in image data acquired by a camera or the like mounted on an automobile or the like. Therefore, as in the present embodiment, by adding the attribute information related to the height and posture of the installation position of the sensor unit 101 or the like to the input of the recognition unit 111 and executing the recognition process, it is possible to improve the recognition accuracy for people and objects reflected as silhouettes with such special shapes, and it therefore becomes possible to accurately issue a warning to the operator. This makes it possible to further improve current and future safety in the field.
- A learning model that can accept such attribute information can be obtained by training or re-learning in which the attribute information is added to the training data. As a result, it becomes possible to cause the recognition unit 111 to execute more accurate recognition processing in consideration of the attribute information of the sensor unit 101 or the like.
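The attribute-augmented input described above amounts to appending the camera's installation attributes to the regular features before they reach the model. The following is a minimal sketch, not the patent's implementation: in a real system the combination would happen inside a neural network, and the normalization scales used here are assumptions.

```python
# Hypothetical sketch: combining camera attribute information (installation
# height, posture angle, FoV) with image features as one input vector for
# the recognizer, as described above.

def recognize_with_attributes(image_features, cam_height_m, cam_angle_deg, fov_deg):
    # Normalize attributes to comparable ranges (scales are assumptions).
    attrs = [cam_height_m / 10.0, cam_angle_deg / 90.0, fov_deg / 180.0]
    return image_features + attrs      # combined input vector for the model

x = recognize_with_attributes([0.2, 0.7], cam_height_m=3.0,
                              cam_angle_deg=45.0, fov_deg=90.0)
```

Training with the same concatenation applied to the training data, as the text notes, lets the model learn how silhouettes deform with camera height and posture.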
- the position / attitude information obtained from the IMU or GNSS receiver provided in the sensor unit 101 or the like or the device 100 may be used as the attribute information regarding the height and attitude of the installation position of the sensor unit 101 or the like.
- When the attribute information of the sensor unit 101 or the like is a dynamic value that changes during operation, the recognition unit 111 may execute an appropriate recognition process according to the change.
- the attribute information of the sensor unit 101 or the like may be a static value.
- dynamic values and static values may be mixed, such that a part of the attribute information is a dynamic value and the rest is a static value.
- FIG. 15 is a flowchart showing an operation flow example of the information processing system 1 according to the present embodiment.
- As shown in FIG. 15, in the present embodiment, among the same operations as those described with reference to FIG. 3, step S701 is added after step S101, and steps S102 and S103 are replaced with steps S702 and S703.
- In step S701, the attribute information of the sensor unit 101 and the like is input to the recognition unit 111.
- the present invention is not limited to this, and the attribute information of each sensor unit 101 or the like may be input in the form of accompanying the sensor data in step S101.
- In step S702, the recognition unit 111 executes the recognition process based on the input sensor data and attribute information. For example, when the present embodiment is applied to the earthquake recognition according to the first embodiment, the recognition unit 111 recognizes whether or not an earthquake has occurred by executing the recognition process based on the input sensor data and the attribute information.
- In step S703, it is determined, based on the result of the recognition process in step S702, whether or not a warning to the operator is necessary. For example, when the recognition unit 111 determines that an earthquake has occurred, that is, that a warning is required (YES in step S703), the recognition result is input to the device management unit 120. On the other hand, when it is determined that a warning is not necessary (NO in step S703), this operation proceeds to step S106.
- FIG. 16 is a diagram for explaining a flow from recognition processing to a warning to an operator in the information processing system 1 according to the present embodiment.
- In the present embodiment, the recognition unit 111 of the position / surrounding recognition unit 110 warns the operator according to the warning level set for each object or its type.
- each device 100 is provided with an attribute database 812 for managing the warning level set for each object or its type.
- the recognition unit 111 inputs image data, depth image data, IMU data, GNSS data, etc. input from the sensor unit 101 and the like, and recognizes the existence and position of people and objects existing around the own machine. Executes an area recognition process that recognizes the area in which they exist by semantic segmentation or the like.
- Based on the result of the area recognition process by the recognition unit 111, the device management unit 120 presents to the operator, via the monitor 131, the existence of objects and the result of semantic segmentation (the areas or positions where the objects exist) as auxiliary information for operating the device 100 (area presentation). The operator then selects, using, for example, UI132, the area containing the object for which a warning level is to be set from among the presented areas.
- the example of the object may be the same as the example described in the fifth embodiment, for example.
- When the operator inputs the selection of the area for which the warning level is to be set, the device management unit 120 sets, for the selected area, the object corresponding to the area, or the type to which the object belongs, the "appearance and position of the object and the designated warning intensity" (hereinafter also referred to as warning level information), and registers the warning level information set for each object or its type in the attribute database 812.
- the "view” and “position” in the warning level information may be the same as those in the object information according to the fifth embodiment.
- the recognition unit 111 executes various recognition processes for the sensor data input from the sensor unit 101 and the like.
- the recognition process executed by the recognition unit 111 may be any of the recognition processes exemplified in the above-described embodiment. Further, the recognition unit 111 may include an object for which warning level information is not set as a recognition target.
- the recognition unit 111 recognizes an object or area requiring a warning by inputting sensor data such as image data, depth image data, IMU data, and GNSS data input from the sensor unit 101 or the like, for example.
- Next, the recognition unit 111 executes an attribute information assigning process for assigning, to the object or area recognized as requiring a warning, the warning level information set for that object or its type in the attribute database 812. For example, the recognition unit 111 compares the object or its type obtained as the recognition result with the objects or types for which warning level information is registered in the attribute database 812, and assigns the applicable warning level information to the object or type with the same appearance and the same position. At that time, objects with the same appearance but at different positions may be warned at a low warning level (attention level). Then, the recognition unit 111 notifies the device management unit 120 of the information of the object or area to which the warning level information is assigned as the recognition result.
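The matching rule described above, same appearance and same position gets the registered level, same appearance at a different position gets a low attention level, can be sketched as follows. This is an illustrative sketch; the record shape (type plus position tuple) and the level names are assumptions.

```python
# Hypothetical sketch of the attribute information assigning process:
# match a recognized object against warning level information registered
# per object/type (standing in for the attribute database 812).

attribute_db = {
    ("crane", (10, 20)): "high",   # object type + position -> registered warning level
}

def assign_warning_level(obj_type, position, attention_level="attention"):
    if (obj_type, position) in attribute_db:
        return attribute_db[(obj_type, position)]   # same appearance, same position
    if any(t == obj_type for t, _ in attribute_db):
        return attention_level                      # same appearance, different position
    return None                                     # no warning level registered
```

In the real system "appearance" would be a learned visual match rather than an exact type string, but the decision structure is the same.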
- the area recognition process and the attribute information addition process may be executed by one recognition unit 111, or may be executed by different recognition units 111 included in the position / periphery recognition unit 110.
- the device management unit 120 issues a warning according to the warning level given to the object or area for which a warning is required in the recognition result to the operator via the monitor 131 or the output unit 133.
- The warning level information registered in the attribute database 812 of a certain device 100 may be collected in the site server 300 and shared with other devices 100 at the same site, devices 100 at other sites, or the like. As a result, the warning level set for each object or its type can be used by two or more devices 100, so that the warning level setting work can be made more efficient. When giving a warning based on warning level information set in another device 100, the operator may be notified, in addition to the warning according to the warning level, that the warning level information has already been set. Further, the collection of warning level information by the site server 300 and the sharing of the warning level information between the devices 100 (downloading to each device 100) may be performed automatically by the device 100 or the site server 300, or may be performed manually by the operator.
- the warning level information registered in the attribute database 812 may be uploaded to the cloud 200 for analysis and used for training or re-learning of the learning model.
- As a result, the recognition performance of the recognition unit 111 can be improved. For this training or re-learning, a method using a simulation, a method using an abnormality detection method, or the like may be used, as in the above-described embodiment.
- FIG. 17 is a flowchart showing an example of the operation flow of the information processing system 1 according to the present embodiment.
- the information processing system 1 first selects a region to be monitored, as in the fifth embodiment. Specifically, for example, similarly to step S101 in FIG. 3, first, the sensor data acquired by the sensor unit 101 or the like is input to the recognition unit 111 of the position / periphery recognition unit 110 (step S801).
- the recognition unit 111 executes the area recognition process based on the input sensor data (step S802).
- the area in which the object exists is labeled by, for example, semantic segmentation, and the area of the object is specified.
- the recognized object or area is notified to the device management unit 120.
- the device management unit 120 presents the recognized object or area to the operator via the monitor 131 (step S803).
- When the operator selects an object or area using, for example, UI132, and specifies a warning level for the selected object or area (YES in step S804), the device management unit 120 registers the warning level information regarding the selected object or area in the attribute database 812 (step S805). If no selection by the operator is input (NO in step S804), this operation returns to step S801.
- the recognition process is executed for the object for which the warning level is specified and the object or area of the same type. Specifically, for example, similarly to step S101 in FIG. 3, first, the sensor data acquired by the sensor unit 101 or the like is input to the recognition unit 111 of the position / periphery recognition unit 110 (step S806).
- the recognition unit 111 executes recognition processing for the target object or area based on the input sensor data (step S807).
- the recognition unit 111 determines whether or not a warning to the operator regarding the target object or area is necessary based on the result of the recognition process (step S808). If it is determined that the warning is not necessary (NO in step S808), this operation proceeds to step S811. On the other hand, when it is determined that a warning is necessary (YES in step S808), the recognition unit 111 notifies the device management unit 120 of the recognition result (step S809). The recognition result may be notified from the recognition unit 111 or the device management unit 120 to the site server 300 and shared with other devices 100.
- the device management unit 120 issues a warning at the specified warning level regarding the object or area for which the warning is determined to be necessary to the operator via, for example, the monitor 131 or the output unit 133 (step S810).
- the device management unit 120 determines whether or not to change the object or area to be warned based on, for example, an operation input by the operator from UI132 or the like (step S811). If the object or area to be warned is not changed (NO in step S811), this operation returns to step S806.
- In step S811, when the area to be warned is changed (YES in step S811), the control unit controlling the device 100 determines whether or not to end this operation. When it is determined to end (YES in step S812), this operation ends. On the other hand, if it is determined that the operation does not end (NO in step S812), this operation returns to step S801, and the subsequent operations are executed.
- As described above, in the present embodiment, a warning level is set for each object or its type based on one or more sensor data input from the sensor unit 101 or the like, and a warning is issued to the operator at the set warning level, so that the operator can know more accurately what kind of situation the object to be protected is in. This makes it possible to improve current and future safety in the field. Since other configurations, operations, and effects may be the same as those in the above-described embodiment, detailed description thereof will be omitted here.
- FIG. 18 is a diagram for explaining a flow from recognition processing to a warning to an operator in the information processing system 1 according to the present embodiment.
- In the present embodiment, the recognition unit 111 of the position / periphery recognition unit 110 executes an area exclusion process that determines the necessity of a warning for the object or area specified in the recognition result.
- each device 100 is provided with an exclusion database 912 for excluding objects or their types from the target of warning.
- the recognition unit 111 inputs image data, depth image data, IMU data, GNSS data, etc. input from the sensor unit 101 and the like, and recognizes the existence and position of people and objects existing around the own machine. Executes an area recognition process that recognizes the area in which they exist by semantic segmentation or the like.
- Based on the result of the area recognition process by the recognition unit 111, the device management unit 120 presents to the operator, via the monitor 131, the existence of objects and the result of semantic segmentation (the areas or positions where the objects exist) as auxiliary information for operating the device 100 (area presentation). The operator then selects, using, for example, UI132, the area containing the object to be excluded from the warning from among the presented areas.
- the example of the object may be the same as the example described in the fifth embodiment, for example.
- When the operator inputs the selection of the area to be excluded from the warning, the device management unit 120 sets, for the selected area, the object corresponding to the area, or the type to which the object belongs, the "appearance and position of the object" (hereinafter also referred to as exclusion information), and registers the exclusion information set for each object or its type in the exclusion database 912.
- the "view” and "position” in the exclusion information may be the same as those in the object information according to the fifth embodiment.
- the recognition unit 111 executes various recognition processes for the sensor data input from the sensor unit 101 and the like.
- the recognition process executed by the recognition unit 111 may be any of the recognition processes exemplified in the above-described embodiment. Further, the recognition unit 111 may include an object excluded from the warning target as a recognition target.
- the recognition unit 111 recognizes an object or area requiring a warning by inputting sensor data such as image data, depth image data, IMU data, and GNSS data input from the sensor unit 101 or the like, for example.
- the recognition unit 111 executes an area exclusion process for determining whether or not exclusion information is registered in the exclusion database 912 with respect to the object or area recognized by the area recognition process. For example, the recognition unit 111 compares the object or its type obtained as the recognition result with the object or its type registered as a non-warning target in the exclusion database 912, and the object or the object at the same position with the same appearance. The type is excluded from the warning.
- However, the present invention is not limited to this, and objects having the same appearance but at different positions may be warned at a low warning level (attention level).
- the recognition unit 111 notifies the device management unit 120 of the information of the object or area that is not excluded from the warning in the exclusion database 912 as the recognition result.
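The area exclusion process described above, filtering out recognized objects that match an entry in the exclusion database before any warning is issued, can be sketched as follows. This is an illustrative sketch; the data shapes (type plus position tuple) are assumptions.

```python
# Hypothetical sketch of the area exclusion process: objects/areas whose
# appearance and position match an entry registered as non-warning targets
# (standing in for the exclusion database 912) are dropped before warning.

exclusion_db = {("material_pile", (5, 5))}

def filter_warnings(recognized):
    """Keep only objects/areas that are not excluded from the warning."""
    return [(t, pos) for t, pos in recognized if (t, pos) not in exclusion_db]

kept = filter_warnings([("person", (2, 3)), ("material_pile", (5, 5))])
```

Only the entries surviving this filter are passed to the device management unit 120, matching the "warn only about objects of interest" behavior described below.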
- the area recognition process and the area exclusion process may be executed by one recognition unit 111, or may be executed by different recognition units 111 included in the position / periphery recognition unit 110.
- the device management unit 120 issues a warning to the warning target object or area to the operator via the monitor 131 and the output unit 133 based on the recognition result notified from the recognition unit 111.
- the operator is issued only a warning about an object or area of interest, so that a more accurate warning can be issued to the operator.
- The exclusion information registered in the exclusion database 912 of a certain device 100 may be collected in the site server 300 and shared with other devices 100 at the same site, devices 100 at other sites, or the like. As a result, an object to be excluded or its type can be set across two or more devices 100, so that the exclusion work can be made more efficient. When an object or area is excluded based on exclusion information set in another device 100, the operator may be notified of the excluded object or area in addition to the warning for the target object or area. Further, the collection of exclusion information by the site server 300 and the sharing of the exclusion information between the devices 100 (downloading to each device 100) may be performed automatically by the device 100 or the site server 300, or may be performed manually by the operator.
- the exclusion information registered in the exclusion database 912 may be uploaded to the cloud 200 for analysis and used for training or re-learning of the learning model.
- As a result, the recognition performance of the recognition unit 111 can be improved. For this training or re-learning, a method using a simulation, a method using an abnormality detection method, or the like may be used, as in the above-described embodiment.
- FIG. 19 is a flowchart showing an example of the operation flow of the information processing system 1 according to the present embodiment.
- the information processing system 1 first selects a region to be monitored, as in the fifth embodiment. Specifically, for example, similarly to step S101 in FIG. 3, first, the sensor data acquired by the sensor unit 101 or the like is input to the recognition unit 111 of the position / periphery recognition unit 110 (step S901).
- the recognition unit 111 executes the area recognition process based on the input sensor data (step S902).
- the area in which the object exists is labeled by, for example, semantic segmentation, and the area of the object is specified.
- the recognized object or area is notified to the device management unit 120.
- The device management unit 120 presents the recognized object or area to the operator via the monitor 131 (step S903).
- When the operator selects an object or area to be excluded using, for example, UI132 (YES in step S904), the device management unit 120 registers the exclusion information regarding the selected object or area in the exclusion database 912 (step S905). If no selection by the operator is input (NO in step S904), this operation returns to step S901.
- Next, the sensor data acquired by the sensor unit 101 or the like is input to the recognition unit 111 of the position / periphery recognition unit 110 (step S906).
- the recognition unit 111 executes the recognition process for the object or area based on the input sensor data (step S907).
- the recognition unit 111 determines whether or not the recognized object or area is excluded from the warning based on the result of the recognition process (step S908). If it is excluded from the warning (YES in step S908), this operation proceeds to step S911. On the other hand, when it is the target of the warning (NO in step S908), the recognition unit 111 notifies the device management unit 120 of the recognition result (step S909). The recognition result may be notified from the recognition unit 111 or the device management unit 120 to the site server 300 and shared with other devices 100.
- the device management unit 120 issues a warning regarding the object or area targeted for the warning to the operator via, for example, the monitor 131 or the output unit 133 (step S910).
- the device management unit 120 determines whether or not to change the object or area to be excluded based on, for example, an operation input by the operator from UI132 or the like (step S911). If the object or area to be excluded is not changed (NO in step S911), this operation returns to step S906.
- In step S911, when the area to be excluded is changed (YES in step S911), the control unit controlling the device 100 determines whether or not to end this operation. When it is determined to end (YES in step S912), this operation ends. On the other hand, if it is determined that the operation does not end (NO in step S912), this operation returns to step S901, and the subsequent operations are executed.
- FIG. 20 is a diagram for explaining a flow from recognition processing to warning to an operator in the information processing system 1 according to the present embodiment.
- In the present embodiment, the recognition unit 111 of the position / surrounding recognition unit 110 executes a recognition process that recognizes the approach of an object or area designated by the operator as a dangerous object or dangerous area, that is, an object of approach monitoring, from among the objects specified in the recognition result or objects of the same type.
- each device 100 is provided with an approach monitoring database 1012 for registering an object to be monitored for approach to the device 100 or its type.
- the recognition unit 111 inputs image data, depth image data, IMU data, GNSS data, etc. input from the sensor unit 101 and the like, and recognizes the existence and position of people and objects existing around the own machine. Executes an area recognition process that recognizes the area in which they exist by semantic segmentation or the like.
- As auxiliary information for the operation of the device 100, the device management unit 120 presents to the operator, via the monitor 131, the existence of objects and the result of semantic segmentation (the areas or positions where the objects exist) based on the result of the area recognition process by the recognition unit 111, in addition to the image of the surroundings of the device 100 (hereinafter also referred to as the surrounding image) acquired by the sensor unit 101 or the like (area presentation). The operator then selects, using, for example, UI132, the area containing the object to be the target of approach monitoring from among the areas presented together with the surrounding image.
- the example of the object may be the same as the example described in the fifth embodiment, for example.
- When the operator inputs the selection of the area to be monitored for approach, the device management unit 120 sets, for the selected area, the object corresponding to the area, or the type to which the object belongs, the "appearance and position of the object" (hereinafter also referred to as approach monitoring information), and registers the approach monitoring information set for each object or its type in the approach monitoring database 1012.
- the "view” and “position” in the approach monitoring information may be the same as those in the object information according to the fifth embodiment.
- the recognition unit 111 executes various recognition processes for the sensor data input from the sensor unit 101 or the like.
- For example, the recognition unit 111 recognizes, by inputting sensor data such as image data, depth image data, IMU data, and GNSS data input from the sensor unit 101 or the like, the approach of an object or area to the device 100 (or the approach of the device 100 to the object or area).
- Next, the recognition unit 111 executes an approach recognition process that determines, for the objects or areas for which approach monitoring information is registered in the approach monitoring database 1012, whether or not an approach to the device 100 has occurred. For example, the recognition unit 111 compares the object or its type obtained as the recognition result with the objects or types registered as monitoring targets in the approach monitoring database 1012, monitors the object or type with the same appearance and the same position, and recognizes whether the distance between it and the device 100 is within a predetermined distance. At that time, the distance for determining the necessity of notification may be changed according to the object or its type.
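The distance check described above, with a threshold that may vary by object type, can be sketched as follows. This is an illustrative sketch under stated assumptions: the threshold table, type names, and the use of 2-D Euclidean distance are all hypothetical.

```python
# Hypothetical sketch of the approach recognition process: compare the
# distance between the device 100 and a monitored object/area against a
# per-type threshold (standing in for the approach monitoring database 1012).

import math

approach_thresholds_m = {"pit": 5.0, "power_line": 10.0}   # assumed per-type distances

def approaching(device_pos, obj_type, obj_pos, default_m=3.0):
    limit = approach_thresholds_m.get(obj_type, default_m)
    dist = math.dist(device_pos, obj_pos)   # Euclidean distance (Python 3.8+)
    return dist <= limit
```

A positive result here is what triggers the notification to the device management unit 120 described in the following paragraph.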
- the recognition unit 111 notifies the device management unit 120 of the recognition result for the object or area.
- the area recognition process and the proximity recognition process may be executed by one recognition unit 111, or may be executed by different recognition units 111 included in the position / periphery recognition unit 110.
- the device management unit 120 notifies the operator via the monitor 131 and the output unit 133 that the object or area to be monitored for approach has approached, based on the recognition result notified from the recognition unit 111. As a result, the operator can accurately know that the device 100 has approached a dangerous object or a dangerous area.
- The approach monitoring information registered in the approach monitoring database 1012 of a certain device 100 may be collected in the site server 300 and shared with other devices 100 at the same site, devices 100 at other sites, or the like. As a result, the object to be monitored or its type can be set across two or more devices 100, so that the registration work of the monitoring target can be made more efficient. When monitoring the approach of an object or area based on approach monitoring information set in another device 100, the operator may be notified of the monitored object or area in addition to the notification regarding the target object or area. Further, the collection of approach monitoring information by the site server 300 and the sharing of the approach monitoring information between the devices 100 (downloading to each device 100) may be performed automatically by the device 100 or the site server 300, or may be performed manually by the operator.
- the approach monitoring information registered in the approach monitoring database 1012 may be uploaded to the cloud 200, analyzed, and used for training or re-learning of the learning model.
- the recognition performance of the recognition unit 111 can thereby be improved.
- for this, a method using simulation, a method using anomaly detection, or the like may be used, as in the embodiments described above.
- FIG. 21 is a flowchart showing an example of the operation flow of the information processing system 1 according to the present embodiment.
- the information processing system 1 first selects a region to be monitored, as in the fifth embodiment. Specifically, for example, similarly to step S101 in FIG. 3, first, the sensor data acquired by the sensor unit 101 or the like is input to the recognition unit 111 of the position / periphery recognition unit 110 (step S1001).
- the recognition unit 111 executes the area recognition process based on the input sensor data (step S1002).
- the area in which the object exists is labeled by, for example, semantic segmentation, and the area of the object is specified.
- the recognized object or area is notified to the device management unit 120.
- the device management unit 120 presents the recognized object or area to the operator via the monitor 131 together with the surrounding image input from the sensor unit 101 or the like (step S1003).
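As a rough sketch of how an area labeled by semantic segmentation could be turned into a presentable object region, the following computes the bounding box of a target label in a small mask. The mask layout and label values are assumptions for illustration, not part of the disclosure:

```python
def bounding_box(mask, target_label):
    """mask: 2D list of integer class labels (e.g., a semantic segmentation result).
    Returns (min_row, min_col, max_row, max_col) enclosing the target label,
    or None if the label is absent."""
    rows = [r for r, row in enumerate(mask) for v in row if v == target_label]
    cols = [c for row in mask for c, v in enumerate(row) if v == target_label]
    if not rows:
        return None
    return (min(rows), min(cols), max(rows), max(cols))
```

A device management unit could draw such a box over the surrounding image on the monitor 131 so the operator can select it.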
- when a selection by the operator is input (YES in step S1004), the device management unit 120 registers approach monitoring information for the selected object or area in the approach monitoring database 1012 (step S1005). If no selection by the operator is input (NO in step S1004), this operation returns to step S1001.
- next, the approach recognition process for the monitored object or area is executed. Specifically, for example, as in step S101 of FIG. 3, the sensor data acquired by the sensor unit 101 or the like is input to the recognition unit 111 of the position/periphery recognition unit 110 (step S1006).
- the recognition unit 111 executes an approach recognition process for recognizing whether or not the object or area has approached the device 100 based on the input sensor data (step S1007).
- the recognition unit 111 determines, based on the result of the recognition process, whether an object or area close to the device 100 is a monitoring target (step S1008). If it is not a monitoring target (NO in step S1008), this operation proceeds to step S1011. On the other hand, if it is a monitoring target (YES in step S1008), the recognition unit 111 notifies the device management unit 120 of the recognition result (step S1009). The recognition result may also be notified from the recognition unit 111 or the device management unit 120 to the site server 300 and shared with other devices 100.
- the device management unit 120 notifies the operator that the monitored object or area is close to the device 100, for example, via the monitor 131 or the output unit 133 (step S1010).
- the device management unit 120 determines whether or not to change the object or area to be monitored based on, for example, an operation input by the operator via the UI 132 or the like (step S1011). If the object or area to be monitored is not changed (NO in step S1011), this operation returns to step S1006.
- when the object or area to be monitored is changed (YES in step S1011), the control unit controlling the device 100 determines whether or not to end this operation; when it determines to end (YES in step S1012), this operation ends. On the other hand, when it determines not to end (NO in step S1012), this operation returns to step S1001 and the subsequent steps are executed.
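One pass of the flow in FIG. 21 (roughly steps S1001 to S1010) can be summarized as below; every function passed in is a stand-in assumption for the recognition and UI components, not an API from the disclosure:

```python
def monitoring_cycle(get_sensor_data, recognize_regions, recognize_approach,
                     operator_select, notify, registry):
    """One simplified pass of FIG. 21: select a target, register it,
    then check registered targets for approach and notify."""
    # S1001-S1003: recognize candidate objects/areas and present them
    data = get_sensor_data()
    candidates = recognize_regions(data)
    selection = operator_select(candidates)
    if selection is None:            # NO in S1004: start over
        return "reselect"
    registry.append(selection)       # S1005: register in the database
    # S1006-S1010: watch for approach to any registered target
    data = get_sensor_data()
    for target in registry:
        if recognize_approach(data, target):   # S1007-S1008
            notify(target)                     # S1009-S1010
    return "monitoring"
```

This deliberately flattens the loop structure; the actual flow repeats steps S1006 to S1011 until the operator changes the monitoring target.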
- as described above, according to the present embodiment, the operator can be notified that a dangerous object or dangerous area designated by the operator has approached the device 100, based on one or more pieces of sensor data input from the sensor unit 101 or the like. The operator can thus more reliably secure the safety of the device 100 and of themselves, which makes it possible to improve current and future safety at the site. Other configurations, operations, and effects may be the same as in the embodiments described above, so detailed description is omitted here.
- the sensor data acquired by the sensor unit 101 or the like and the recognition result thereof may be uploaded to the cloud 200 or the like, analyzed and corrected, and used for training or re-learning of the learning model.
- information related to the object or area specified by the operator, such as object information, attribute information, exclusion information, and approach caution information, may also be uploaded to the cloud 200 or the like.
- information on regions identified by the recognition unit 111 as a person or a thing (hereinafter also referred to as extracted information), such as a region enclosed by a bounding box or a region recognized as a person or a thing by semantic segmentation (hereinafter also referred to as a free region), may likewise be uploaded to the cloud 200 or the like.
- in this way, extracted information effective for training and re-learning of the learning model is extracted from the sensor data acquired by the sensor unit 101 or the like, uploaded to the cloud 200, and used for training and re-learning of the learning model.
- FIG. 22 is a diagram for explaining a flow from recognition processing to warning to an operator in the information processing system 1 according to the present embodiment.
- the recognition unit 111 of the position/periphery recognition unit 110 executes, based on the sensor data, an extraction process that extracts the extracted information to be used for training and re-learning of the learning model.
- for example, when the sensor data is image data, depth image data, or the like, the recognition unit 111 extracts from the sensor data a region enclosed by a bounding box or a free region labeled by semantic segmentation (hereinafter collectively referred to as a region of interest).
- the extracted region of interest is uploaded to the learning unit 201 of the cloud 200.
- various information associated with the region of interest, such as object information, attribute information, exclusion information, and approach caution information, may also be uploaded to the learning unit 201, which makes it possible to further improve the performance and functionality of the learning model.
- uploading only the extracted information extracted from the sensor data to the cloud 200, rather than the sensor data itself, reduces the amount of information to be uploaded. However, both the sensor data acquired by the sensor unit 101 or the like and the extracted information extracted from that sensor data may be uploaded to the cloud 200.
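A minimal illustration of why uploading only the region of interest reduces transfer volume; the frame size and region coordinates are arbitrary assumptions:

```python
def crop_region(image, box):
    """image: 2D list (e.g., a grayscale frame); box: (r0, c0, r1, c1), inclusive."""
    r0, c0, r1, c1 = box
    return [row[c0:c1 + 1] for row in image[r0:r1 + 1]]

frame = [[0] * 640 for _ in range(480)]          # stand-in for a full sensor frame
roi = crop_region(frame, (100, 200, 149, 299))   # 50 x 100 region of interest
full_px = 640 * 480                              # 307200 pixels
roi_px = len(roi) * len(roi[0])                  # 5000 pixels
# uploading only the ROI here transfers under 2% of the full frame
```

The same arithmetic applies per channel for color or depth data.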
- FIG. 23 is a flowchart showing an example of the operation flow of the information processing system 1 according to the present embodiment.
- the sensor data acquired by the sensor unit 101 or the like is input to the recognition unit 111 of the position / periphery recognition unit 110 (step S1101).
- the recognition unit 111 executes an extraction process for the input sensor data (step S1102).
- in the extraction process, for example, a region enclosed by a bounding box or a free region labeled by semantic segmentation is extracted.
- the device management unit 120 uploads the extracted extracted information to the cloud 200 (step S1103).
- the learning unit 201 on the cloud 200 side executes training and re-learning of the learning model using the uploaded extracted information.
- the information uploaded from the recognition unit 111 to the cloud 200 may include various information such as the sensor data itself, the recognition result, object information, attribute information, exclusion information, and approach caution information.
- it is then determined whether or not to end this operation (step S1104); if it is determined to end (YES in step S1104), this operation ends. On the other hand, if it is determined not to end (NO in step S1104), this operation returns to step S1101 and the subsequent steps are executed.
- FIG. 24 is a hardware configuration diagram showing an example of a computer 1000 that realizes the functions of the position/periphery recognition unit 110, the device management unit 120, the learning unit 201, the site server 300, and the like.
- the computer 1000 has a CPU 1100, a RAM 1200, a ROM (Read Only Memory) 1300, an HDD (Hard Disk Drive) 1400, a communication interface 1500, and an input / output interface 1600. Each part of the computer 1000 is connected by a bus 1050.
- the CPU 1100 operates based on the program stored in the ROM 1300 or the HDD 1400, and controls each part. For example, the CPU 1100 expands the program stored in the ROM 1300 or the HDD 1400 into the RAM 1200, and executes processing corresponding to various programs.
- the ROM 1300 stores a boot program such as a BIOS (Basic Input Output System) executed by the CPU 1100 when the computer 1000 is started, a program depending on the hardware of the computer 1000, and the like.
- BIOS Basic Input Output System
- the HDD 1400 is a computer-readable recording medium that non-transitorily records programs executed by the CPU 1100 and data used by such programs.
- specifically, the HDD 1400 is a recording medium that records a program, an example of program data 1450, for executing each operation according to the present disclosure.
- the communication interface 1500 is an interface for the computer 1000 to connect to an external network 1550 (for example, the Internet).
- the CPU 1100 receives data from another device or transmits data generated by the CPU 1100 to another device via the communication interface 1500.
- the input / output interface 1600 has a configuration including the above-mentioned I / F unit 18, and is an interface for connecting the input / output device 1650 and the computer 1000.
- the CPU 1100 receives data from an input device such as a keyboard or mouse via the input / output interface 1600. Further, the CPU 1100 transmits data to an output device such as a display, a speaker, or a printer via the input / output interface 1600.
- the input / output interface 1600 may function as a media interface for reading a program or the like recorded on a predetermined recording medium (media).
- the media is, for example, an optical recording medium such as DVD (Digital Versatile Disc) or PD (Phase change rewritable Disk), a magneto-optical recording medium such as MO (Magneto-Optical disk), a tape medium, a magnetic recording medium, or a semiconductor memory.
- the CPU 1100 of the computer 1000 executes a program loaded into the RAM 1200, thereby realizing the functions of the position/periphery recognition unit 110, the device management unit 120, the learning unit 201, the site server 300, and the like.
- the program and the like related to the present disclosure are stored in the HDD 1400.
- the CPU 1100 reads the program data 1450 from the HDD 1400 and executes the program, but as another example, these programs may be acquired from another device via the external network 1550.
- the present technology can also have the following configurations.
- (1) An information processing system for ensuring the safety of a site where heavy machinery is introduced, the system including: one or more sensor units that are mounted on a device placed at the site and detect the situation at the site; a recognition unit that recognizes the situation at the site based on sensor data acquired by the one or more sensor units; and a device management unit that manages the device based on a recognition result by the recognition unit.
- (2) The information processing system according to (1) above, wherein the recognition unit includes a learning model using a neural network.
- (3) The information processing system according to (1) or (2) above, wherein the one or more sensor units include at least one of an image sensor, a distance measuring sensor, an EVS (Event-based Vision Sensor), an inertial sensor, a position sensor, a sound sensor, an air pressure sensor, a water pressure sensor, an illuminance sensor, a temperature sensor, a humidity sensor, an infrared sensor, and a wind direction and speed sensor.
- (4) The information processing system according to any one of (1) to (3) above, wherein the device management unit notifies the operator of the device of the recognition result, or executes control to issue a warning toward the operator based on the recognition result.
- (5) The information processing system according to any one of (1) to (4) above, wherein the recognition unit recognizes an earthquake based on the sensor data.
- (6) The information processing system according to any one of (1) to (4) above, wherein the recognition unit recognizes an accident based on the sensor data.
- (7) The information processing system according to any one of (1) to (4) above, wherein the recognition unit recognizes a situation leading to an accident based on the sensor data.
- (8) The information processing system according to any one of (1) to (4) above, wherein the recognition unit recognizes the likelihood that an accident will occur based on the sensor data.
- (9) The information processing system according to any one of (1) to (4) above, wherein the recognition unit recognizes or predicts the movement of an object or area around the device based on the sensor data.
- (10) The information processing system according to any one of (1) to (4) above, wherein the recognition unit recognizes the fatigue of an operator who operates the device based on the sensor data.
- (11) The information processing system according to (10) above, wherein the recognition unit recognizes the fatigue of the operator based on the operating time of the device in addition to the sensor data.
- (12) The information processing system according to any one of (1) to (11) above, wherein the recognition unit recognizes the situation at the site based on attribute information of the one or more sensor units in addition to the sensor data.
- (13) The information processing system according to any one of (1) to (12) above, wherein the recognition unit executes a first recognition process for recognizing an object or area existing around the device based on the sensor data, and a second recognition process for recognizing the situation at the site based on the sensor data, and the device management unit executes control to issue, based on a recognition result of the second recognition process, a warning of an intensity corresponding to the object or area recognized by the first recognition process toward the operator of the device.
- (14) The information processing system according to (13) above, further including a holding unit that holds the intensity of the warning for each object or area, wherein the device management unit causes the operator to set the intensity of the warning for the object or area recognized by the first recognition process, the holding unit holds the intensity of the warning for each object or area set by the operator, and the device management unit executes control to issue, based on a recognition result of the second recognition process, a warning toward the operator of the device according to the intensity of the warning for each object or area held in the holding unit.
- (15) The information processing system according to any one of (1) to (12) above, further including a holding unit that holds, for each object or area, exclusion information indicating whether to exclude the object or area from warning targets, wherein the recognition unit executes a first recognition process for recognizing an object or area existing around the device based on the sensor data, and a second recognition process for recognizing the situation at the site based on the sensor data, and the device management unit does not execute control to issue a warning regarding the object or area recognized in the first recognition process when that object or area is excluded from warning targets in the exclusion information held in the holding unit.
- (16) The information processing system according to any one of (1) to (12) above, wherein the recognition unit executes a first recognition process for recognizing an object or area existing around the device based on the sensor data, and a second recognition process for recognizing the approach of the object or area to the device based on the sensor data, and the device management unit executes control to issue a notification toward the operator of the device when, based on a recognition result of the second recognition process, the object or area recognized by the first recognition process is approaching the device.
- (17) The information processing system according to (2) above, further including a learning unit that trains or re-trains the learning model, wherein the recognition unit executes an extraction process for extracting, from the sensor data, extracted information that is a part of the sensor data, and transmits the extracted information extracted by the extraction process to the learning unit, and the learning unit trains or re-trains the learning model using the extracted information received from the recognition unit.
- (18) An information processing method for ensuring the safety of a site where heavy machinery is introduced, the method including: a recognition step of recognizing the situation at the site based on sensor data acquired by one or more sensor units that are mounted on a device placed at the site and detect the situation at the site; and a device management step of managing the device based on a recognition result of the recognition step.
- 1 Information processing system
- 100 Device
- 101, 104, 107 Sensor unit
- 102 Image sensor
- 103, 106, 109 Signal processing unit
- 105 Inertial sensor
- 108 Position sensor
- 110 Position/periphery recognition unit
- 111 Recognition unit
- 120 Device management unit
- 131 Monitor
- 132 User interface
- 133 Output unit
- 134 Device control unit
- 135 Operation system
- 512 Object database
- 812 Attribute database
- 912 Exclusion database
- 1012 Approach monitoring database
Abstract
Description
1. System configuration example
2. First embodiment
2.1 Processing flow example
2.2 Operation flow example
2.3 Summary
3. Second embodiment
3.1 Processing flow example
3.2 Operation flow example
3.3 Summary
4. Third embodiment
4.1 Processing flow example
4.2 Operation flow example
4.3 Summary
5. Fourth embodiment
5.1 Processing flow example
5.2 Operation flow example
5.3 Summary
6. Fifth embodiment
6.1 Processing flow example
6.2 Operation flow example
6.3 Summary
7. Sixth embodiment
7.1 Processing flow example
7.2 Operation flow example
7.3 Summary
8. Seventh embodiment
8.1 Processing flow example
8.2 Operation flow example
8.3 Summary
9. Eighth embodiment
9.1 Processing flow example
9.2 Operation flow example
9.3 Summary
10. Ninth embodiment
10.1 Processing flow example
10.2 Operation flow example
10.3 Summary
11. Tenth embodiment
11.1 Processing flow example
11.2 Operation flow example
11.3 Summary
12. Eleventh embodiment
12.1 Processing flow example
12.2 Operation flow example
12.3 Summary
13. Hardware configuration
First, a system configuration example common to the following embodiments will be described in detail with reference to the drawings. FIG. 1 is a block diagram showing a system configuration example of an information processing system according to an embodiment of the present disclosure.
The device 100 is construction equipment such as heavy machinery, or measurement equipment, used at a work site such as a construction site. Construction equipment carrying measurement equipment is also included in the device 100. The device 100 is not limited to construction or measurement equipment; various objects that can connect to a predetermined network and include sensors may be applied as the device 100, such as mobile bodies operated directly or remotely by a driver (automobiles, railway vehicles, aircraft including helicopters, ships, and the like), autonomous robots such as transport robots, cleaning robots, interactive robots, and pet-type robots, various drones (flying, traveling, underwater, and the like), structures such as surveillance cameras (including fixed-point cameras) and traffic signals, and smartphones, wearable devices, and information processing terminals carried by people or pets.
The device 100 includes one or more sensor units 101, 104, 107. Each sensor unit is composed of sensors and of signal processing units 103, 106, 109 that execute predetermined processing on the detection signals output from each sensor to generate sensor data. The sensors include, for example, various image sensors 102 (a color or monochrome image sensor; distance measuring sensors such as a ToF (Time-of-Flight) sensor, LiDAR (Light Detection and Ranging), LADAR (Laser Detection and Ranging), or millimeter-wave radar; an EVS (Event-based Vision Sensor); and the like), various inertial sensors 105 such as an IMU (Inertial Measurement Unit), a gyro sensor, or an acceleration/angular velocity sensor, and various position sensors 108 such as GNSS (Global Navigation Satellite System).
The position/periphery recognition unit 110 recognizes the position of the device 100 and the situation around the device 100 based on sensor data input from one or more of the sensor units 101, 104, 107 (hereinafter, for simplicity, referred to as the "sensor unit 101 or the like"). The position of the device 100 may be a global coordinate position acquired by GNSS or the like, or a coordinate position within some space estimated by SLAM (Simultaneous Localization and Mapping) or the like.
The device management unit 120 is a control unit that manages and controls the overall operation of the device 100. For example, when the device 100 is a mobile body such as heavy machinery or an automobile, the device management unit 120 may be a control unit that comprehensively controls the entire vehicle, such as an ECU (Electronic Control Unit). When the device 100 is a fixed or semi-fixed device such as a sensor device, it may be a control unit that controls the overall operation of the device 100.
The monitor 131 may be a display unit that presents various information to the operator of the device 100, surrounding persons, and the like.
The cloud 200 is a service form that provides computer resources via a computer network such as the Internet, and is composed of, for example, one or more cloud servers arranged on a network. The cloud 200 includes, for example, a learning unit 201 for training the recognition unit 111. For example, when the recognition unit 111 is an inference engine including a learning model, the learning unit 201 trains the learning model using supervised or unsupervised learning. The trained learning model is downloaded to the device 100 and implemented, for example, in the recognition unit 111 of the position/periphery recognition unit 110. When the recognition unit 111 is rule-based, the learning unit 201 manually or automatically creates and updates an algorithm for deriving an output as a recognition result from an input; a program describing the created or updated algorithm is downloaded to the device 100 and executed, for example, in the recognition unit 111 of the position/periphery recognition unit 110.
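The train-on-cloud, download-to-device cycle described above can be sketched with stand-in classes; the model representation and interfaces are assumptions for illustration, not from the disclosure:

```python
class LearningUnit:
    """Cloud-side learning unit (cf. learning unit 201): trains a model from labeled samples."""
    def train(self, samples):
        # Stand-in "model": predict the majority label seen during training.
        labels = [label for _, label in samples]
        majority = max(set(labels), key=labels.count)
        return {"version": 1, "predict": lambda x: majority}

class RecognitionUnit:
    """Device-side recognition unit (cf. recognition unit 111): runs the downloaded model."""
    def __init__(self):
        self.model = None
    def download(self, model):
        self.model = model
    def recognize(self, sensor_data):
        return self.model["predict"](sensor_data)
```

In practice the trained artifact would be neural network weights rather than a dictionary, but the deployment direction (cloud trains, device infers) is the same.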
The site server 300 is a server for managing one or more devices 100 introduced at one or more sites. For example, when the site is a construction site, the site server 300 includes a site management unit 301 and a construction planning unit 302.
Next, a first embodiment using the information processing system 1 described above will be described in detail with reference to the drawings. The first embodiment describes a case where earthquakes are recognized using the information processing system 1, thereby improving current and future safety at the site.
FIG. 2 is a diagram for explaining the flow from earthquake recognition to a warning to the operator in the information processing system 1 according to the present embodiment. As shown in FIG. 2, when the information processing system 1 is applied to earthquake recognition, the recognition unit 111 of the position/periphery recognition unit 110 recognizes the presence or absence of an earthquake using the sensor data input from the sensor unit 101 or the like.
FIG. 3 is a flowchart showing an example of the operation flow of the information processing system 1 according to the present embodiment. As shown in FIG. 3, in the present embodiment, sensor data acquired by the sensor unit 101 or the like is first input to the recognition unit 111 of the position/periphery recognition unit 110 (step S101). Sensor data may be input from the sensor unit 101 or the like periodically, or as needed, for example when the sensor data shows an abnormal value. The sensor data input from the sensor unit 101 or the like may be raw data, or processed data that has undergone processing such as mosaicking.
As described above, according to the present embodiment, the presence or absence of an earthquake can be accurately recognized based on one or more pieces of sensor data input from the sensor unit 101 or the like, so current and future safety at the site can be improved.
Next, a second embodiment using the information processing system 1 described above will be described in detail with reference to the drawings. The second embodiment describes a case where the occurrence of an accident is recognized using the information processing system 1, thereby improving current and future safety at the site. In the following description, configurations, operations, and effects similar to those of the embodiments described above are cited, and redundant description is omitted.
FIG. 4 is a diagram for explaining the flow from accident recognition to a warning to the operator in the information processing system 1 according to the present embodiment. As shown in FIG. 4, when the information processing system 1 is applied to accident recognition, the recognition unit 111 of the position/periphery recognition unit 110 recognizes the presence or absence of an accident using the sensor data input from the sensor unit 101 or the like. Accidents in the present embodiment may include accidents involving the device 100 itself and accidents that occur around the device 100 and are observed by the device 100 as a third party.
FIG. 5 is a flowchart showing an example of the operation flow of the information processing system 1 according to the present embodiment. As shown in FIG. 5, in the present embodiment, steps S102 and S103 of the operation described with reference to FIG. 3 in the first embodiment are replaced with steps S202 and S203.
As described above, according to the present embodiment, the presence or absence of an accident can be accurately recognized based on one or more pieces of sensor data input from the sensor unit 101 or the like, so current and future safety at the site can be improved. Other configurations, operations, and effects may be the same as in the embodiments described above, so detailed description is omitted here.
Next, a third embodiment using the information processing system 1 described above will be described in detail with reference to the drawings. The third embodiment describes a case where near-miss states are recognized using the information processing system 1, thereby improving current and future safety at the site. A near-miss state in this description refers to a situation one step short of an accident (also referred to as a situation leading to an accident): one that has not resulted in an accident, but that would make a person flinch or gasp. In the following description, configurations, operations, and effects similar to those of the embodiments described above are cited, and redundant description is omitted.
FIG. 6 is a diagram for explaining the flow from near-miss recognition to a warning to the operator in the information processing system 1 according to the present embodiment. As shown in FIG. 6, when the information processing system 1 is applied to near-miss recognition, the recognition unit 111 of the position/periphery recognition unit 110 recognizes a near-miss state of the device itself or its surroundings using the sensor data input from the sensor unit 101 or the like.
FIG. 7 is a flowchart showing an example of the operation flow of the information processing system 1 according to the present embodiment. As shown in FIG. 7, in the present embodiment, steps S102 and S103 of the operation described with reference to FIG. 3 in the first embodiment are replaced with steps S302 and S303.
As described above, according to the present embodiment, whether or not a near-miss state exists can be accurately recognized based on one or more pieces of sensor data input from the sensor unit 101 or the like, so current and future safety at the site can be improved. Other configurations, operations, and effects may be the same as in the embodiments described above, so detailed description is omitted here.
Next, a fourth embodiment using the information processing system 1 described above will be described in detail with reference to the drawings. The fourth embodiment describes a case where dangerous states are recognized using the information processing system 1, thereby improving current and future safety at the site. A dangerous state in this description may be the likelihood that an accident will occur given the behavior or situation of the device 100 itself or of surrounding people and things, in other words, the likelihood (also referred to as danger) that harm will come to targets to be protected, such as the device 100 itself or surrounding people and things. In the following description, configurations, operations, and effects similar to those of the embodiments described above are cited, and redundant description is omitted.
FIG. 8 is a diagram for explaining the flow from dangerous-state recognition to a warning to the operator in the information processing system 1 according to the present embodiment. As shown in FIG. 8, when the information processing system 1 is applied to dangerous-state recognition, the recognition unit 111 of the position/periphery recognition unit 110 recognizes a dangerous state of the device itself or its surroundings using the sensor data input from the sensor unit 101 or the like.
FIG. 9 is a flowchart showing an example of the operation flow of the information processing system 1 according to the present embodiment. As shown in FIG. 9, in the present embodiment, steps S102, S103, and S105 of the operation described with reference to FIG. 3 in the first embodiment are replaced with steps S402, S403, and S405.
As described above, according to the present embodiment, whether or not a dangerous state exists can be accurately recognized based on one or more pieces of sensor data input from the sensor unit 101 or the like, so current and future safety at the site can be improved. Other configurations, operations, and effects may be the same as in the embodiments described above, so detailed description is omitted here.
Next, a fifth embodiment using the information processing system 1 described above will be described in detail with reference to the drawings. The fifth embodiment describes a case where the movement of people, things, areas, and the like around the device 100 is recognized or predicted using the information processing system 1, thereby improving current and future safety at the site. In the following description, configurations, operations, and effects similar to those of the embodiments described above are cited, and redundant description is omitted.
FIG. 10 is a diagram for explaining the flow from motion recognition to notification (which may include a warning) to the operator in the information processing system 1 according to the present embodiment. As shown in FIG. 10, when the information processing system 1 is applied to motion recognition and prediction, the recognition unit 111 of the position/periphery recognition unit 110 recognizes and/or predicts, using the sensor data input from the sensor unit 101 or the like, the movement of objects belonging to a specific area or of objects within a predetermined range centered on the device itself. Each device 100 is also provided with an object database 512.
FIG. 11 is a flowchart showing an example of the operation flow of the information processing system 1 according to the present embodiment. As shown in FIG. 11, in the present embodiment, the information processing system 1 first selects the area to be monitored. Specifically, for example, as in step S101 of FIG. 3, sensor data acquired by the sensor unit 101 or the like is first input to the recognition unit 111 of the position/periphery recognition unit 110 (step S501).
As described above, according to the present embodiment, an object or area to be monitored is selected based on one or more pieces of sensor data input from the sensor unit 101 or the like, and the movement of the selected target can be monitored, so current and future safety at the site can be improved. Other configurations, operations, and effects may be the same as in the embodiments described above, so detailed description is omitted here.
Next, a sixth embodiment using the information processing system 1 described above will be described in detail with reference to the drawings. The sixth embodiment describes a case where operator fatigue is recognized using the information processing system 1, thereby improving current and future safety at the site. In the following description, configurations, operations, and effects similar to those of the embodiments described above are cited, and redundant description is omitted.
FIG. 12 is a diagram for explaining the flow from operator fatigue recognition to a warning to the operator in the information processing system 1 according to the present embodiment. As shown in FIG. 12, when the information processing system 1 is applied to operator fatigue recognition, the recognition unit 111 of the position/periphery recognition unit 110 recognizes the degree of operator fatigue using the sensor data input from the sensor unit 101 or the like.
FIG. 13 is a flowchart showing an example of the operation flow of the information processing system 1 according to the present embodiment. As shown in FIG. 13, in the present embodiment, steps S102 and S103 of the operation described with reference to FIG. 3 in the first embodiment are replaced with steps S602 and S603.
As described above, according to the present embodiment, the degree of operator fatigue can be recognized, and a warning issued, based on one or more pieces of sensor data input from the sensor unit 101 or the like, so current and future safety at the site can be improved. Other configurations, operations, and effects may be the same as in the embodiments described above, so detailed description is omitted here.
Next, a seventh embodiment using the information processing system 1 described above will be described in detail with reference to the drawings. The seventh embodiment illustrates a case where, in the embodiments described above, the recognition unit 111 executes various recognition processes (which may include semantic segmentation and the like) using, as additional input, information about the sensor unit 101 (hereinafter also referred to as attribute information) such as camera height, angle, and FoV (field of view), in addition to sensing data such as image data, depth image data, IMU data, and GNSS data. In the following description, configurations, operations, and effects similar to those of the embodiments described above are cited, and redundant description is omitted.
FIG. 14 is a diagram for explaining the flow from recognition processing to a warning to the operator in the information processing system 1 according to the present embodiment. As shown in FIG. 14, when recognition that takes attribute information into account is applied to the information processing system 1, the recognition unit 111 of the position/periphery recognition unit 110 executes recognition processing using, as input, the attribute information of the sensor unit 101 or the like in addition to the sensor data input from the sensor unit 101 or the like.
FIG. 15 is a flowchart showing an example of the operation flow of the information processing system 1 according to the present embodiment. As shown in FIG. 15, in the present embodiment, in an operation similar to that described with reference to FIG. 3 in the first embodiment, step S701 is added after step S101, and steps S102 and S103 are replaced with steps S702 and S703.
As described above, according to the present embodiment, various recognition processes are executed based on the attribute information of each sensor unit 101 or the like in addition to one or more pieces of sensor data input from the sensor unit 101 or the like, so more accurate recognition can be performed. This makes it possible to further improve current and future safety at the site. Other configurations, operations, and effects may be the same as in the embodiments described above, so detailed description is omitted here.
Next, an eighth embodiment using the information processing system 1 described above will be described in detail with reference to the drawings. The eighth embodiment illustrates a case where, in the embodiments described above, the intensity of the warning (warning level) is changed according to the object recognized by the recognition unit 111 or its type. In the following description, configurations, operations, and effects similar to those of the embodiments described above are cited, and redundant description is omitted.
FIG. 16 is a diagram for explaining the flow from recognition processing to a warning to the operator in the information processing system 1 according to the present embodiment. As shown in FIG. 16, when the information processing system 1 changes the warning level according to the object or its type, the recognition unit 111 of the position/periphery recognition unit 110 issues a warning to the operator at the warning level set for each object or its type. Each device 100 is also provided with an attribute database 812 for managing the warning level set for each object or its type.
FIG. 17 is a flowchart showing an example of the operation flow of the information processing system 1 according to the present embodiment. As shown in FIG. 17, in the present embodiment, as in the fifth embodiment, the information processing system 1 first selects the area to be monitored. Specifically, for example, as in step S101 of FIG. 3, sensor data acquired by the sensor unit 101 or the like is first input to the recognition unit 111 of the position/periphery recognition unit 110 (step S801).
As described above, according to the present embodiment, a warning level is set for each object or its type based on one or more pieces of sensor data input from the sensor unit 101 or the like, and a warning is issued to the operator at the set level, so the operator can know more precisely what situation the targets to be protected are in. This makes it possible to improve current and future safety at the site. Other configurations, operations, and effects may be the same as in the embodiments described above, so detailed description is omitted here.
Next, a ninth embodiment using the information processing system 1 described above will be described in detail with reference to the drawings. The eighth embodiment described above illustrated changing the warning level for each object or its type. In contrast, the ninth embodiment illustrates excluding a specific object, or objects of the same type, from warning targets. In the following description, configurations, operations, and effects similar to those of the embodiments described above are cited, and redundant description is omitted.
FIG. 18 is a diagram for explaining the flow from recognition processing to a warning to the operator in the information processing system 1 according to the present embodiment. As shown in FIG. 18, when the information processing system 1 excludes a specific object or objects of the same type from warning targets, the recognition unit 111 of the position/periphery recognition unit 110 executes an area exclusion process that determines, according to the object identified in the recognition result or its type, whether a warning is necessary for the target object or area. Each device 100 is also provided with an exclusion database 912 for excluding objects or their types from warning targets.
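The area-exclusion check just described can be illustrated as a simple lookup; the database contents and type names below are assumptions for illustration, not from the disclosure:

```python
# Hypothetical exclusion database: True means excluded from warnings.
EXCLUSION_DB = {"own_crane": True, "person": False}

def should_warn(obj_type, exclusion_db=EXCLUSION_DB):
    """Warn unless the object or its type is registered as excluded.
    Unknown types are warned about by default."""
    return not exclusion_db.get(obj_type, False)
```

In the system described here, the corresponding table would live in the exclusion database 912 and be consulted before any warning is issued.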
FIG. 19 is a flowchart showing an example of the operation flow of the information processing system 1 according to the present embodiment. As shown in FIG. 19, in the present embodiment, as in the fifth embodiment, the information processing system 1 first selects the area to be monitored. Specifically, for example, as in step S101 of FIG. 3, sensor data acquired by the sensor unit 101 or the like is first input to the recognition unit 111 of the position/periphery recognition unit 110 (step S901).
As described above, according to the present embodiment, objects or their types to be excluded from warning targets can be set based on one or more pieces of sensor data input from the sensor unit 101 or the like, so the operator can receive warnings that focus more precisely on the targets to be protected. This makes it possible to improve current and future safety at the site. Other configurations, operations, and effects may be the same as in the embodiments described above, so detailed description is omitted here.
Next, a tenth embodiment using the information processing system 1 described above will be described in detail with reference to the drawings. The tenth embodiment illustrates a case where, when an object recognized by the recognition unit 111, or an object of the same type, is a dangerous object or a dangerous area, its approach is detected and the operator is notified. In the following description, configurations, operations, and effects similar to those of the embodiments described above are cited, and redundant description is omitted.
FIG. 20 is a diagram for explaining the flow from recognition processing to a warning to the operator in the information processing system 1 according to the present embodiment. As shown in FIG. 20, when the information processing system 1 notifies the operator of the approach of a specific object or an object of the same type, the recognition unit 111 of the position/periphery recognition unit 110 executes recognition processing that recognizes the approach of objects or areas that, among the objects identified in the recognition result or objects of the same type, have been designated by the operator as dangerous objects or dangerous areas, that is, set as targets of approach monitoring. Each device 100 is also provided with an approach monitoring database 1012 for registering the objects, or their types, whose approach to the device 100 is to be monitored.
FIG. 21 is a flowchart showing an example of the operation flow of the information processing system 1 according to the present embodiment. As shown in FIG. 21, in the present embodiment, as in the fifth embodiment, the information processing system 1 first selects the area to be monitored. Specifically, for example, as in step S101 of FIG. 3, sensor data acquired by the sensor unit 101 or the like is first input to the recognition unit 111 of the position/periphery recognition unit 110 (step S1001).
As described above, according to the present embodiment, the operator can be notified, based on one or more pieces of sensor data input from the sensor unit 101 or the like, that a dangerous object or dangerous area designated by the operator has approached the device 100, so the operator can more reliably secure the safety of the device 100 and of themselves. This makes it possible to improve current and future safety at the site. Other configurations, operations, and effects may be the same as in the embodiments described above, so detailed description is omitted here.
Next, an eleventh embodiment using the information processing system 1 described above will be described in detail with reference to the drawings. In the following description, configurations, operations, and effects similar to those of the embodiments described above are cited, and redundant description is omitted.
FIG. 22 is a diagram for explaining the flow from recognition processing to a warning to the operator in the information processing system 1 according to the present embodiment. As shown in FIG. 22, when the sensor data acquired by the sensor unit 101 or the like in the information processing system 1 is used for training or re-learning of the learning model, the recognition unit 111 of the position/periphery recognition unit 110 executes an extraction process that extracts, from the sensor data, the extracted information to be used for training and re-learning of the learning model. For example, when the sensor data is image data, depth image data, or the like, the recognition unit 111 extracts from the sensor data a region enclosed by a bounding box or a free region labeled by semantic segmentation (hereinafter collectively referred to as a region of interest), and uploads the extracted region of interest to the learning unit 201 of the cloud 200. At that time, various information associated with the region of interest, such as object information, attribute information, exclusion information, and approach caution information, may also be uploaded to the learning unit 201, which makes it possible to further improve the performance and functionality of the learning model.
FIG. 23 is a flowchart showing an example of the operation flow of the information processing system 1 according to the present embodiment. As shown in FIG. 23, in the present embodiment, sensor data acquired by the sensor unit 101 or the like is first input to the recognition unit 111 of the position/periphery recognition unit 110 (step S1101).
As described above, according to the present embodiment, training and re-learning of the learning model can be executed using the extracted information extracted from the sensor data, so the learning model can be trained and re-trained efficiently and effectively. Configuring the recognition unit 111 with a learning model trained and re-trained in this way makes it possible to improve current and future safety at the site. Other configurations, operations, and effects may be the same as in the embodiments described above, so detailed description is omitted here.
The position/periphery recognition unit 110, device management unit 120, learning unit 201, site server 300, and the like according to the embodiments described above can be realized by, for example, a computer 1000 configured as shown in FIG. 24. FIG. 24 is a hardware configuration diagram showing an example of a computer 1000 that realizes the functions of the position/periphery recognition unit 110, the device management unit 120, the learning unit 201, the site server 300, and the like. The computer 1000 has a CPU 1100, a RAM 1200, a ROM (Read Only Memory) 1300, an HDD (Hard Disk Drive) 1400, a communication interface 1500, and an input/output interface 1600. The units of the computer 1000 are connected by a bus 1050.
(1)
An information processing system for ensuring the safety of a site where heavy machinery is introduced, comprising:
one or more sensor units that are mounted on a device placed at the site and detect the situation at the site;
a recognition unit that recognizes the situation at the site based on sensor data acquired by the one or more sensor units; and
a device management unit that manages the device based on a recognition result by the recognition unit.
(2)
The recognition unit includes a learning model using a neural network.
The information processing system according to (1) above.
(3)
The one or more sensor units include at least one of an image sensor, a distance measuring sensor, an EVS (Event-based Vision Sensor), an inertial sensor, a position sensor, a sound sensor, an air pressure sensor, a water pressure sensor, an illuminance sensor, a temperature sensor, a humidity sensor, an infrared sensor, and a wind direction and speed sensor.
The information processing system according to (1) or (2) above.
(4)
The device management unit notifies the operator of the device of the recognition result, or executes control to issue a warning toward the operator based on the recognition result.
The information processing system according to any one of (1) to (3) above.
(5)
The recognition unit recognizes an earthquake based on the sensor data.
The information processing system according to any one of (1) to (4) above.
(6)
The recognition unit recognizes an accident based on the sensor data.
The information processing system according to any one of (1) to (4) above.
(7)
The recognition unit recognizes a situation leading to an accident based on the sensor data.
The information processing system according to any one of (1) to (4) above.
(8)
The recognition unit recognizes the likelihood that an accident will occur based on the sensor data.
The information processing system according to any one of (1) to (4) above.
(9)
The recognition unit recognizes or predicts the movement of an object or area around the device based on the sensor data.
The information processing system according to any one of (1) to (4) above.
(10)
The recognition unit recognizes the fatigue of an operator who operates the device based on the sensor data.
The information processing system according to any one of (1) to (4) above.
(11)
The recognition unit recognizes the fatigue of the operator based on the operating time of the device in addition to the sensor data.
The information processing system according to (10) above.
(12)
The recognition unit recognizes the situation at the site based on attribute information of the one or more sensor units in addition to the sensor data.
The information processing system according to any one of (1) to (11) above.
(13)
The recognition unit executes a first recognition process for recognizing an object or area existing around the device based on the sensor data, and a second recognition process for recognizing the situation at the site based on the sensor data, and
the device management unit executes control to issue, based on a recognition result of the second recognition process, a warning of an intensity corresponding to the object or area recognized by the first recognition process toward the operator of the device.
The information processing system according to any one of (1) to (12) above.
(14)
The system further comprises a holding unit that holds the intensity of the warning for each object or area,
the device management unit causes the operator to set the intensity of the warning for the object or area recognized by the first recognition process,
the holding unit holds the intensity of the warning for each object or area set by the operator, and
the device management unit executes control to issue, based on a recognition result of the second recognition process, a warning toward the operator of the device according to the intensity of the warning for each object or area held in the holding unit.
The information processing system according to (13) above.
(15)
The system further comprises a holding unit that holds, for each object or area, exclusion information indicating whether to exclude the object or area from warning targets,
the recognition unit executes a first recognition process for recognizing an object or area existing around the device based on the sensor data, and a second recognition process for recognizing the situation at the site based on the sensor data, and
the device management unit does not execute control to issue a warning regarding the object or area recognized in the first recognition process when that object or area is excluded from warning targets in the exclusion information held in the holding unit.
The information processing system according to any one of (1) to (12) above.
(16)
The recognition unit executes a first recognition process for recognizing an object or area existing around the device based on the sensor data, and a second recognition process for recognizing the approach of the object or area to the device based on the sensor data, and
the device management unit executes control to issue a notification toward the operator of the device when, based on a recognition result of the second recognition process, the object or area recognized by the first recognition process is approaching the device.
The information processing system according to any one of (1) to (12) above.
(17)
The system further comprises a learning unit that trains or re-trains the learning model,
the recognition unit executes an extraction process for extracting, from the sensor data, extracted information that is a part of the sensor data, and transmits the extracted information extracted by the extraction process to the learning unit, and
the learning unit trains or re-trains the learning model using the extracted information received from the recognition unit.
The information processing system according to (2) above.
(18)
An information processing method for ensuring the safety of a site where heavy machinery is introduced, comprising:
a recognition step of recognizing the situation at the site based on sensor data acquired by one or more sensor units that are mounted on a device placed at the site and detect the situation at the site; and
a device management step of managing the device based on a recognition result of the recognition step.
100 Device
101, 104, 107 Sensor unit
102 Image sensor
103, 106, 109 Signal processing unit
105 Inertial sensor
108 Position sensor
110 Position/periphery recognition unit
111 Recognition unit
120 Device management unit
131 Monitor
132 User interface
133 Output unit
134 Device control unit
135 Operation system
512 Object database
812 Attribute database
912 Exclusion database
1012 Approach monitoring database
Claims (18)
- An information processing system for ensuring safety at a site where heavy machinery is deployed, the system comprising: one or more sensor units that are mounted on a device placed at the site and detect a situation of the site; a recognition unit that recognizes the situation of the site based on sensor data acquired by the one or more sensor units; and a device management unit that manages the device based on a recognition result of the recognition unit.
- The information processing system according to claim 1, wherein the recognition unit includes a learning model using a neural network.
- The information processing system according to claim 1, wherein the one or more sensor units include at least one of an image sensor, a ranging sensor, an EVS (Event-based Vision Sensor), an inertial sensor, a position sensor, a sound sensor, a barometric pressure sensor, a water pressure sensor, an illuminance sensor, a temperature sensor, a humidity sensor, an infrared sensor, and a wind direction/wind speed sensor.
- The information processing system according to claim 1, wherein the device management unit executes control to notify an operator of the device of the recognition result, or to issue a warning to the operator based on the recognition result.
- The information processing system according to claim 1, wherein the recognition unit recognizes an earthquake based on the sensor data.
- The information processing system according to claim 1, wherein the recognition unit recognizes an accident based on the sensor data.
- The information processing system according to claim 1, wherein the recognition unit recognizes, based on the sensor data, a situation that may lead to an accident.
- The information processing system according to claim 1, wherein the recognition unit recognizes, based on the sensor data, how likely an accident is to occur.
- The information processing system according to claim 1, wherein the recognition unit recognizes or predicts, based on the sensor data, a motion of an object or region around the device.
- The information processing system according to claim 1, wherein the recognition unit recognizes, based on the sensor data, fatigue of an operator operating the device.
- The information processing system according to claim 10, wherein the recognition unit recognizes the fatigue of the operator based on an operating time of the device in addition to the sensor data.
- The information processing system according to claim 1, wherein the recognition unit recognizes the situation of the site based on attribute information of the one or more sensor units in addition to the sensor data.
- The information processing system according to claim 1, wherein the recognition unit executes a first recognition process of recognizing, based on the sensor data, an object or region existing around the device, and a second recognition process of recognizing the situation of the site based on the sensor data, and the device management unit executes control to issue, to an operator of the device, a warning of a strength corresponding to the object or region recognized in the first recognition process, based on a recognition result of the second recognition process.
- The information processing system according to claim 13, further comprising a holding unit that holds a warning strength for each object or region, wherein the device management unit causes the operator to set a warning strength for the object or region recognized in the first recognition process, the holding unit holds the warning strength for each object or region set by the operator, and the device management unit executes control to issue, to the operator of the device, a warning corresponding to the warning strength for each object or region held in the holding unit, based on the recognition result of the second recognition process.
- The information processing system according to claim 1, further comprising a holding unit that holds, for each object or region, exclusion information indicating whether the object or region is to be excluded from warning targets, wherein the recognition unit executes a first recognition process of recognizing, based on the sensor data, an object or region existing around the device, and a second recognition process of recognizing the situation of the site based on the sensor data, and the device management unit does not execute control to issue a warning regarding the object or region recognized in the first recognition process when that object or region is excluded from warning targets in the exclusion information held in the holding unit.
- The information processing system according to claim 1, wherein the recognition unit executes a first recognition process of recognizing, based on the sensor data, an object or region existing around the device, and a second recognition process of recognizing, based on the sensor data, an approach of the object or region to the device, and the device management unit executes control to issue a warning to an operator of the device when, based on a recognition result of the second recognition process, the object or region recognized in the first recognition process is approaching the device.
- The information processing system according to claim 2, further comprising a learning unit that trains or retrains the learning model, wherein the recognition unit executes an extraction process of extracting, from the sensor data, extraction information that is a part of the sensor data, and transmits the extraction information extracted by the extraction process to the learning unit, and the learning unit trains or retrains the learning model using the extraction information received from the recognition unit.
- An information processing method for ensuring safety at a site where heavy machinery is deployed, the method comprising: a recognition step of recognizing a situation of the site based on sensor data acquired by one or more sensor units that are mounted on a device placed at the site and detect the situation of the site; and a device management step of managing the device based on a recognition result of the recognition step.
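Claims 13 to 16 together sketch a warning policy: a first recognition process identifies objects or regions around the machine; a holding unit stores an operator-set warning strength per object (claim 14) and per-object exclusion flags (claim 15); and the management unit issues a warning, at the held strength, when a second recognition process detects a triggering situation such as an approach (claim 16). Below is a minimal sketch of that decision logic under those assumptions; the dictionaries stand in for the exclusion database (912) and the held warning strengths, and all names and the default strength are hypothetical.

```python
# Sketch of the per-object warning policy of claims 13-16.
# The two tables mirror the holding unit's warning strengths and the
# exclusion database (912); contents and names are illustrative.

warning_strength = {"worker": "high", "cone": "low"}   # claim 14: set by operator
excluded = {"cone"}                                    # claim 15: exclusion info

def decide_warning(obj, approaching):
    # `obj` comes from the first recognition process; `approaching`
    # from the second recognition process (claim 16).
    if obj in excluded:
        return None  # excluded objects never trigger a warning (claim 15)
    if approaching:
        # Unknown objects fall back to a default strength (an assumption).
        return warning_strength.get(obj, "medium")
    return None

print(decide_warning("worker", approaching=True))  # high
print(decide_warning("cone", approaching=True))    # None (excluded)
```

The key design point the claims describe is that exclusion takes precedence over detection: an approaching but excluded object produces no warning at all, rather than a weakened one.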
Priority Applications (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP21875134.5A EP4224452A4 (en) | 2020-09-29 | 2021-09-09 | INFORMATION PROCESSING SYSTEM AND INFORMATION PROCESSING METHOD |
CN202180065033.2A CN116250027A (zh) | 2020-09-29 | 2021-09-09 | 信息处理系统和信息处理方法 |
JP2022553754A JPWO2022070832A1 (ja) | 2020-09-29 | 2021-09-09 | |
CA3190757A CA3190757A1 (en) | 2020-09-29 | 2021-09-09 | Information processing system and information processing method |
KR1020237008828A KR20230074479A (ko) | 2020-09-29 | 2021-09-09 | 정보 처리 시스템 및 정보 처리 방법 |
BR112023005307A BR112023005307A2 (pt) | 2020-09-29 | 2021-09-09 | Sistema de processamento de informação e método de processamento de informação |
US18/028,428 US20240034325A1 (en) | 2020-09-29 | 2021-09-09 | Information processing system and information processing method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202063084781P | 2020-09-29 | 2020-09-29 | |
US63/084,781 | 2020-09-29 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022070832A1 true WO2022070832A1 (ja) | 2022-04-07 |
Family
ID=80950394
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/033221 WO2022070832A1 (ja) | 2020-09-29 | 2021-09-09 | 情報処理システム及び情報処理方法 |
Country Status (9)
Country | Link |
---|---|
US (1) | US20240034325A1 (ja) |
EP (1) | EP4224452A4 (ja) |
JP (1) | JPWO2022070832A1 (ja) |
KR (1) | KR20230074479A (ja) |
CN (1) | CN116250027A (ja) |
BR (1) | BR112023005307A2 (ja) |
CA (1) | CA3190757A1 (ja) |
TW (1) | TW202232448A (ja) |
WO (1) | WO2022070832A1 (ja) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012203677A (ja) * | 2011-03-25 | 2012-10-22 | Penta Ocean Construction Co Ltd | Safety management system |
JP2013029523A (ja) * | 2012-10-05 | 2013-02-07 | Yupiteru Corp | In-vehicle electronic device and program |
JP2013159930A (ja) * | 2012-02-02 | 2013-08-19 | Sumitomo Heavy Ind Ltd | Surroundings monitoring device |
JP2016076037A (ja) * | 2014-10-03 | 2016-05-12 | トヨタ自動車株式会社 | Vehicle information presentation device |
JP2018053537A (ja) * | 2016-09-28 | 2018-04-05 | 日立建機株式会社 | Work machine |
JP2020092447A (ja) | 2015-11-30 | 2020-06-11 | 住友重機械工業株式会社 | Excavator |
JP2020119031A (ja) * | 2019-01-18 | 2020-08-06 | 日立建機株式会社 | Work machine |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6581139B2 (ja) * | 2017-03-31 | 2019-09-25 | 日立建機株式会社 | Surroundings monitoring device for work machine |
JPWO2019031431A1 (ja) * | 2017-08-08 | 2020-08-13 | 住友重機械工業株式会社 | Excavator, excavator assistance device, and excavator management device |
WO2019111859A1 (ja) * | 2017-12-04 | 2019-06-13 | 住友重機械工業株式会社 | Periphery monitoring device, information processing terminal, information processing device, and information processing program |
US11227242B2 (en) * | 2018-08-28 | 2022-01-18 | Caterpillar Inc. | System and method for automatically triggering incident intervention |
JP7111641B2 (ja) * | 2019-03-14 | 2022-08-02 | 日立建機株式会社 | Construction machine |
- 2021-09-09 KR KR1020237008828A patent/KR20230074479A/ko unknown
- 2021-09-09 JP JP2022553754A patent/JPWO2022070832A1/ja active Pending
- 2021-09-09 WO PCT/JP2021/033221 patent/WO2022070832A1/ja active Application Filing
- 2021-09-09 CN CN202180065033.2A patent/CN116250027A/zh active Pending
- 2021-09-09 BR BR112023005307A patent/BR112023005307A2/pt unknown
- 2021-09-09 EP EP21875134.5A patent/EP4224452A4/en active Pending
- 2021-09-09 US US18/028,428 patent/US20240034325A1/en active Pending
- 2021-09-09 CA CA3190757A patent/CA3190757A1/en active Pending
- 2021-09-22 TW TW110135101A patent/TW202232448A/zh unknown
Non-Patent Citations (1)
Title |
---|
See also references of EP4224452A4 |
Also Published As
Publication number | Publication date |
---|---|
EP4224452A1 (en) | 2023-08-09 |
CA3190757A1 (en) | 2022-04-07 |
TW202232448A (zh) | 2022-08-16 |
JPWO2022070832A1 (ja) | 2022-04-07 |
BR112023005307A2 (pt) | 2023-04-25 |
EP4224452A4 (en) | 2024-03-06 |
KR20230074479A (ko) | 2023-05-30 |
CN116250027A (zh) | 2023-06-09 |
US20240034325A1 (en) | 2024-02-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11307576B2 (en) | Interactions between vehicle and teleoperations system | |
JP6969756B2 (ja) | Operational management control of autonomous vehicles | |
US10745009B2 (en) | Electronic apparatus for determining a dangerous situation of a vehicle and method of operating the same | |
US10386836B2 (en) | Interactions between vehicle and teleoperations system | |
JP6898394B2 (ja) | Vehicle automatic driving control assistance method and apparatus, device, computer-readable storage medium, and vehicle-road cooperation system | |
US20230159062A1 (en) | Method and apparatus for controlling vehicle driving mode switching | |
US20220179418A1 (en) | Depart constraints implementation in autonomous vehicle routing | |
WO2021222375A1 (en) | Constraining vehicle operation based on uncertainty in perception and/or prediction | |
JP2024023534A (ja) | System and method for remote monitoring of vehicles, robots, or drones | |
JP2018538647A (ja) | Teleoperation system and method for trajectory modification of autonomous vehicles | |
CN107036600A (zh) | Systems and methods for autonomous vehicle navigation | |
US11514363B2 (en) | Using a recursive reinforcement model to determine an agent action | |
WO2019125276A1 (en) | Method and control arrangement in a surveillance system for monitoring a transportation system comprising autonomous vehicles | |
JP2024526037A (ja) | Method and system for remote assistance of autonomous agents | |
JP7057874B2 (ja) | Anti-theft techniques for autonomous vehicles transporting cargo | |
JP2023024956A (ja) | Simulation method used for autonomous driving vehicle and method of controlling autonomous driving vehicle | |
KR20230103002A (ko) | Safety management system for industrial sites | |
CN115769049A (zh) | Mapping system and method | |
WO2022070832A1 (ja) | Information processing system and information processing method | |
WO2019125277A1 (en) | Method and control arrangement in a transportation surveillance system | |
WO2024075477A1 (ja) | Evacuation information generation system, evacuation information generation device, autonomous traveling device, evacuation information generation method, and evacuation information generation program | |
US20230356754A1 (en) | Control Mode Selection And Transitions | |
JP2022166865A (ja) | Autonomous mobile device, server device, learning device, abnormality detection method, and program | |
CN117809434A (zh) | Vehicle emergency detection system, method, and computer program product | |
CN117195678A (zh) | Control-parameter-based search space for vehicle motion planning | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21875134 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2022553754 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 3190757 Country of ref document: CA |
|
WWE | Wipo information: entry into national phase |
Ref document number: 202317013433 Country of ref document: IN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 18028428 Country of ref document: US |
|
REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112023005307 Country of ref document: BR |
|
ENP | Entry into the national phase |
Ref document number: 112023005307 Country of ref document: BR Kind code of ref document: A2 Effective date: 20230322 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2021875134 Country of ref document: EP Effective date: 20230502 |