WO2023135738A1 - Information processing device, estimation system, estimation method, and estimation program - Google Patents

Information processing device, estimation system, estimation method, and estimation program

Info

Publication number
WO2023135738A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
image
acquisition unit
information processing
estimation
Prior art date
Application number
PCT/JP2022/001093
Other languages
French (fr)
Japanese (ja)
Inventor
俊仁 池西
Original Assignee
Mitsubishi Electric Corporation (三菱電機株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mitsubishi Electric Corporation
Priority to JP2022535944A (JPWO2023135738A1)
Priority to PCT/JP2022/001093 (WO2023135738A1)
Publication of WO2023135738A1


Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00: Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W40/00: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02: related to ambient conditions
    • B60W50/00: Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08: Interaction between the driver and the control system
    • B60W50/14: Means for informing the driver, warning the driver or prompting a driver intervention

Definitions

  • the present disclosure relates to an information processing device, an estimation system, an estimation method, and an estimation program.
  • In Patent Literature 1, risk is expressed as a value based on the possibility of a user encountering a predetermined contingency.
  • the purpose of this disclosure is to perform highly accurate estimation.
  • The information processing device estimates risk content for a first moving object that is present on the road and has a first imaging device.
  • The information processing device includes an acquisition unit that acquires a learned model, map information, a first image showing the condition of the road generated by the first imaging device, position information of the first moving object, and first information that is at least one of the speed and acceleration of the first moving object; and an estimation unit that estimates the risk content based on the learned model, the map information, the first image, the position information of the first moving object, and the first information.
  • FIG. 1 is a diagram showing an estimation system according to Embodiment 1;
  • FIG. 2 is a diagram illustrating hardware included in the information processing apparatus according to Embodiment 1;
  • FIG. 3 is a block diagram showing functions of the information processing apparatus according to Embodiment 1;
  • FIG. 4 is a flowchart showing an example of processing executed by the information processing apparatus according to Embodiment 1;
  • FIG. 5 is a diagram showing an estimation system according to Embodiment 2;
  • FIG. 6 is a block diagram showing functions of the information processing apparatus according to Embodiment 2;
  • FIG. 7 is a flowchart showing an example of processing executed by the information processing apparatus according to Embodiment 2;
  • FIG. 8 is a diagram showing an estimation system according to Embodiment 3;
  • FIG. 1 is a diagram showing an estimation system according to Embodiment 1.
  • the estimation system includes information processing device 100 and drive recorder 201 .
  • Information processing apparatus 100 and drive recorder 201 are connected via a network.
  • the network is a wired network or a wireless network.
  • the information processing device 100 is a device that executes an estimation method.
  • the information processing device 100 is a server.
  • FIG. 1 shows a car 200 existing on the road.
  • the car 200 may be a car capable of automatic driving.
  • the car 200 may be an ADAS (Advanced Driver-Assistance Systems) car.
  • the car 200 may be a PMV (personal mobility vehicle) or an AMR (Autonomous Mobile Robot).
  • the car 200 is also called a first moving object.
  • a car 200 has a drive recorder 201 .
  • This means both the case where the drive recorder 201 is installed in the vehicle 200 and the case where the drive recorder 201 is externally attached to the vehicle 200.
  • the drive recorder 201 is also called a first imaging device.
  • the information processing device 100 may be a device mounted on the vehicle 200 .
  • the drive recorder 201 takes images of road conditions. For example, the drive recorder 201 captures an image of road conditions in a range 202 . In addition, the drive recorder 201 may take an image of the situation of the road behind. The drive recorder 201 captures images to generate an image showing road conditions. The image is acquired by the information processing apparatus 100 .
  • the information processing device 100 estimates the risk content for the car 200 using the image.
  • the information processing apparatus 100 will be described in detail below.
  • FIG. 2 illustrates hardware included in the information processing apparatus according to the first embodiment.
  • the information processing device 100 has a processor 101, a volatile storage device 102, and a nonvolatile storage device 103.
  • the processor 101 controls the information processing apparatus 100 as a whole.
  • the processor 101 is a CPU (Central Processing Unit), FPGA (Field Programmable Gate Array), or the like.
  • Processor 101 may be a multiprocessor.
  • the information processing device 100 may have a processing circuit.
  • the volatile storage device 102 is the main storage device of the information processing device 100.
  • the volatile storage device 102 is a RAM (Random Access Memory).
  • the nonvolatile storage device 103 is an auxiliary storage device of the information processing device 100 .
  • the nonvolatile storage device 103 is a HDD (Hard Disk Drive) or an SSD (Solid State Drive).
  • FIG. 3 is a block diagram showing functions of the information processing apparatus according to the first embodiment.
  • the information processing device 100 has a storage unit 110 , an acquisition unit 120 , an estimation unit 130 and an output unit 140 .
  • the storage unit 110 may be implemented as a storage area secured in the volatile storage device 102 or the nonvolatile storage device 103 .
  • a part or all of the acquisition unit 120, the estimation unit 130, and the output unit 140 may be realized by a processing circuit. Also, part or all of the acquisition unit 120, the estimation unit 130, and the output unit 140 may be implemented as modules of a program executed by the processor 101.
  • For example, the program executed by the processor 101 is also called an estimation program.
  • the estimation program is recorded on a recording medium.
  • the storage unit 110 may store the learned model 111 and the map information 112.
  • the acquisition unit 120 acquires the learned model 111.
  • For example, the acquisition unit 120 acquires the learned model 111 from the storage unit 110.
  • Also, for example, the acquisition unit 120 acquires the learned model 111 from an external device. The illustration of the external device is omitted.
  • the acquisition unit 120 acquires the map information 112. For example, the acquisition unit 120 acquires the map information 112 from the storage unit 110 . Also, for example, the acquisition unit 120 acquires the map information 112 from an external device.
  • the acquisition unit 120 acquires an image showing road conditions generated by the drive recorder 201 .
  • the acquisition unit 120 may be expressed as acquiring the image obtained by the drive recorder 201 capturing the image.
  • the acquisition unit 120 acquires the image from the drive recorder 201 .
  • the image is also called the first image.
  • the acquisition unit 120 acquires the position information of the car 200.
  • the acquisition unit 120 acquires the position information from the drive recorder 201 .
  • the acquisition unit 120 acquires the position information from the communication device of the car 200 .
  • the position information is information obtained by a GPS (Global Positioning System) that the car 200 has.
  • the acquisition unit 120 acquires first information that is at least one information of the speed and acceleration of the vehicle 200 .
  • the acquisition unit 120 acquires first information from the drive recorder 201 .
  • the acquisition unit 120 acquires the first information from the communication device of the vehicle 200 .
  • the speed is a speed measured by a speed sensor that the vehicle 200 has.
  • the acceleration is the acceleration measured by an IMU (Inertial Measurement Unit) sensor that the vehicle 200 has.
  • the estimation unit 130 estimates risk details based on the learned model 111, the map information 112, the image showing the road conditions, the position information of the car 200, and the first information. For example, the estimation unit 130 estimates risk content based on the learned model 111 , the map information 112 , the image showing the road conditions, the position information of the vehicle 200 , the speed of the vehicle 200 , and the acceleration of the vehicle 200 . Specifically, the estimation unit 130 inputs the map information 112 , the image, the position information of the car 200 , the speed of the car 200 , and the acceleration of the car 200 to the learned model 111 . As a result, the learned model 111 outputs risk details.
  • the learned model 111 estimates the risk content based on the result of the identification processing of the object included in the image, the result of the arithmetic processing of the distance between the object and the car 200, and the like.
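The acquisition and estimation described above can be pictured as a small interface. This is a hedged sketch only: the names `VehicleState`, `estimate_risk`, and `model.predict` are illustrative assumptions, not the patent's actual implementation.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class VehicleState:
    """Inputs the acquisition unit collects for the first moving object."""
    image: bytes            # first image, generated by the drive recorder
    position: tuple         # (latitude, longitude), e.g. from GPS
    speed: Optional[float]  # m/s, from the speed sensor (may be absent)
    accel: Optional[float]  # m/s^2, from the IMU sensor (may be absent)


def estimate_risk(model, map_info, state: VehicleState) -> str:
    """Feed the acquired inputs to the learned model; the model returns
    the risk content (e.g. a manifest or latent risk description)."""
    features = {
        "map": map_info,
        "image": state.image,
        "position": state.position,
        "speed": state.speed,
        "accel": state.accel,
    }
    return model.predict(features)
```

Any object exposing a `predict` method over such a feature set could stand in for the learned model 111 in this sketch.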
  • the risk content is the content specified from one or more risk factors.
  • the risk content is at least one of a manifest risk and a latent risk.
  • For example, a manifest risk is a risk of a collision between the car 200 and a person included in the image who suddenly jumps out.
  • A latent risk is a risk whose existence is currently undetermined but that could materialize in the next few seconds.
  • the risk content may be at least one content of a static potential risk and a dynamic manifest risk.
  • a static potential risk is a risk arising from the relationship between a non-moving object such as a building and the vehicle 200 .
  • a static potential risk is the risk of a car 200 colliding with a person jumping out of a building.
  • a dynamic manifest risk is a risk arising from the relationship between a moving object such as a bus and the car 200 .
  • a dynamic manifest risk is the risk of a collision between a person getting off a bus and the car 200 .
  • the acquisition unit 120 may acquire a plurality of images. That is, the acquisition unit 120 may acquire the video.
  • the estimation unit 130 may estimate risk details using video. Also, in the following description, it is assumed that the speed and acceleration of the vehicle 200 are acquired.
  • the output unit 140 outputs the estimation result. For example, if the information processing device 100 is a server, the output unit 140 outputs the estimation result to the communication device of the car 200, and the car 200 then outputs the estimation result to a display that the car 200 has. Further, for example, when the information processing device 100 is mounted on the car 200, the output unit 140 outputs the estimation result to the display of the car 200. Specifically, the display of the car 200 displays "There is a risk of a collision between the person getting off the bus and the car 200." The estimation result may also be output by voice. In this way, the output processing of the output unit 140 allows the user present in the car 200 to recognize the risk content. Further, when the car 200 is capable of automatic driving, the car 200 can drive based on the estimation result.
  • Step S11 The acquisition unit 120 acquires an image showing road conditions.
  • Step S12 The acquisition unit 120 acquires position information of the car 200.
  • Step S13 The acquisition unit 120 acquires the speed and acceleration of the car 200.
  • Step S14 The acquisition unit 120 acquires the learned model 111 and the map information 112.
  • Step S15 The estimation unit 130 estimates the risk content based on the learned model 111, the map information 112, the image, the position information of the car 200, the speed of the car 200, and the acceleration of the car 200.
  • Step S16 The output unit 140 outputs the estimation result.
  • the processing of FIG. 4 may be executed each time the image, the position information of the car 200, the speed of the car 200, and the acceleration of the car 200 are acquired. Note that the order in which steps S11 to S14 are executed is not limited to the order shown in FIG.
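Steps S11 through S16 can be wired together roughly as follows. This is a sketch under assumptions: the `acquirer` object, its method names, and the `predict` interface are hypothetical, as the patent does not prescribe an API.

```python
def run_estimation_cycle(acquirer, model, map_info, output):
    """One pass of the Embodiment 1 flow (steps S11-S16)."""
    image = acquirer.get_image()          # S11: image showing road conditions
    position = acquirer.get_position()    # S12: position information of the car
    speed, accel = acquirer.get_motion()  # S13: speed and acceleration
    # S14: the learned model and map information are passed in by the caller
    risk = model.predict({                # S15: estimate the risk content
        "map": map_info, "image": image,
        "position": position, "speed": speed, "accel": accel,
    })
    output(risk)                          # S16: output the estimation result
    return risk
```

As noted above, the cycle could be re-run each time a new image, position, speed, and acceleration are acquired, and the acquisition steps need not run in this exact order.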
  • the information processing apparatus 100 uses the learned model 111 to estimate risk details. Therefore, the information processing apparatus 100 can perform highly accurate estimation. In addition, the information processing apparatus 100 can estimate the risk details using at least one information of the speed of the vehicle 200 and the acceleration of the vehicle 200 . However, the information processing apparatus 100 can estimate risk details with higher accuracy by using information on both the speed and the acceleration.
  • Embodiment 2 Next, Embodiment 2 will be described. In Embodiment 2, mainly matters different from Embodiment 1 will be described. In the second embodiment, descriptions of items common to the first embodiment are omitted.
  • In Embodiment 1, the case where estimation is performed based on information obtained from the vehicle 200 has been described.
  • Embodiment 2 describes a case where estimation is performed based on information obtained from vehicle 200 and information obtained from vehicles other than vehicle 200 .
  • FIG. 5 is a diagram showing an estimation system according to Embodiment 2.
  • the estimation system includes information processing device 100 a and drive recorder 201 .
  • FIG. 5 shows a car 200 and a car 300 on the road.
  • Vehicle 300 may be an ADAS vehicle.
  • vehicle 300 may be a PMV or an AMR.
  • Car 300 has a drive recorder. This means both the case where the drive recorder is installed in the vehicle 300 and the case where the drive recorder is externally attached to the vehicle 300.
  • the drive recorder is also called a second imaging device.
  • the drive recorder captures road conditions in range 301 .
  • an image showing road conditions is generated.
  • the image is acquired by the information processing device 100a.
  • FIG. 5 shows a car 300 as a car other than car 200 . That is, the number of vehicles other than the vehicle 200 in FIG. 5 is one. However, the number of cars other than car 200 may be two or more.
  • vehicles other than the vehicle 200 are also referred to as second moving bodies.
  • the car 300 is also called a second moving body.
  • FIG. 6 is a block diagram showing functions of the information processing apparatus according to Embodiment 2. Components in FIG. 6 that are the same as those shown in FIG. 3 are given the same reference numerals as in FIG. 3.
  • the information processing device 100a has a storage unit 110a, an acquisition unit 120a, and an estimation unit 130a.
  • the storage unit 110a may store the learned model 111a.
  • Acquisition unit 120a has almost the same function as acquisition unit 120. Therefore, descriptions of the common functions are omitted.
  • the acquisition unit 120a acquires the learned model 111a.
  • the acquisition unit 120a acquires the trained model 111a from the storage unit 110a or an external device.
  • the illustration of the external device is omitted.
  • the acquisition unit 120a acquires an image showing road conditions generated by a drive recorder possessed by one or more second moving bodies.
  • the acquisition unit 120a may be expressed as acquiring the image obtained by the drive recorder capturing the image.
  • For example, the acquisition unit 120a acquires an image showing road conditions generated by the drive recorder of the car 300.
  • Note that the acquisition unit 120a may acquire the image from the communication device of the car 300. That is, the acquisition unit 120a may acquire the image directly from the communication device of the car 300.
  • Alternatively, the acquisition unit 120a may acquire the image from the communication device of the car 300 via an external device. That is, the acquisition unit 120a may acquire the image indirectly from the communication device of the car 300.
  • the image acquired by the acquisition unit 120a is also referred to as a second image.
  • the acquisition unit 120a acquires position information of one or more second moving bodies. For example, the acquisition unit 120a acquires the position information of the car 300. Note that the acquisition unit 120a may acquire the position information directly from the communication device of the car 300, or may acquire it indirectly.
  • the acquisition unit 120a acquires second information, which is at least one of the speed and acceleration of one or more second moving bodies. For example, the acquisition unit 120a acquires second information that is at least one of the speed and acceleration of the vehicle 300 . Note that the acquisition unit 120 a may acquire the second information directly from the communication device of the vehicle 300 . The acquisition unit 120 a may indirectly acquire the second information from the communication device of the vehicle 300 .
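The direct/indirect acquisition alternatives described above amount to a simple fallback. The sketch below is illustrative only: the `get` interface and the use of `ConnectionError` to signal a failed direct acquisition are assumptions, not details from the patent.

```python
def acquire_with_fallback(vehicle_comm, external_device, key):
    """Acquire an item (an image, position information, or second
    information) directly from the vehicle's communication device;
    on failure, acquire it indirectly via an external device."""
    try:
        return vehicle_comm.get(key)
    except ConnectionError:
        return external_device.get(key)
```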
  • the estimation unit 130a estimates the risk content based on the learned model 111a, the map information 112, the image generated by the drive recorder 201, the position information of the vehicle 200, the first information, the images generated by the drive recorders of the one or more second moving bodies, the position information of the one or more second moving bodies, and the second information.
  • the estimating unit 130a uses the learned model 111a, the map information 112, the image generated by the drive recorder 201, the position information of the vehicle 200, the speed of the vehicle 200, the acceleration of the vehicle 200, and the Risk content is estimated based on the image generated by the drive recorder, the position information of the one or more second mobile bodies, the speed of the one or more second mobile bodies, and the acceleration of the one or more second mobile bodies.
  • In the following description, the one or more second moving bodies are assumed to be the car 300. It is also assumed that the speed and acceleration of the one or more second moving bodies are acquired.
  • Step S21 The acquisition unit 120a acquires an image showing road conditions.
  • the image is an image generated by the drive recorder 201 .
  • Step S22 The acquisition unit 120a acquires position information of the car 200.
  • Step S23 The acquisition unit 120a acquires the speed and acceleration of the car 200.
  • Step S24 The acquisition unit 120a acquires an image showing road conditions, the position information of the car 300, the speed of the car 300, and the acceleration of the car 300.
  • Note that the image is an image generated by the drive recorder of the car 300.
  • Step S25 The acquisition unit 120a acquires the learned model 111a and the map information 112.
  • Step S26 The estimation unit 130a generates the learned model 111a, the map information 112, the image generated by the drive recorder 201, the position information of the vehicle 200, the speed of the vehicle 200, the acceleration of the vehicle 200, and the The risk content is estimated based on the image, the position information of the car 300, the speed of the car 300, and the acceleration of the car 300.
  • Step S27 The output unit 140 outputs the estimation result. Note that the order in which steps S21 to S25 are executed is not limited to the order shown in FIG.
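One way to picture step S26 is to merge the first moving object's inputs with those of any number of second moving bodies before invoking the learned model. The field names and validation below are illustrative assumptions, not the patent's data format.

```python
def build_input_embodiment2(map_info, ego, second_bodies):
    """Assemble one model input for the Embodiment 2 flow.

    `ego` and each entry of `second_bodies` carry an image, position
    information, speed, and acceleration, as acquired in steps S21-S24.
    """
    required = {"image", "position", "speed", "accel"}
    for body in [ego, *second_bodies]:
        missing = required - body.keys()
        if missing:
            raise ValueError(f"missing fields: {missing}")
    return {"map": map_info, "ego": ego, "others": list(second_bodies)}
```

The same assembly works with zero second moving bodies, in which case the input degenerates to the Embodiment 1 case.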
  • the acquisition unit 120a may acquire a plurality of images generated by the drive recorder of the car 300. That is, the acquiring unit 120a may acquire the video generated by the drive recorder of the car 300.
  • The estimation unit 130a may estimate the risk content using the video.
  • the information processing device 100a estimates the risk content based on the information obtained from the drive recorder 201 and the information obtained from the drive recorder of the vehicle 300. For example, the information processing device 100a estimates the risk content based on information obtained from the range 202 and the range 301 in FIG. 5. Therefore, the information processing apparatus 100a can estimate the risk content with higher accuracy than in Embodiment 1.
  • Embodiment 3 Next, Embodiment 3 will be described. In the third embodiment, mainly matters different from the first embodiment will be described. In the third embodiment, descriptions of matters common to the first embodiment are omitted.
  • In Embodiment 1, the case where estimation is performed based on information obtained from the vehicle 200 has been described.
  • Embodiment 3 describes a case where estimation is performed based on information obtained from the vehicle 200 and information obtained from cameras present on the road.
  • FIG. 8 is a diagram showing an estimation system according to Embodiment 3.
  • the estimation system includes information processing device 100 b and drive recorder 201 .
  • FIG. 8 shows a car 200 present on the road.
  • FIG. 8 also shows a camera 400 present on the road.
  • FIG. 8 may be described as showing the camera 400 installed on the road.
  • camera 400 is a surveillance camera or a live camera.
  • Camera 400 captures the road conditions.
  • camera 400 captures road conditions in area 401 .
  • the image generated by the camera 400 is acquired by the information processing device 100b.
  • the camera 400 may store position information of the camera 400 .
  • FIG. 8 shows one camera. The number of cameras may be two or more.
  • FIG. 9 is a block diagram showing functions of the information processing apparatus according to Embodiment 3. Components in FIG. 9 that are the same as those shown in FIG. 3 are given the same reference numerals as in FIG. 3.
  • the information processing device 100b has a storage unit 110b, an acquisition unit 120b, and an estimation unit 130b.
  • the storage unit 110b may store the learned model 111b.
  • Acquisition unit 120b has almost the same function as acquisition unit 120. Therefore, descriptions of the common functions are omitted.
  • the acquisition unit 120b acquires the learned model 111b.
  • the acquisition unit 120b acquires the trained model 111b from the storage unit 110b or an external device.
  • the illustration of the external device is omitted.
  • the acquisition unit 120b acquires an image showing road conditions generated by one or more cameras.
  • the acquisition unit 120b may be expressed as acquiring the image obtained by imaging by one or more cameras.
  • the acquisition unit 120b acquires an image representing road conditions generated by the camera 400 .
  • the acquisition unit 120 b may acquire the image from the camera 400 . That is, the acquisition unit 120b may acquire the image directly from the camera 400.
  • the acquisition unit 120b may acquire the image from the camera 400 via an external device. That is, the acquisition unit 120b may indirectly acquire the image from the camera 400.
  • the image acquired by the acquisition unit 120b is also called a third image.
  • the acquisition unit 120b may acquire position information of one or more cameras.
  • the acquisition unit 120b may acquire the position information of the camera 400.
  • the acquisition unit 120 b may indirectly acquire the position information from the camera 400 .
  • one or more cameras may not store position information.
  • camera 400 may not store the position information of camera 400 .
  • In that case, the acquisition unit 120b cannot acquire the position information of the one or more cameras. Therefore, the estimation unit 130b estimates the positions of the one or more cameras.
  • the estimation unit 130b estimates the position of the camera 400 on the map indicated by the map information 112 based on the image generated by the camera 400 and the image generated by the drive recorder 201.
  • the information processing device 100b can obtain position information of one or more cameras.
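The patent does not specify how the estimation unit 130b matches the two images. A naive sketch, offered purely as an assumption, is to assign the camera the map position of the drive-recorder frame whose image is most similar to the camera image; the similarity function is left abstract.

```python
def estimate_camera_position(camera_image, recorder_frames, similarity):
    """Estimate a camera's map position from drive-recorder frames.

    `recorder_frames` is a list of (image, position_on_map) pairs taken
    by the drive recorder 201 at known positions. The position of the
    best-matching frame is returned as the estimated camera position.
    """
    if not recorder_frames:
        raise ValueError("no drive-recorder frames to match against")
    best_image, best_position = max(
        recorder_frames, key=lambda frame: similarity(camera_image, frame[0]))
    return best_position
```

In practice the similarity function might be feature matching or a learned embedding distance; that choice is outside what the document states.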
  • the estimation unit 130b uses the learned model 111b, the map information 112, the image generated by the drive recorder 201, the position information of the vehicle 200, the first information, the image generated by one or more cameras, and the position information of one or more cameras. Estimate the risk content based on
  • In the following description, the one or more cameras are assumed to be the camera 400. It is also assumed that the camera 400 stores the position information of the camera 400.
  • FIG. 10 is a flowchart illustrating an example of processing executed by the information processing apparatus according to Embodiment 3.
  • Step S31 The acquisition unit 120b acquires an image showing road conditions. Note that the image is an image generated by the drive recorder 201.
  • Step S32 The acquisition unit 120b acquires position information of the car 200.
  • Step S33 The acquisition unit 120b acquires the speed and acceleration of the car 200.
  • Step S34 The acquisition unit 120b acquires an image showing road conditions and the position information of the camera 400. Note that the image is an image generated by the camera 400.
  • Step S35 The acquisition unit 120b acquires the learned model 111b and the map information 112.
  • Step S36 The estimation unit 130b estimates the risk content based on the learned model 111b, the map information 112, the image generated by the drive recorder 201, the position information of the car 200, the speed of the car 200, the acceleration of the car 200, the image generated by the camera 400, and the position information of the camera 400.
  • Step S37 The output unit 140 outputs the estimation result. Note that the order in which steps S31 to S35 are executed is not limited to the order shown in FIG. 10.
  • the acquisition unit 120b may acquire a plurality of images generated by the camera 400. That is, the acquisition unit 120b may acquire the video generated by the camera 400.
  • The estimation unit 130b may estimate the risk content using the video.
  • the information processing device 100b estimates risk details based on the information obtained from the drive recorder 201 and the information obtained from the camera 400 .
  • the information processing device 100b estimates the risk content based on information obtained from the range 202 and the range 401 in FIG. 8. Therefore, the information processing apparatus 100b can estimate the risk content with higher accuracy than in Embodiment 1.
  • Embodiment 4 Next, Embodiment 4 will be described.
  • In Embodiment 4, mainly matters different from Embodiments 1 to 3 will be described.
  • Descriptions of matters common to Embodiments 1 to 3 are omitted.
  • In Embodiment 4, a case in which Embodiments 1 to 3 are combined will be described.
  • FIG. 11 is a diagram showing an estimation system according to Embodiment 4.
  • the estimation system includes an information processing device 100 c and a drive recorder 201 .
  • FIG. 11 shows a car 200 and a car 300 on the road.
  • a drive recorder that the car 300 has takes images of road conditions.
  • the drive recorder captures road conditions in range 301 .
  • An image showing road conditions is generated by the image pickup by the drive recorder. The image is acquired by the information processing device 100c.
  • FIG. 11 shows a camera 400 existing on the road.
  • Camera 400 captures the road conditions.
  • camera 400 captures road conditions in range 401 .
  • the image generated by the camera 400 is acquired by the information processing device 100c.
  • FIG. 12 is a block diagram showing functions of the information processing apparatus according to Embodiment 4. Components in FIG. 12 that are the same as those shown in FIG. 3 are assigned the same reference numerals as in FIG. 3.
  • the information processing device 100c has a storage unit 110c, an acquisition unit 120c, and an estimation unit 130c.
  • the storage unit 110c may store the learned model 111c.
  • Acquisition unit 120c has almost the same function as acquisition unit 120. Therefore, descriptions of the common functions are omitted.
  • the acquisition unit 120c acquires the learned model 111c.
  • the acquisition unit 120c acquires an image showing road conditions generated by the drive recorders of one or more second moving bodies. For example, the acquisition unit 120c acquires an image representing road conditions generated by the drive recorder of the car 300 .
  • the acquisition unit 120c acquires position information of one or more second moving bodies. For example, the acquisition unit 120c acquires the position information of the car 300.
  • The acquisition unit 120c acquires the second information. For example, the acquisition unit 120c acquires the speed and acceleration of the car 300.
  • the acquisition unit 120c acquires an image representing road conditions generated by one or more cameras. For example, the acquisition unit 120c acquires an image representing road conditions generated by the camera 400 .
  • the acquisition unit 120c may acquire position information of one or more cameras. For example, the acquisition unit 120c may acquire the position information of the camera 400.
  • Also, the estimation unit 130c may estimate the positions of one or more cameras.
  • the estimation unit 130c estimates the risk content based on the learned model 111c, the map information 112, the image generated by the drive recorder 201, the position information of the vehicle 200, the first information, the images generated by the drive recorders of the one or more second moving bodies, the position information of the one or more second moving bodies, the second information, the images generated by the one or more cameras, and the position information of the one or more cameras.
  • In the following description, the one or more second moving bodies are assumed to be the car 300.
  • The one or more cameras are assumed to be the camera 400. It is also assumed that the camera 400 stores its own position information.
  • (Step S41) The acquisition unit 120c acquires an image showing road conditions. Note that the image is an image generated by the drive recorder 201.
  • (Step S42) The acquisition unit 120c acquires the position information of the car 200.
  • (Step S43) The acquisition unit 120c acquires the speed and acceleration of the car 200.
  • (Step S44) The acquisition unit 120c acquires an image showing road conditions, the position information of the car 300, the speed of the car 300, and the acceleration of the car 300.
  • (Step S45) The acquisition unit 120c acquires an image showing road conditions and the position information of the camera 400. Note that the image is an image generated by the camera 400.
  • (Step S46) The acquisition unit 120c acquires the learned model 111c and the map information 112.
  • (Step S47) The estimation unit 130c estimates risk content based on the learned model 111c, the map information 112, the image generated by the drive recorder 201, the position information of the car 200, the speed of the car 200, the acceleration of the car 200, the image generated by the drive recorder of the car 300, the position information of the car 300, the speed of the car 300, the acceleration of the car 300, the image generated by the camera 400, and the position information of the camera 400.
  • (Step S48) The output unit 140 outputs the estimation result. Note that the order in which steps S41 to S46 are executed is not limited to the order shown in FIG. 13.
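The flow of steps S41 to S48 can be sketched in Python as below. This is a minimal illustration only: the container classes, the function names, and the toy model are assumptions made for the sketch, not part of the disclosure.

```python
from dataclasses import dataclass
from typing import Any, List

# Hypothetical containers for the inputs named in steps S41 to S46.
@dataclass
class VehicleObservation:
    image: Any           # image generated by a drive recorder (S41 / S44)
    position: tuple      # e.g. latitude and longitude (S42 / S44)
    speed: float         # speed in km/h (S43 / S44)
    acceleration: float  # acceleration in m/s^2 (S43 / S44)

@dataclass
class CameraObservation:
    image: Any           # image generated by a fixed camera (S45)
    position: tuple      # position stored by the camera (S45)

def estimate_risk(model, map_info, own: VehicleObservation,
                  others: List[VehicleObservation],
                  cameras: List[CameraObservation]) -> str:
    """Step S47: feed every acquired input to the learned model."""
    features = [map_info, own, *others, *cameras]
    return model(features)  # the learned model outputs the risk content

# Step S48: the estimation result is output.
result = estimate_risk(
    model=lambda feats: "risk of collision with a person getting off a bus",
    map_info={"road": "two-lane"},
    own=VehicleObservation("img200", (35.0, 139.0), 40.0, 0.2),
    others=[VehicleObservation("img300", (35.0, 139.1), 30.0, -0.1)],
    cameras=[CameraObservation("img400", (35.0, 139.2))],
)
print(result)
```

Because the inputs from the other vehicle and the fixed camera are simply appended to the feature list, the same sketch degenerates to the Embodiment 1 case when `others` and `cameras` are empty.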
  • As described above, the information processing device 100c estimates risk content based on information obtained from the drive recorder 201, information obtained from the drive recorder of the car 300, and information obtained from the camera 400.
  • In other words, the information processing device 100c estimates risk content based on information obtained from the range 202, the range 301, and the range 401 in FIG. 11. Therefore, the information processing device 100c can estimate risk content with higher accuracy than in the first to third embodiments.
  • Embodiment 5. Next, Embodiment 5 will be described. Embodiment 5 mainly describes matters different from Embodiments 1 to 4, and descriptions of matters common to Embodiments 1 to 4 are omitted.
  • FIG. 14 is a block diagram showing functions of the drive recorder of the fifth embodiment.
  • The drive recorder 500 is also called an information processing device.
  • The drive recorder 500 has a processor, a volatile storage device, and a non-volatile storage device.
  • The drive recorder 500 may have a processing circuit.
  • The drive recorder 500 may be regarded as the drive recorder 201.
  • The difference between the drive recorder 500 and the drive recorder 201 is that the drive recorder 500 does not transmit images to an information processing device (for example, a server).
  • The drive recorder 500 has the same functions as the information processing devices 100, 100a, 100b, and 100c. Therefore, the drive recorder 500 can estimate risk content using the images it generates.
  • The drive recorder 500 has a storage unit 510, an acquisition unit 520, an estimation unit 530, and an output unit 540.
  • The storage unit 510 may be implemented as a storage area secured in the volatile storage device or the non-volatile storage device of the drive recorder 500.
  • A part or all of the acquisition unit 520, the estimation unit 530, and the output unit 540 may be realized by a processing circuit included in the drive recorder 500.
  • A part or all of the acquisition unit 520, the estimation unit 530, and the output unit 540 may be implemented as modules of a program executed by a processor included in the drive recorder 500.
  • The storage unit 510 may store the learned model 511 and the map information 512.
  • The learned model 511 can perform estimation in the same way as the learned models 111, 111a, 111b, and 111c.
  • The map information 512 is the same as the map information 112.
  • The acquisition unit 520 has the same functions as the acquisition units 120, 120a, 120b, and 120c. For example, when the car 200 has the drive recorder 500, the acquisition unit 520 acquires the image generated by the drive recorder 500, the position information of the car 200, the speed of the car 200, and the acceleration of the car 200. Also, for example, the acquisition unit 520 can acquire the image generated by the drive recorder of the car 300, the position information of the car 300, the speed of the car 300, and the acceleration of the car 300. Also, for example, the acquisition unit 520 can acquire the image generated by the camera 400 and the position information of the camera 400. Since the acquisition unit 520 thus has the same functions as the acquisition units 120, 120a, 120b, and 120c, a detailed description of its functions is omitted.
  • The estimation unit 530 has the same functions as the estimation units 130, 130a, 130b, and 130c. Therefore, a detailed description of the functions of the estimation unit 530 is omitted.
  • The output unit 540 has the same functions as the output unit 140. Therefore, a detailed description of the functions of the output unit 540 is omitted.
  • The drive recorder 500 has the same functions as the information processing devices 100, 100a, 100b, and 100c. Therefore, the drive recorder 500 achieves the same effects as Embodiments 1 to 4.
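The on-device operation of the drive recorder 500, which acquires, estimates, and outputs without sending any image to a server, can be sketched as follows. The class name, the method names, and the stand-in model are illustrative assumptions, not taken from the disclosure.

```python
# Minimal sketch of Embodiment 5: the recorder itself runs the estimation
# instead of uploading images to an external information processing device.
class DriveRecorder500:
    def __init__(self, trained_model, map_info):
        self.model = trained_model  # corresponds to the learned model 511
        self.map_info = map_info    # corresponds to the map information 512

    def capture(self):
        # Stand-in for the camera: returns one image frame.
        return "frame"

    def estimate(self, position, speed, acceleration):
        # Acquisition (520), estimation (530), and output (540) all happen
        # on the device; the image never leaves the recorder.
        image = self.capture()
        return self.model(self.map_info, image, position, speed, acceleration)

recorder = DriveRecorder500(
    trained_model=lambda *inputs: "latent risk: person may run out of building",
    map_info={"area": "residential"},
)
print(recorder.estimate(position=(35.0, 139.0), speed=30.0, acceleration=0.0))
```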
  • FIG. 15 is a diagram illustrating an example of the process of generating a trained model.
  • FIG. 15 shows an image 601, an image 602, position information 603, map information 604, speed information 605, and acceleration information 606.
  • The image 601 is an image generated by the drive recorder of the vehicle to which the estimation result is output (hereinafter, the target vehicle).
  • The image 602 is an image generated by a drive recorder of a vehicle other than the target vehicle.
  • The image 602 may instead be an image generated by a surveillance camera.
  • The position information 603 is the position information of the target vehicle.
  • The position information 603 may also include position information of vehicles other than the target vehicle and of surveillance cameras.
  • The map information 604 is information indicating a map.
  • The speed information 605 is information indicating the speed of the target vehicle.
  • The speed information 605 may also include information indicating the speeds of vehicles other than the target vehicle.
  • The acceleration information 606 is information indicating the acceleration of the target vehicle. The acceleration information 606 may also include information indicating the acceleration of vehicles other than the target vehicle.
  • Scene classification is performed based on the image 601, the image 602, the position information 603, the map information 604, the speed information 605, and the acceleration information 606.
  • Scene classification may be done by a user or by a machine.
  • The content of the scene classification may be regarded as the risk content.
  • The image 602 does not have to be used for scene classification.
  • At least one of the speed information 605 and the acceleration information 606 may be used in scene classification.
  • Risk content is added to the information obtained by scene classification.
  • That is, labeling is performed.
  • The information obtained by labeling is used as teacher data in machine learning.
  • A trained model is generated by the machine learning.
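The generation process above, scene classification followed by labeling, with the labeled records used as teacher data for machine learning, can be sketched as follows. The scene names, labels, and the trivial classification rule are invented for illustration; in practice the classification may be done by a user or a machine.

```python
# Sketch of the FIG. 15 pipeline: inputs are grouped into scenes, each scene
# is labeled with its risk content, and the labeled records become teacher
# data for generating a trained model by machine learning.
def classify_scene(image, position, map_info, speed, acceleration):
    # A trivial rule stands in for the (human or machine) scene classifier.
    return "bus_stop" if "bus" in image else "residential_street"

RISK_LABELS = {
    # Labeling: risk content attached to each classified scene.
    "bus_stop": "dynamic overt risk: person getting off a bus",
    "residential_street": "static latent risk: person running out of a building",
}

def build_teacher_data(samples, map_info):
    teacher_data = []
    for image, position, speed, acceleration in samples:
        scene = classify_scene(image, position, map_info, speed, acceleration)
        teacher_data.append({"inputs": (image, position, speed, acceleration),
                             "label": RISK_LABELS[scene]})
    return teacher_data  # fed to a learning algorithm to produce the model

data = build_teacher_data(
    samples=[("bus at stop", (35.0, 139.0), 20.0, -0.5),
             ("empty street", (35.1, 139.1), 40.0, 0.0)],
    map_info={"area": "urban"},
)
print([d["label"] for d in data])
```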
  • 100, 100a, 100b, 100c information processing device, 101 processor, 102 volatile storage device, 103 non-volatile storage device, 110, 110a, 110b, 110c storage unit, 111, 111a, 111b, 111c learned model, 112 map information, 120, 120a, 120b, 120c acquisition unit, 130, 130a, 130b, 130c estimation unit, 140 output unit, 200 vehicle, 201 drive recorder, 202 range, 300 vehicle, 301 range, 400 camera, 401 range, 500 drive recorder, 510 storage unit, 511 learned model, 512 map information, 520 acquisition unit, 530 estimation unit, 540 output unit, 601 image, 602 image, 603 position information, 604 map information, 605 speed information, 606 acceleration information.


Abstract

An information processing device (100) estimates risk content for a vehicle (200) that is present on a road and has a drive recorder (201). The information processing device (100) includes an acquisition unit (120) that acquires a learned model (111), map information (112), an image generated by the drive recorder (201) and showing the condition of the road, position information of the vehicle (200), and first information that is at least one of the speed and the acceleration of the vehicle (200), and an estimation unit (130) that estimates the risk content on the basis of the learned model (111), the map information (112), the image, the position information of the vehicle (200), and the first information.

Description

Information processing device, estimation system, estimation method, and estimation program
The present disclosure relates to an information processing device, an estimation system, an estimation method, and an estimation program.
Many cars travel on the roads, and many people are present on them. Therefore, there is a high possibility that an accident will occur. By estimating the risk before an accident occurs, the accident can be prevented. A technology related to risk estimation has been proposed (see Patent Literature 1). For example, in Patent Literature 1, risk is expressed as a value based on the possibility that a user encounters a predetermined contingency.
Patent Literature 1: Japanese Patent Application Laid-Open No. 2021-99701
In the above technology, risk is represented by a value. The technology therefore does not estimate what the content of the risk is, and its estimation accuracy cannot be said to be high.
An object of the present disclosure is to perform highly accurate estimation.
An information processing device according to one aspect of the present disclosure estimates risk content for a first moving body that is present on a road and has a first imaging device. The information processing device includes: an acquisition unit that acquires a learned model, map information, a first image generated by the first imaging device and showing the condition of the road, position information of the first moving body, and first information that is at least one of the speed and the acceleration of the first moving body; and an estimation unit that estimates the risk content based on the learned model, the map information, the first image, the position information of the first moving body, and the first information.
According to the present disclosure, highly accurate estimation can be performed.
FIG. 1 is a diagram showing an estimation system according to Embodiment 1.
FIG. 2 is a diagram showing the hardware of the information processing device according to Embodiment 1.
FIG. 3 is a block diagram showing the functions of the information processing device according to Embodiment 1.
FIG. 4 is a flowchart showing an example of the processing executed by the information processing device according to Embodiment 1.
FIG. 5 is a diagram showing an estimation system according to Embodiment 2.
FIG. 6 is a block diagram showing the functions of the information processing device according to Embodiment 2.
FIG. 7 is a flowchart showing an example of the processing executed by the information processing device according to Embodiment 2.
FIG. 8 is a diagram showing an estimation system according to Embodiment 3.
FIG. 9 is a block diagram showing the functions of the information processing device according to Embodiment 3.
FIG. 10 is a flowchart showing an example of the processing executed by the information processing device according to Embodiment 3.
FIG. 11 is a diagram showing an estimation system according to Embodiment 4.
FIG. 12 is a block diagram showing the functions of the information processing device according to Embodiment 4.
FIG. 13 is a flowchart showing an example of the processing executed by the information processing device according to Embodiment 4.
FIG. 14 is a block diagram showing the functions of the drive recorder according to Embodiment 5.
FIG. 15 is a diagram showing an example of the process of generating a learned model.
Embodiments will be described below with reference to the drawings. The following embodiments are merely examples, and various modifications are possible within the scope of the present disclosure.
Embodiment 1.
FIG. 1 is a diagram showing an estimation system according to Embodiment 1. The estimation system includes the information processing device 100 and the drive recorder 201.
The information processing device 100 and the drive recorder 201 are connected via a network. Note that the network is a wired network or a wireless network.
The information processing device 100 is a device that executes an estimation method. For example, the information processing device 100 is a server.
FIG. 1 shows a car 200 present on a road. The car 200 may be a car capable of automatic driving. The car 200 may also be an ADAS (Advanced Driver-Assistance Systems) vehicle. Furthermore, the car 200 may be a PMV (personal mobility vehicle) or an AMR (Autonomous Mobile Robot). The car 200 is also called a first moving body.
The car 200 has the drive recorder 201. This covers both the case where the drive recorder 201 is installed in the car 200 and the case where the drive recorder 201 is externally attached to the car 200. The drive recorder 201 is also called a first imaging device.
Here, the information processing device 100 may be a device mounted on the car 200.
The drive recorder 201 captures images of road conditions. For example, the drive recorder 201 captures the road conditions in the range 202. The drive recorder 201 may also capture the conditions of the road behind the car. Through this imaging, the drive recorder 201 generates an image showing the road conditions. The image is acquired by the information processing device 100.
The information processing device 100 uses the image to estimate risk content for the car 200. The information processing device 100 will be described in detail below.
Next, the hardware of the information processing device 100 will be described.
FIG. 2 is a diagram showing the hardware of the information processing device according to Embodiment 1. The information processing device 100 has a processor 101, a volatile storage device 102, and a non-volatile storage device 103.
The processor 101 controls the information processing device 100 as a whole. For example, the processor 101 is a CPU (Central Processing Unit), an FPGA (Field Programmable Gate Array), or the like. The processor 101 may be a multiprocessor. The information processing device 100 may also have a processing circuit.
The volatile storage device 102 is the main storage device of the information processing device 100. For example, the volatile storage device 102 is a RAM (Random Access Memory). The non-volatile storage device 103 is an auxiliary storage device of the information processing device 100. For example, the non-volatile storage device 103 is an HDD (Hard Disk Drive) or an SSD (Solid State Drive).
Next, the functions of the information processing device 100 will be described.
FIG. 3 is a block diagram showing the functions of the information processing device according to Embodiment 1. The information processing device 100 has a storage unit 110, an acquisition unit 120, an estimation unit 130, and an output unit 140.
The storage unit 110 may be implemented as a storage area secured in the volatile storage device 102 or the non-volatile storage device 103.
A part or all of the acquisition unit 120, the estimation unit 130, and the output unit 140 may be realized by a processing circuit, or may be implemented as modules of a program executed by the processor 101. For example, the program executed by the processor 101 is also called an estimation program. For example, the estimation program is recorded on a recording medium.
The storage unit 110 may store the learned model 111 and the map information 112.
The acquisition unit 120 acquires the learned model 111. For example, the acquisition unit 120 acquires the learned model 111 from the storage unit 110, or from an external device. Note that illustration of the external device is omitted.
The acquisition unit 120 acquires the map information 112. For example, the acquisition unit 120 acquires the map information 112 from the storage unit 110, or from an external device.
The acquisition unit 120 acquires the image generated by the drive recorder 201 and showing the road conditions. This may also be expressed as the acquisition unit 120 acquiring the image obtained by the imaging of the drive recorder 201. For example, the acquisition unit 120 acquires the image from the drive recorder 201. The image is also called a first image.
The acquisition unit 120 acquires the position information of the car 200. For example, the acquisition unit 120 acquires the position information from the drive recorder 201, or from a communication device of the car 200. Note that, for example, the position information is obtained by a GPS (Global Positioning System) of the car 200.
The acquisition unit 120 acquires first information, which is at least one of the speed and the acceleration of the car 200. For example, the acquisition unit 120 acquires the first information from the drive recorder 201, or from the communication device of the car 200. Note that, for example, the speed is measured by a speed sensor of the car 200, and the acceleration is measured by an IMU (Inertial Measurement Unit) sensor of the car 200.
The estimation unit 130 estimates risk content based on the learned model 111, the map information 112, the image showing the road conditions, the position information of the car 200, and the first information. For example, the estimation unit 130 estimates risk content based on the learned model 111, the map information 112, the image showing the road conditions, the position information of the car 200, the speed of the car 200, and the acceleration of the car 200. Specifically, the estimation unit 130 inputs the map information 112, the image, the position information of the car 200, the speed of the car 200, and the acceleration of the car 200 to the learned model 111, and the learned model 111 outputs the risk content.
Note that, for example, the learned model 111 estimates the risk content based on the result of identifying objects included in the image, the result of computing the distance between such an object and the car 200, and the like.
For example, the risk content is content specified from one or more risk factors. Specifically, the risk content is at least one of an overt risk and a latent risk. For example, an overt risk is the risk of the car 200 colliding with a person included in the image who suddenly runs out. A latent risk is a risk whose existence is not currently confirmed but that could materialize within a few seconds. The risk content may also be at least one of a static latent risk and a dynamic overt risk. A static latent risk arises from the relationship between the car 200 and a stationary object such as a building; for example, the risk of the car 200 colliding with a person running out of a building. A dynamic overt risk arises from the relationship between the car 200 and a moving object such as a bus; for example, the risk of the car 200 colliding with a person getting off a bus.
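One possible way to represent this taxonomy in code is a small enumeration paired with a human-readable description, sketched below; the type names and values are assumptions made for illustration, not part of the disclosure.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative categories for the risk content described above.
class RiskCategory(Enum):
    STATIC_LATENT = "static latent"  # e.g. a person may run out of a building
    DYNAMIC_OVERT = "dynamic overt"  # e.g. a person getting off a bus

@dataclass
class RiskContent:
    category: RiskCategory  # which kind of risk this is
    description: str        # the estimated risk content itself

risk = RiskContent(
    RiskCategory.DYNAMIC_OVERT,
    "There is a risk that the car 200 will collide with a person getting off the bus.",
)
print(risk.category.value)
```

Representing the category separately from the description makes it possible to distinguish, for example, overt risks that need an immediate response from latent risks that only warrant caution.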
Here, the acquisition unit 120 may acquire a plurality of images. That is, the acquisition unit 120 may acquire a video, and the estimation unit 130 may estimate the risk content using the video.
In the following description, it is assumed that both the speed and the acceleration of the car 200 are acquired.
The output unit 140 outputs the estimation result. For example, when the information processing device 100 is a server, the output unit 140 outputs the estimation result to the communication device of the car 200, and the car 200 outputs the estimation result to a display of the car 200. Alternatively, for example, when the information processing device 100 is mounted on the car 200, the output unit 140 outputs the estimation result to the display of the car 200. Specifically, the display of the car 200 shows "There is a risk that the car 200 will collide with a person getting off the bus." The estimation result may also be output by voice.
Through the output processing of the output unit 140, a user in the car 200 can thus recognize the risk content. Furthermore, when the car 200 is capable of automatic driving, the car 200 can drive based on the estimation result.
Next, the processing executed by the information processing device 100 will be described using a flowchart.
FIG. 4 is a flowchart showing an example of the processing executed by the information processing device according to Embodiment 1.
(Step S11) The acquisition unit 120 acquires an image showing the road conditions.
(Step S12) The acquisition unit 120 acquires the position information of the car 200.
(Step S13) The acquisition unit 120 acquires the speed and the acceleration of the car 200.
(Step S14) The acquisition unit 120 acquires the learned model 111 and the map information 112.
(Step S15) The estimation unit 130 estimates the risk content based on the learned model 111, the map information 112, the image, the position information of the car 200, the speed of the car 200, and the acceleration of the car 200.
(Step S16) The output unit 140 outputs the estimation result.
The processing of FIG. 4 may be executed each time the image, the position information of the car 200, the speed of the car 200, and the acceleration of the car 200 are acquired. Note that the order in which steps S11 to S14 are executed is not limited to the order shown in FIG. 4.
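The FIG. 4 cycle, steps S11 to S14 gathering the inputs, S15 running the learned model, and S16 handing the result to the output unit, can be sketched as below. The function names and the stand-in model are illustrative assumptions, not part of the disclosure.

```python
# One estimation cycle of the FIG. 4 flow. The getters stand in for the
# acquisition unit 120; the model stands in for the learned model 111.
def run_estimation_cycle(model, map_info, get_image, get_position, get_motion):
    image = get_image()                 # S11: image showing road conditions
    position = get_position()           # S12: position information of the car
    speed, acceleration = get_motion()  # S13: speed and acceleration
    # S14: the learned model and map information are passed in as arguments.
    risk = model(map_info, image, position, speed, acceleration)  # S15
    return risk  # S16: handed to the output unit for display or voice output

risk = run_estimation_cycle(
    model=lambda *inputs: "overt risk: pedestrian ahead",
    map_info={"road": "school zone"},
    get_image=lambda: "frame-001",
    get_position=lambda: (35.68, 139.76),
    get_motion=lambda: (40.0, 0.1),
)
print(risk)
```

Calling `run_estimation_cycle` once per acquired set of inputs mirrors the note that the processing may be executed each time new data arrives.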
According to Embodiment 1, the information processing device 100 estimates the risk content by using the learned model 111. Therefore, the information processing device 100 can perform highly accurate estimation.
The information processing device 100 can estimate the risk content using at least one of the speed and the acceleration of the car 200. However, by using both the speed and the acceleration, the information processing device 100 can estimate the risk content with higher accuracy.
Embodiment 2.
Next, Embodiment 2 will be described. Embodiment 2 mainly describes matters different from Embodiment 1, and descriptions of matters common to Embodiment 1 are omitted.
Embodiment 1 described the case where estimation is performed based on information obtained from the car 200. Embodiment 2 describes a case where estimation is performed based on information obtained from the car 200 and information obtained from a vehicle other than the car 200.
FIG. 5 is a diagram showing an estimation system according to Embodiment 2. The estimation system includes the information processing device 100a and the drive recorder 201.
FIG. 5 shows the car 200 and a car 300 present on a road. The car 300 may be an ADAS vehicle, a PMV, or an AMR. The car 300 has a drive recorder. This covers both the case where the drive recorder is installed in the car 300 and the case where it is externally attached to the car 300. Here, this drive recorder is also called a second imaging device.
The drive recorder of the car 300 captures images of road conditions. For example, it captures the road conditions in the range 301, thereby generating an image showing the road conditions. The image is acquired by the information processing device 100a.
FIG. 5 shows the car 300 as the only vehicle other than the car 200. However, the number of vehicles other than the car 200 may be two or more. Here, a vehicle other than the car 200 is also called a second moving body. For example, the car 300 is also called a second moving body.
Next, the functions of the information processing device 100a will be described.
FIG. 6 is a block diagram showing the functions of the information processing device according to Embodiment 2. Components in FIG. 6 that are the same as those shown in FIG. 3 are assigned the same reference numerals as in FIG. 3. The information processing device 100a has a storage unit 110a, an acquisition unit 120a, and an estimation unit 130a.
The storage unit 110a may store the learned model 111a.
The acquisition unit 120a has almost the same functions as the acquisition unit 120. Therefore, descriptions of the shared functions are omitted.
The acquisition unit 120a acquires the learned model 111a, for example from the storage unit 110a or from an external device (not shown).
The acquisition unit 120a acquires an image showing road conditions generated by the drive recorder of each of one or more second moving bodies; equivalently, it acquires the image obtained by that drive recorder capturing the road. For example, the acquisition unit 120a acquires an image showing road conditions generated by the drive recorder of the car 300. The acquisition unit 120a may acquire the image directly from the communication device of the car 300, or indirectly from the communication device of the car 300 via an external device. The image acquired in this way is also referred to as a second image.
The acquisition unit 120a acquires position information of the one or more second moving bodies, for example the position information of the car 300. The acquisition unit 120a may acquire the position information directly from the communication device of the car 300, or indirectly via an external device.
The acquisition unit 120a acquires second information, which is at least one of the speed and the acceleration of the one or more second moving bodies, for example of the car 300. The acquisition unit 120a may acquire the second information directly from the communication device of the car 300, or indirectly via an external device.
The estimation unit 130a estimates the risk content based on the learned model 111a, the map information 112, the image generated by the drive recorder 201, the position information of the car 200, the first information, the images generated by the drive recorders of the one or more second moving bodies, the position information of the one or more second moving bodies, and the second information. For example, the estimation unit 130a estimates the risk content based on the learned model 111a, the map information 112, the image generated by the drive recorder 201, the position information, speed, and acceleration of the car 200, the images generated by the drive recorders of the one or more second moving bodies, and the position information, speeds, and accelerations of the one or more second moving bodies.
In the following description, the one or more second moving bodies are represented by the car 300, and it is assumed that the speed and the acceleration of the one or more second moving bodies are acquired.
Next, processing executed by the information processing device 100a will be described using a flowchart.
FIG. 7 is a flowchart illustrating an example of the processing executed by the information processing device according to Embodiment 2.
(Step S21) The acquisition unit 120a acquires an image showing road conditions. The image is the one generated by the drive recorder 201.
(Step S22) The acquisition unit 120a acquires the position information of the car 200.
(Step S23) The acquisition unit 120a acquires the speed and acceleration of the car 200.
(Step S24) The acquisition unit 120a acquires an image showing road conditions, the position information of the car 300, the speed of the car 300, and the acceleration of the car 300. The image is the one generated by the drive recorder of the car 300.
(Step S25) The acquisition unit 120a acquires the learned model 111a and the map information 112.
(Step S26) The estimation unit 130a estimates the risk content based on the learned model 111a, the map information 112, the image generated by the drive recorder 201, the position information of the car 200, the speed of the car 200, the acceleration of the car 200, the image generated by the drive recorder of the car 300, the position information of the car 300, the speed of the car 300, and the acceleration of the car 300.
(Step S27) The output unit 140 outputs the estimation result.
Note that the order in which steps S21 to S25 are executed is not limited to the order shown in FIG. 7.
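The flow of steps S21 to S27 can be sketched in pseudocode form. The sketch below is purely illustrative: the record layout and the estimate_risk() function are hypothetical stand-ins (a toy distance-and-speed rule, not the actual learned model 111a), and the numeric positions are assumed one-dimensional for simplicity.

```python
# Hypothetical sketch of steps S21-S27 of Embodiment 2. estimate_risk()
# is a toy stand-in for the learned model 111a, not the actual model.

def estimate_risk(features):
    # Stand-in rule: flag a risk when the two vehicles are close and
    # the first vehicle is still moving fast.
    distance = abs(features["own_position"] - features["other_position"])
    if distance < 50 and features["own_speed"] > 10:
        return "risk: possible collision ahead"
    return "risk: none detected"

def run_estimation(own_record, other_record):
    # Steps S21-S24: collect the image, position, speed, and acceleration
    # from the drive recorder 201 and from the drive recorder of car 300.
    features = {
        "own_image": own_record["image"],
        "own_position": own_record["position"],
        "own_speed": own_record["speed"],
        "own_accel": own_record["accel"],
        "other_image": other_record["image"],
        "other_position": other_record["position"],
        "other_speed": other_record["speed"],
        "other_accel": other_record["accel"],
    }
    # Step S26: estimate the risk content; step S27: output the result.
    return estimate_risk(features)

result = run_estimation(
    {"image": "img200.jpg", "position": 100, "speed": 15, "accel": 0.5},
    {"image": "img300.jpg", "position": 130, "speed": 5, "accel": -0.2},
)
print(result)
```

As in the flowchart, the acquisition order of the inputs is immaterial; only the final feature set passed to the estimator matters.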
Here, the acquisition unit 120a may acquire a plurality of images generated by the drive recorder of the car 300, that is, a video generated by that drive recorder. The estimation unit 130a may estimate the risk content using the video.
According to Embodiment 2, the information processing device 100a estimates the risk content based on the information obtained from the drive recorder 201 and the information obtained from the drive recorder of the car 300, for example based on the information obtained from the range 202 and the range 301 in FIG. 5. Therefore, the information processing device 100a can estimate the risk content with higher accuracy than in Embodiment 1.
Embodiment 3.
Next, Embodiment 3 will be described, focusing mainly on matters that differ from Embodiment 1; descriptions of matters common to Embodiment 1 are omitted.
In Embodiment 1, the case where the estimation is performed based on information obtained from the car 200 was described. Embodiment 3 describes a case where the estimation is performed based on the information obtained from the car 200 and information obtained from a camera present on the road.
FIG. 8 is a diagram showing an estimation system according to Embodiment 3. The estimation system includes the information processing device 100b and the drive recorder 201.
FIG. 8 shows the car 200 on the road, together with a camera 400 present on the road; the camera 400 may also be described as being installed on the road. For example, the camera 400 is a surveillance camera or a live camera. The camera 400 captures images of the road conditions, for example of the road within the range 401. The image generated by the camera 400 is acquired by the information processing device 100b. The camera 400 may also store its own position information.
FIG. 8 shows one camera. The number of cameras may be two or more.
Next, functions of the information processing device 100b will be described.
FIG. 9 is a block diagram showing the functions of the information processing device according to Embodiment 3. Components in FIG. 9 that are the same as those shown in FIG. 3 are given the same reference numerals as in FIG. 3. The information processing device 100b has a storage unit 110b, an acquisition unit 120b, and an estimation unit 130b.
The storage unit 110b may store the learned model 111b.
The acquisition unit 120b has almost the same functions as the acquisition unit 120. Therefore, descriptions of the shared functions are omitted.
The acquisition unit 120b acquires the learned model 111b, for example from the storage unit 110b or from an external device (not shown).
The acquisition unit 120b acquires images showing road conditions generated by one or more cameras; equivalently, it acquires the images obtained by those cameras capturing the road. For example, the acquisition unit 120b acquires an image showing road conditions generated by the camera 400, either directly from the camera 400 or indirectly from the camera 400 via an external device. The image acquired in this way is also referred to as a third image.
The acquisition unit 120b may acquire position information of the one or more cameras, for example the position information of the camera 400, either directly from the camera 400 or indirectly.
Here, the one or more cameras may not store position information; for example, the camera 400 may not store its own position information. In such a case, the acquisition unit 120b cannot acquire the position information of the cameras, so the estimation unit 130b estimates their positions. For example, the estimation unit 130b estimates the position of the camera 400 on the map indicated by the map information 112, based on the image generated by the camera 400 and the image generated by the drive recorder 201. In this way, the information processing device 100b can obtain the position information of the one or more cameras.
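The fallback described above can be sketched as follows. This is a minimal illustrative sketch, not the actual localization method: match_offset() is a hypothetical stand-in for whatever image matching would relate the camera view to the drive-recorder view (a real system would match visual features against the map information 112), and positions are assumed to be simple 2-D map coordinates.

```python
# Minimal sketch: if the camera stores no position, derive one from the
# known position of car 200 plus a relative offset assumed to come from
# matching the camera image against the drive-recorder image.

def match_offset(camera_image, recorder_image):
    # Hypothetical stand-in for image matching; returns a fixed shift
    # here so the sketch stays self-contained.
    return (5.0, -3.0)

def resolve_camera_position(stored_position, vehicle_position,
                            camera_image, recorder_image):
    if stored_position is not None:
        return stored_position  # camera 400 reported its own position
    dx, dy = match_offset(camera_image, recorder_image)
    x, y = vehicle_position
    return (x + dx, y + dy)  # estimated position on the map

print(resolve_camera_position(None, (10.0, 20.0), "cam.jpg", "rec.jpg"))
```

Either branch yields a usable camera position, which is what allows the estimation unit 130b to use camera images even from unregistered cameras.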
The estimation unit 130b estimates the risk content based on the learned model 111b, the map information 112, the image generated by the drive recorder 201, the position information of the car 200, the first information, the images generated by the one or more cameras, and the position information of the one or more cameras.
In the following description, the one or more cameras are represented by the camera 400, and it is assumed that the camera 400 stores its own position information.
Next, processing executed by the information processing device 100b will be described using a flowchart.
FIG. 10 is a flowchart illustrating an example of the processing executed by the information processing device according to Embodiment 3.
(Step S31) The acquisition unit 120b acquires an image showing road conditions. The image is the one generated by the drive recorder 201.
(Step S32) The acquisition unit 120b acquires the position information of the car 200.
(Step S33) The acquisition unit 120b acquires the speed and acceleration of the car 200.
(Step S34) The acquisition unit 120b acquires an image showing road conditions and the position information of the camera 400. The image is the one generated by the camera 400.
(Step S35) The acquisition unit 120b acquires the learned model 111b and the map information 112.
(Step S36) The estimation unit 130b estimates the risk content based on the learned model 111b, the map information 112, the image generated by the drive recorder 201, the position information of the car 200, the speed of the car 200, the acceleration of the car 200, the image generated by the camera 400, and the position information of the camera 400.
(Step S37) The output unit 140 outputs the estimation result.
Note that the order in which steps S31 to S35 are executed is not limited to the order shown in FIG. 10.
Here, the acquisition unit 120b may acquire a plurality of images generated by the camera 400, that is, a video generated by the camera 400. The estimation unit 130b may estimate the risk content using the video.
According to Embodiment 3, the information processing device 100b estimates the risk content based on the information obtained from the drive recorder 201 and the information obtained from the camera 400, for example based on the information obtained from the range 202 and the range 401 in FIG. 8. Therefore, the information processing device 100b can estimate the risk content with higher accuracy than in Embodiment 1.
Embodiment 4.
Next, Embodiment 4 will be described, focusing mainly on matters that differ from Embodiments 1 to 3; descriptions of matters common to Embodiments 1 to 3 are omitted. Embodiment 4 describes a case in which Embodiments 1 to 3 are combined.
FIG. 11 is a diagram showing an estimation system according to Embodiment 4. The estimation system includes the information processing device 100c and the drive recorder 201.
FIG. 11 shows the car 200 and the car 300 on the road. The drive recorder of the car 300 captures images of the road conditions, for example of the road within the range 301. An image showing the road conditions is thereby generated and acquired by the information processing device 100c.
FIG. 11 also shows the camera 400 present on the road. The camera 400 captures images of the road conditions, for example of the road within the range 401. The image generated by the camera 400 is acquired by the information processing device 100c.
Next, functions of the information processing device 100c will be described.
FIG. 12 is a block diagram showing the functions of the information processing device according to Embodiment 4. Components in FIG. 12 that are the same as those shown in FIG. 3 are given the same reference numerals as in FIG. 3. The information processing device 100c has a storage unit 110c, an acquisition unit 120c, and an estimation unit 130c.
The storage unit 110c may store the learned model 111c.
The acquisition unit 120c has almost the same functions as the acquisition unit 120. Therefore, descriptions of the shared functions are omitted. The acquisition unit 120c acquires the learned model 111c.
The acquisition unit 120c acquires images showing road conditions generated by the drive recorders of the one or more second moving bodies, for example the image showing road conditions generated by the drive recorder of the car 300.
The acquisition unit 120c acquires the position information of the one or more second moving bodies, for example the position information of the car 300.
The acquisition unit 120c acquires the second information, for example the speed and acceleration of the car 300.
The acquisition unit 120c acquires images showing road conditions generated by the one or more cameras, for example the image showing road conditions generated by the camera 400.
The acquisition unit 120c may acquire the position information of the one or more cameras, for example the position information of the camera 400. Alternatively, the estimation unit 130c may estimate the positions of the one or more cameras.
The estimation unit 130c estimates the risk content based on the learned model 111c, the map information 112, the image generated by the drive recorder 201, the position information of the car 200, the first information, the images generated by the drive recorders of the one or more second moving bodies, the position information of the one or more second moving bodies, the second information, the images generated by the one or more cameras, and the position information of the one or more cameras.
In the following description, the one or more second moving bodies are represented by the car 300, the one or more cameras are represented by the camera 400, and it is assumed that the camera 400 stores its own position information.
Next, processing executed by the information processing device 100c will be described using a flowchart.
FIG. 13 is a flowchart illustrating an example of the processing executed by the information processing device according to Embodiment 4.
(Step S41) The acquisition unit 120c acquires an image showing road conditions. The image is the one generated by the drive recorder 201.
(Step S42) The acquisition unit 120c acquires the position information of the car 200.
(Step S43) The acquisition unit 120c acquires the speed and acceleration of the car 200.
(Step S44) The acquisition unit 120c acquires an image showing road conditions, the position information of the car 300, the speed of the car 300, and the acceleration of the car 300. The image is the one generated by the drive recorder of the car 300.
(Step S45) The acquisition unit 120c acquires an image showing road conditions and the position information of the camera 400. The image is the one generated by the camera 400.
(Step S46) The acquisition unit 120c acquires the learned model 111c and the map information 112.
(Step S47) The estimation unit 130c estimates the risk content based on the learned model 111c, the map information 112, the image generated by the drive recorder 201, the position information of the car 200, the speed of the car 200, the acceleration of the car 200, the image generated by the drive recorder of the car 300, the position information of the car 300, the speed of the car 300, the acceleration of the car 300, the image generated by the camera 400, and the position information of the camera 400.
(Step S48) The output unit 140 outputs the estimation result.
Note that the order in which steps S41 to S46 are executed is not limited to the order shown in FIG. 13.
According to Embodiment 4, the information processing device 100c estimates the risk content based on the information obtained from the drive recorder 201, the information obtained from the drive recorder of the car 300, and the information obtained from the camera 400, for example based on the information obtained from the range 202, the range 301, and the range 401 in FIG. 11. Therefore, the information processing device 100c can estimate the risk content with higher accuracy than in Embodiments 1 to 3.
Embodiment 5.
Next, Embodiment 5 will be described, focusing mainly on matters that differ from Embodiments 1 to 4; descriptions of matters common to Embodiments 1 to 4 are omitted.
FIG. 14 is a block diagram showing the functions of the drive recorder of Embodiment 5. The drive recorder 500 is also an information processing device. The drive recorder 500 has a processor, a volatile storage device, and a non-volatile storage device, and may have a processing circuit.
Here, the drive recorder 500 may be regarded as the drive recorder 201; the difference is that the drive recorder 500 does not transmit images to an information processing device (for example, a server). The drive recorder 500 has the same functions as the information processing devices 100, 100a, 100b, and 100c. Therefore, the drive recorder 500 can estimate the risk content using the images it generates itself.
The drive recorder 500 has a storage unit 510, an acquisition unit 520, an estimation unit 530, and an output unit 540.
The storage unit 510 may be implemented as a storage area secured in a volatile storage device or a non-volatile storage device of the drive recorder 500 .
A part or all of the acquisition unit 520, the estimation unit 530, and the output unit 540 may be realized by a processing circuit included in the drive recorder 500, or as modules of a program executed by a processor included in the drive recorder 500.
The storage unit 510 may store a learned model 511 and map information 512. The learned model 511 can perform estimation in the same way as the learned models 111, 111a, 111b, and 111c. The map information 512 is the same as the map information 112.
The acquisition unit 520 has the same functions as the acquisition units 120, 120a, 120b, and 120c. For example, when the car 200 has the drive recorder 500, the acquisition unit 520 acquires the image generated by the drive recorder 500, the position information of the car 200, the speed of the car 200, and the acceleration of the car 200. The acquisition unit 520 can also acquire the image generated by the drive recorder of the car 300, the position information of the car 300, the speed of the car 300, and the acceleration of the car 300, as well as the image generated by the camera 400 and the position information of the camera 400. Because the acquisition unit 520 has the same functions as the acquisition units 120, 120a, 120b, and 120c, a detailed description of its functions is omitted.
The estimation unit 530 has the same functions as the estimation units 130, 130a, 130b, and 130c. Therefore, a detailed description of the functions of the estimation unit 530 is omitted.
The output unit 540 has the same functions as the output unit 140. Therefore, a detailed description of the functions of the output unit 540 is omitted.
According to Embodiment 5, the drive recorder 500 has the same functions as the information processing devices 100, 100a, 100b, and 100c. Therefore, the drive recorder 500 provides the same effects as Embodiments 1 to 4.
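The on-device configuration above (estimation inside the drive recorder 500, with no image transmission to a server) can be sketched as a simple local loop. Everything in this sketch is a hypothetical stand-in: capture_frame() for the imaging function, estimate_locally() for the learned model 511 (a toy deceleration rule), and the loop for the capture-estimate-output cycle.

```python
# Illustrative sketch of Embodiment 5: the drive recorder 500 estimates
# risk content locally and outputs the result without uploading images.

def capture_frame(step):
    # Hypothetical stand-in for the imaging function of the recorder.
    return {"image": f"frame{step}.jpg", "speed": 30 - 5 * step}

def estimate_locally(frame):
    # Toy stand-in for the learned model 511: flag low speed after a
    # hard deceleration as a risk.
    return "risk: sudden braking" if frame["speed"] < 20 else "risk: none"

def run_on_device(steps):
    results = []
    for step in range(steps):
        frame = capture_frame(step)
        # Estimation happens on the device; no image leaves the recorder.
        results.append(estimate_locally(frame))
    return results

print(run_on_device(4))
```

The point of the sketch is the data flow, not the rule itself: the image stays inside the device and only the estimation result is output.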
Next, the process of generating a learned model will be briefly described.
FIG. 15 is a diagram illustrating an example of the process of generating a learned model. FIG. 15 shows an image 601, an image 602, position information 603, map information 604, speed information 605, and acceleration information 606. The image 601 is an image generated by the drive recorder of the vehicle to which the estimation result is output (hereinafter, the target vehicle). The image 602 is an image generated by the drive recorder of a vehicle other than the target vehicle, or it may be an image generated by a surveillance camera. The position information 603 is the position information of the target vehicle and may include the position information of vehicles other than the target vehicle and of surveillance cameras. The map information 604 is information indicating a map. The speed information 605 indicates the speed of the target vehicle and may include the speeds of vehicles other than the target vehicle. The acceleration information 606 indicates the acceleration of the target vehicle and may include the accelerations of vehicles other than the target vehicle.
Scene classification is performed based on the image 601, the image 602, the position information 603, the map information 604, the speed information 605, and the acceleration information 606. The scene classification may be performed by a user or by a machine, and the content of the scene classification may be regarded as risk content. The image 602 need not be used in the scene classification. Further, at least one of the speed information 605 and the acceleration information 606 may be used in the scene classification.
The risk content is assigned to the information obtained by the scene classification; in other words, labeling is performed. The labeled information is used as training data in machine learning, and a trained model is generated by the machine learning.
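As a rough illustrative sketch only (not part of the patent disclosure), the pipeline described above — collecting scene inputs, assigning risk-content labels, and training a model on the labeled data — might look as follows. The feature layout, the class and label names, and the toy nearest-centroid classifier are all assumptions for illustration; the patent does not specify a learning algorithm.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Scene:
    # Hypothetical flattened features; the patent's actual inputs are the images
    # 601/602, position information 603, map information 604, speed information
    # 605, and acceleration information 606.
    speed_kmh: float
    accel_ms2: float
    pedestrian_density: float  # stand-in for information extracted from images

# Step 1: scene classification plus labeling yields supervised training data.
labeled_scenes = [
    (Scene(60.0, 0.2, 0.0), "low_risk"),
    (Scene(55.0, 0.1, 0.1), "low_risk"),
    (Scene(30.0, -3.5, 0.8), "pedestrian_rush_out"),
    (Scene(25.0, -4.0, 0.9), "pedestrian_rush_out"),
]

def features(s: Scene) -> list[float]:
    return [s.speed_kmh, s.accel_ms2, s.pedestrian_density]

# Step 2: "machine learning" here is a toy nearest-centroid classifier:
# one centroid (mean feature vector) per risk-content label.
def train(data):
    by_label = {}
    for scene, label in data:
        by_label.setdefault(label, []).append(features(scene))
    return {label: [mean(col) for col in zip(*vecs)]
            for label, vecs in by_label.items()}

def estimate_risk(model, scene: Scene) -> str:
    f = features(scene)
    return min(model, key=lambda lbl: sum((a - b) ** 2
                                          for a, b in zip(model[lbl], f)))

model = train(labeled_scenes)
print(estimate_risk(model, Scene(28.0, -3.8, 0.7)))  # → pedestrian_rush_out
```

A real implementation would replace the hand-written features with representations learned from the drive-recorder and surveillance-camera images, but the train-then-estimate flow is the same.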
The features of the embodiments described above can be combined with one another as appropriate.
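As a purely illustrative sketch (again, not part of the disclosure), the acquisition-unit/estimation-unit structure recited in the claims below might be organized as follows. All class names, method names, field types, and the dummy model are assumptions introduced only to make the data flow concrete.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AcquiredInputs:
    trained_model: Callable[..., str]   # stand-in for the trained model
    map_info: dict                      # map information
    first_image: bytes                  # road image from the first imaging device
    position: tuple[float, float]       # position of the first moving body
    first_info: dict                    # at least one of speed / acceleration

class AcquisitionUnit:
    def acquire(self, sources: dict) -> AcquiredInputs:
        # In the device this would read from storage, the drive recorder,
        # a GNSS receiver, and so on; here it just bundles supplied values.
        return AcquiredInputs(**sources)

class EstimationUnit:
    def estimate(self, inputs: AcquiredInputs) -> str:
        # Estimate the risk content based on every acquired input.
        return inputs.trained_model(inputs.map_info, inputs.first_image,
                                    inputs.position, inputs.first_info)

# Usage with a dummy model that flags hard braking as a rush-out risk:
dummy_model = lambda m, img, pos, info: (
    "pedestrian_rush_out" if info.get("acceleration", 0) < -3 else "low_risk")
sources = {"trained_model": dummy_model, "map_info": {}, "first_image": b"",
           "position": (35.68, 139.76),
           "first_info": {"speed": 28, "acceleration": -3.8}}
risk = EstimationUnit().estimate(AcquisitionUnit().acquire(sources))
print(risk)  # → pedestrian_rush_out
```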
100, 100a, 100b, 100c information processing device, 101 processor, 102 volatile storage device, 103 non-volatile storage device, 110, 110a, 110b, 110c storage unit, 111, 111a, 111b, 111c trained model, 112 map information, 120, 120a, 120b, 120c acquisition unit, 130, 130a, 130b, 130c estimation unit, 140 output unit, 200 vehicle, 201 drive recorder, 202 range, 300 vehicle, 301 range, 400 camera, 401 range, 500 drive recorder, 510 storage unit, 511 trained model, 512 map information, 520 acquisition unit, 530 estimation unit, 540 output unit, 601 image, 602 image, 603 position information, 604 map information, 605 speed information, 606 acceleration information.

Claims (8)

  1.  An information processing device for estimating risk content for a first moving body that is present on a road and has a first imaging device, the information processing device comprising:
     an acquisition unit that acquires a trained model, map information, a first image that is generated by the first imaging device and shows a condition of the road, position information of the first moving body, and first information that is at least one of speed information and acceleration information of the first moving body; and
     an estimation unit that estimates the risk content based on the trained model, the map information, the first image, the position information of the first moving body, and the first information.
  2.  The information processing device according to claim 1, wherein
     the acquisition unit acquires position information of one or more second moving bodies present on the road, a second image that is generated by a second imaging device of the second moving body and shows a condition of the road, and second information that is at least one of speed information and acceleration information of the second moving body, and
     the estimation unit estimates the risk content based on the trained model, the map information, the first image, the position information of the first moving body, the first information, the second image, the position information of the second moving body, and the second information.
  3.  The information processing device according to claim 1 or 2, wherein
     the acquisition unit acquires a third image that is generated by one or more cameras present on the road and shows a condition of the road,
     the estimation unit estimates the risk content based on the trained model, the map information, the first image, the position information of the first moving body, the first information, the third image, and position information of the cameras, and
     the position information of the cameras is position information acquired from the cameras or position information obtained by estimation.
  4.  The information processing device according to any one of claims 1 to 3, further comprising an output unit that outputs an estimation result.
  5.  The information processing device according to any one of claims 1 to 4, wherein the information processing device is mounted on the first moving body.
  6.  An estimation system comprising:
     a first imaging device of a first moving body present on a road; and
     an information processing device that estimates risk content for the first moving body,
     wherein the information processing device includes:
     an acquisition unit that acquires a trained model, map information, a first image that is generated by the first imaging device and shows a condition of the road, position information of the first moving body, and first information that is at least one of speed information and acceleration information of the first moving body; and
     an estimation unit that estimates the risk content based on the trained model, the map information, the first image, the position information of the first moving body, and the first information.
  7.  An estimation method performed by an information processing device that estimates risk content for a first moving body that is present on a road and has a first imaging device, the estimation method comprising:
     acquiring a trained model, map information, a first image that is generated by the first imaging device and shows a condition of the road, position information of the first moving body, and first information that is at least one of speed information and acceleration information of the first moving body; and
     estimating the risk content based on the trained model, the map information, the first image, the position information of the first moving body, and the first information.
  8.  An estimation program that causes an information processing device that estimates risk content for a first moving body that is present on a road and has a first imaging device to execute a process comprising:
     acquiring a trained model, map information, a first image that is generated by the first imaging device and shows a condition of the road, position information of the first moving body, and first information that is at least one of speed information and acceleration information of the first moving body; and
     estimating the risk content based on the trained model, the map information, the first image, the position information of the first moving body, and the first information.
PCT/JP2022/001093 2022-01-14 2022-01-14 Information processing device, estimation system, estimation method, and estimation program WO2023135738A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2022535944A JPWO2023135738A1 (en) 2022-01-14 2022-01-14
PCT/JP2022/001093 WO2023135738A1 (en) 2022-01-14 2022-01-14 Information processing device, estimation system, estimation method, and estimation program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/001093 WO2023135738A1 (en) 2022-01-14 2022-01-14 Information processing device, estimation system, estimation method, and estimation program

Publications (1)

Publication Number Publication Date
WO2023135738A1 true WO2023135738A1 (en) 2023-07-20

Family

ID=87278716

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/001093 WO2023135738A1 (en) 2022-01-14 2022-01-14 Information processing device, estimation system, estimation method, and estimation program

Country Status (2)

Country Link
JP (1) JPWO2023135738A1 (en)
WO (1) WO2023135738A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014154005A (en) * 2013-02-12 2014-08-25 Fujifilm Corp Danger information provision method, device, and program
JP2018180983A (en) * 2017-04-14 2018-11-15 ソニー株式会社 Information processing device, information processing method, and program
JP2019214318A (en) * 2018-06-13 2019-12-19 本田技研工業株式会社 Vehicle control device, vehicle control method and program
JP2020046882A (en) * 2018-09-18 2020-03-26 株式会社東芝 Information processing device, vehicle control device, and moving body control method
JP2021077071A (en) * 2019-11-08 2021-05-20 アイシン・エィ・ダブリュ株式会社 Risk area display system and risk area display program

Also Published As

Publication number Publication date
JPWO2023135738A1 (en) 2023-07-20


Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 2022535944

Country of ref document: JP

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22920263

Country of ref document: EP

Kind code of ref document: A1